Article summary
Recently, I’ve been on two solo software development projects, inheriting legacy code on each. One was a Software-as-a-Service (SaaS) project and the other was an Internet of Things (IoT) project, and both presented a steep learning curve for different reasons.
The SaaS project used PHP, a language I had never been exposed to before, and domain knowledge that would take repeated effort to understand. The IoT project, on the other hand, used an elaborate cloud-native ecosystem to manage clusters and machine learning. With my time split between supporting the SaaS project and spearheading the IoT project, I decided to take two different approaches: traditional coding for one and low code for the other.
IoT Project: Traditional Coding
Because this project was scaling quickly, I was navigating new challenges to keep the system reliable and accurate. That meant I needed the ability to do the following:
- Quickly debug any issues the on-site team was facing.
- Guarantee maintainable deployment patterns to ensure consistent version upgrades across all sites.
- Ensure the system’s output was accurate.
- Continue consolidating configuration for external services (machine learning/hardware) so it would not vary from site to site.
So I had to make a choice. Do I quickly onboard this project using Cursor? And should I build AI Agents to help me tackle all of this solo? Or do I go old-school?
You can already guess what I decided, and you’re almost right.
I have to give Cursor credit as a crucial learning tool. Without it, understanding the cloud-native ecosystem would have taken me twice as long, and I definitely was not gifted that time. Because most of my debugging required me to look at logs from pods, re-deploy to a cluster, and adjust configuration across multiple levels (code -> Kubernetes manifest -> cluster management tool), time to learn was a luxury. So there’s a caveat: I didn’t fully go old-school.
Why traditional coding?
If there’s anything I’ve learnt so far in my time with AI, it’s that it’s only as good as my prompts. As I said, this project encompassed cloud, cluster management, and integration with machine learning: layers I needed to understand, both in how they worked together and in what the consequences of a change would be. Besides, I was pushing code for on-site testing almost weekly. I could not afford to let Cursor introduce a mysterious bug and then suggest pointless solutions because my lack of understanding kept me from prompting it better. I concluded that I needed a deep level of understanding, and the only way to get there was through traditional coding.
With traditional coding, I was tinkering. I used tests to check my logic and wrote scripts to run real-world simulations. For deployment, I compared and tweaked each site’s configuration and saw directly how it affected accuracy or a customer’s experience. All of this direct experience let me debug faster because I had the whole system mapped out in my head.
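To give a flavor of that tinkering, here is a minimal sketch of the "unit tests plus simulation scripts" workflow. The function, readings, and numbers are illustrative stand-ins, not the project's real code:

```python
# Hypothetical sketch: pin down core logic with a test, then eyeball
# its real-world behavior with a quick simulation script.

def rolling_average(readings, window):
    """Core logic under test: smooth a list of noisy sensor readings."""
    if window <= 0 or window > len(readings):
        raise ValueError("window must be between 1 and len(readings)")
    return [
        sum(readings[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(readings))
    ]

# A unit test checks the logic on hand-computed values...
assert rolling_average([2, 4, 6, 8], window=2) == [3.0, 5.0, 7.0]

# ...while a throwaway script simulates noisy real-world input,
# so I can see whether smoothing actually tightens the spread.
if __name__ == "__main__":
    import random
    random.seed(42)  # reproducible "sensor noise"
    noisy = [20 + random.uniform(-1, 1) for _ in range(60)]
    smoothed = rolling_average(noisy, window=5)
    print(f"raw spread:      {max(noisy) - min(noisy):.2f}")
    print(f"smoothed spread: {max(smoothed) - min(smoothed):.2f}")
```

The split matters: the assertion proves the math is right once, while the simulation shows how the logic behaves on messy input before it ever reaches a site.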
Maintenance was easier, too. Following software engineering principles like DRY (Don’t Repeat Yourself), single responsibility, and DIP (Dependency Inversion Principle) kept the code clean while reducing complexity. Soon enough, sites no longer ran diverging versions; instead, the codebase held shared core logic plus per-site customizations, so every site could run the same deployed version.
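The "shared core plus per-site customization" idea can be sketched as a simple configuration-merge pattern. The site names, fields, and defaults below are hypothetical, chosen only to illustrate the shape:

```python
# Hypothetical sketch: one deployable codebase where shared defaults
# are written once (DRY) and each site carries only its deltas.

from dataclasses import dataclass, field

@dataclass
class SiteConfig:
    """A site's identity plus any overrides of the shared defaults."""
    name: str
    overrides: dict = field(default_factory=dict)

# Shared defaults live in exactly one place.
DEFAULTS = {
    "sensor_poll_seconds": 30,
    "smoothing_window": 5,
}

def effective_config(site: SiteConfig) -> dict:
    """Merge shared defaults with site-specific overrides;
    the site's deltas win, everything else stays uniform."""
    merged = dict(DEFAULTS)
    merged.update(site.overrides)
    return merged

# Two sites, same code, same deployed version -- only config differs.
plant_a = SiteConfig(name="plant-a")
plant_b = SiteConfig(name="plant-b", overrides={"smoothing_window": 10})

print(effective_config(plant_a))  # pure defaults
print(effective_config(plant_b))  # default poll rate, custom smoothing
```

With this shape, upgrading every site means shipping one new version; the per-site behavior lives in data, not in forked code.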
SaaS Project: Low Code
Because I was just on support, I was investing only 20% of my time in this project to complete assigned tickets. Sure, I didn’t know PHP, the domain knowledge took some cognitive effort, and I didn’t know the framework well enough. But I figured that documentation and Cursor were all I needed to onboard quickly onto a SaaS product. Plus, the product used React, something I’m familiar with, which made me more confident that I could write better prompts.
What happened?
I put in my own effort to understand the domain terminology and user workflows, and wherever I lacked understanding, I relied on the client to gain clarity. Once I had a clear idea of what a ticket needed to accomplish, I could prompt Cursor to build the desired behavior. For the PHP I was shaky on, I made sure to review, test, and challenge the generated code. So far, this has worked.
Reflection
My experiences on the two projects were staggeringly different. My approach to the IoT project built my confidence that I understood the system: if the on-site team needed help, I knew I could jump in immediately. I did not feel the same confidence on the SaaS project. There, I understood only isolated parts of the system, which left me unsure about how changes in one area might affect others. Only later did I realize I was nervous because I could be missing vital details in parts of the system I had not yet looked into.
Don’t get me wrong: low code did work. But the confidence gap shows me that if I want to stick with this approach, I should take the time to build a thorough, high-level understanding of a project’s system. If I am shrinking the time I spend learning language syntax, then I have to reallocate that time to the high-level picture. For some, this might be obvious, but I underestimated what “support” work meant.
As for which approach is better? I view them all as abilities under my belt that I can use at my discretion, and I hope you do too.