How To Leverage AI in Professional Software Development
Lessons learned from a quality-focused senior software engineer trying to get the most out of AI
Noah Kurz
March 7, 2026
5 min read
The debate is over. AI is here to stay. Like a lot of engineers, though, I found that worrisome: partly because I LOVE coding, and partly because I did not like how much context I, the engineer, lost when relying on AI too heavily. I saw the benefits: less mental fatigue, quicker answers when troubleshooting niche issues, and the ability to experiment with more product ideas quickly. But the drawbacks were just as real. When a production bug came up, I felt like I was going in blind. Side projects got more expensive due to token usage, and keeping up with the pace of change in our industry started to feel like a full-time job in itself.
What if I told you there was a way to capture the productivity gains AI provides while remaining in full control of your project? When I think about what gives me the most context on an application, it is engineering the data flow, naming the methods, variables, and types, and shaping how the logic is implemented. When I think about what takes the most time and mental energy, it is implementing that logic. Putting those two observations together, I have found a methodology that gives me most of the benefits of AI without the drawbacks that bothered me most. It's pretty simple:
I keep ownership of the shape of the code while letting AI handle the heavy lifting of implementation. In practice, that means I stub out my types, function names, return values, and the rough data flow first. Then, inside each function body, I write in plain English exactly what I want to happen.
This is the sweet spot for me. I am still the one deciding what the important abstractions are, what things are called, what data moves where, and what the public API looks like. AI is not inventing the architecture for me. It is filling in the blanks on a blueprint I already designed.
That matters more than people think. When I come back later to debug an issue, I still recognize the codebase. The names make sense to me. The flow makes sense to me. The implementation may have been accelerated by AI, but the intent is still mine. As a bonus, the context window stays smaller because I am not asking the model to reason about an entire feature from scratch. I am giving it a constrained problem with clear rails, which usually means better output and lower token usage.
Here is a simple example using a FizzBuzz-style problem:
type FizzBuzzRule = {
  divisor: number
  label: string
}

type FizzBuzzResult = {
  value: number
  output: string
}

function getFizzBuzzOutput(value: number, rules: FizzBuzzRule[]): string {
  // Check every rule and collect the labels for divisors that evenly divide the value.
  // If at least one rule matches, return the combined label string.
  // If no rules match, return the number as a string.
}
function buildFizzBuzzSequence(
  start: number,
  end: number,
  rules: FizzBuzzRule[],
): FizzBuzzResult[] {
  // Create an empty list of results.
  // Loop from start to end inclusive.
  // For each number, call getFizzBuzzOutput and store the value/output pair.
  // Return the finished list.
}

Notice what is happening here. I already decided the data structures. I already decided there should be a helper function and a coordinating function. I already decided the naming. I even described the behavior in English. At this point, AI does not need to be "creative." It just needs to execute.
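To make the blueprint concrete, here is the rule set I would pass in for the classic game (the exact values are my choice; the stub above does not pin them down). Note that 15 needs no rule of its own, because matching labels get concatenated:

```typescript
type FizzBuzzRule = {
  divisor: number
  label: string
}

// Classic FizzBuzz configuration: 3 -> "Fizz", 5 -> "Buzz".
// A value divisible by both collects both labels, producing
// "FizzBuzz" without an explicit rule for 15.
const classicRules: FizzBuzzRule[] = [
  { divisor: 3, label: 'Fizz' },
  { divisor: 5, label: 'Buzz' },
]
```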
Inside Cursor, I like to use a command pattern I call Blueprint Mode. The goal of the command is simple: implement the code, respect the shape, and do nothing extra.
An example prompt would be:
Blueprint Mode:
Implement only the function bodies from the English instructions in the selected code.
Preserve all existing types, function names, signatures, and return values.
Do not rename anything.
Do not refactor anything.
Do not add new abstractions unless they are absolutely required to complete the described logic.

That last part is important. If you do not constrain the AI, it will often try to "help" by redesigning things that were never broken in the first place. Sometimes that is useful. A lot of times it is not. I want implementation, not a surprise architecture review in the middle of my flow.
If I run that command on the stub above, I should get something like this:
type FizzBuzzRule = {
  divisor: number
  label: string
}

type FizzBuzzResult = {
  value: number
  output: string
}

function getFizzBuzzOutput(value: number, rules: FizzBuzzRule[]): string {
  const matchingLabels = rules
    .filter((rule) => value % rule.divisor === 0)
    .map((rule) => rule.label)

  if (matchingLabels.length > 0) {
    return matchingLabels.join('')
  }

  return value.toString()
}

function buildFizzBuzzSequence(
  start: number,
  end: number,
  rules: FizzBuzzRule[],
): FizzBuzzResult[] {
  const results: FizzBuzzResult[] = []

  for (let value = start; value <= end; value += 1) {
    results.push({
      value,
      output: getFizzBuzzOutput(value, rules),
    })
  }

  return results
}

Is this revolutionary? No. That is exactly why I like it. It is boring in the best way possible. The AI did not need to guess what kind of solution I wanted. It just had to translate clear intent into working code.
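As a quick sanity check on the generated code, I can run the sequence over 1 to 15 with the classic rules. This sketch repeats the types and functions so it runs on its own; the rule values are my own choice:

```typescript
type FizzBuzzRule = { divisor: number; label: string }
type FizzBuzzResult = { value: number; output: string }

function getFizzBuzzOutput(value: number, rules: FizzBuzzRule[]): string {
  const labels = rules
    .filter((rule) => value % rule.divisor === 0)
    .map((rule) => rule.label)
  return labels.length > 0 ? labels.join('') : value.toString()
}

function buildFizzBuzzSequence(
  start: number,
  end: number,
  rules: FizzBuzzRule[],
): FizzBuzzResult[] {
  const results: FizzBuzzResult[] = []
  for (let value = start; value <= end; value += 1) {
    results.push({ value, output: getFizzBuzzOutput(value, rules) })
  }
  return results
}

// Classic configuration: 3 -> "Fizz", 5 -> "Buzz".
const classicRules: FizzBuzzRule[] = [
  { divisor: 3, label: 'Fizz' },
  { divisor: 5, label: 'Buzz' },
]

const sequence = buildFizzBuzzSequence(1, 15, classicRules)
console.log(sequence.map((result) => result.output).join(', '))
// 1, 2, Fizz, 4, Buzz, Fizz, 7, 8, Fizz, Buzz, 11, Fizz, 13, 14, FizzBuzz
```

If that output looks right, the blueprint held: the AI filled in the bodies, and the behavior matches the English instructions I wrote.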
The benefits of this workflow have been pretty substantial for me. I move faster because I spend less time grinding through repetitive implementation. I keep context because I am still the one shaping the system. I spend less on tokens because the prompt is smaller and more focused. And when bugs happen, I do not feel like I am debugging somebody else's codebase. I am debugging my codebase, just one that got built faster.
That said, there are some drawbacks. You still need enough experience to create a good blueprint. If your types are vague and your English instructions are sloppy, the output will be sloppy too. There is also an upfront cost, because you are spending time defining the structure before asking for the implementation. And of course, AI can still hallucinate or make weird decisions inside the boundaries you gave it, so this is not a replacement for code review or engineering judgment.
Even with those tradeoffs, this is still my favorite way to work with AI right now. It lets me keep the parts of software development I enjoy most, the architecture, the naming, the intent, the ownership, while offloading the parts that drain the most energy. For me, that is the best balance I have found so far.
If you are thinking through a new architecture and want multiple AI models to pressure test your ideas before you start building, check out AI Brainstorm. It lets multiple LLMs review your plans, challenge each other, and give you more rounded feedback in a token-efficient way.