There's a growing gap between how AI is discussed in software development and how experienced engineering teams actually use it in production.
Most conversations focus on prompts and which tool is "best." Professional teams focus on validation, constraints, and control.
Last week, our lead engineer Aram Hammoudeh walked the team through his workflow for managing four concurrent client projects using AI-assisted development. What emerged wasn't a story about automation replacing engineers—it was a masterclass in using AI to amplify engineering discipline.
Here's what we learned about how serious teams are actually building software with AI.
The highest-leverage AI work happens before any code is written.
Aram spends hours in planning mode before touching an IDE: "I'll use Claude Projects and really just kind of start brainstorming on whatever I might be working on... After I do all my conversation and planning around it, grabbing these artifacts from the end is normally what I take into the actual ticket."
Each project gets the same treatment: a planning conversation first, and a set of artifacts that travel with the ticket into development.
As Aram puts it: "The better tickets you are presenting to start this, and the better planning you do before you even touch the code, it really does show down the line."
This aligns with what we see across the industry: better planning reduces rework and accounts for the majority of productivity gains. AI doesn't replace this discipline—it makes it faster and more thorough.
Professional engineers don't hand features to AI and walk away.
Aram's typical setup: "My IDE on the left, Claude Code terminal on the right. Every change gets reviewed in real time. I stop execution when something looks wrong."
This mirrors how experienced developers have always worked—one person drives, another reviews, both remain accountable. AI has simply become the pair that's always available.
Different AI tools excel at different tasks. Understanding which tool to use when separates amateur adoption from professional implementation.
For asynchronous, well-defined tasks: Cursor's web agents with browser MCP integration. "I can trust that it is actually testing the UI of a feature before it comes back into a pull request," Aram explains. These agents iterate until tests pass, handling UI refinements and mechanical refactors overnight.
For daily pair programming: Claude Code via terminal. "I would say Claude Code is probably about 30% of mine... I really do use it the same way that I used GitHub Copilot five years ago."
For planning and research: Claude Projects in the desktop app, where Research mode can visit thousands of pages to gather context before development begins.
For overflow capacity: Google's Antigravity provides additional Claude Opus 4.5 access when other tools hit rate limits.
The sophistication isn't in knowing every tool—it's in knowing which tool solves which problem most cost-effectively.
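To make the idea concrete, here's a toy sketch of that routing logic in Python. The task categories and the `route` helper are purely illustrative; only the tool names mirror the list above.

```python
# Illustrative only: routing tasks to tools by shape, not by habit.
# Nothing here is a real API; the tool names mirror the list above.

TOOL_BY_TASK = {
    "async_well_defined": "Cursor web agents + browser MCP",  # UI refinements, mechanical refactors
    "pair_programming":   "Claude Code (terminal)",           # day-to-day driving
    "planning_research":  "Claude Projects (desktop)",        # pre-code context gathering
    "overflow":           "Google Antigravity",               # when other tools hit rate limits
}

def route(task_type: str) -> str:
    """Pick the tool that fits the task; fall back to pairing."""
    return TOOL_BY_TASK.get(task_type, TOOL_BY_TASK["pair_programming"])

print(route("async_well_defined"))  # -> Cursor web agents + browser MCP
```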
Here's what most AI content misses: prompts guide, but validation enforces reality.
Every AI-assisted workflow at Vertice Labs is gated by automated validation: linters and test suites that must pass before a change moves forward.
"If something fails, execution stops. The agent iterates. No exceptions," Aram emphasizes.
This became critical on a recent React Native project with a shared UI system. AI would generate code that worked on web but failed on iOS because it couldn't see the simulator. Stricter linting rules solved 90% of the problems.
Without this validation layer, AI doesn't accelerate development—it accelerates entropy.
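A minimal sketch of what a gate like this can look like, assuming a JavaScript project with ESLint, TypeScript, and Jest; the commands and the `ask_agent_to_fix` hook are placeholders for whatever your stack and agent integration actually use. The shape is what matters: checks run, failures stop execution, and the agent iterates until everything passes.

```python
# Hypothetical validation gate: every AI-generated change must pass these
# checks before it moves forward. Commands are placeholders for your stack.
import subprocess

CHECKS = [
    ["npx", "eslint", ".", "--max-warnings", "0"],  # strict linting caught most cross-platform bugs
    ["npx", "tsc", "--noEmit"],                     # type checking
    ["npm", "test", "--", "--watchAll=false"],      # test suite
]

def run_gate() -> tuple[bool, str]:
    """Run every check; return (passed, output of the first failure)."""
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr
    return True, ""

def gated_loop(ask_agent_to_fix, max_iterations: int = 5) -> bool:
    """If something fails, execution stops and the agent iterates."""
    for _ in range(max_iterations):
        passed, failure_output = run_gate()
        if passed:
            return True
        ask_agent_to_fix(failure_output)  # hypothetical: feed the failure back to the agent
    return False  # still failing after N rounds: a human takes over
```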
AI development isn't free, and professional teams manage it as a first-class constraint.
Aram hits the $500 token limit on Cursor's $200/month plan every month. A single eight-hour development session can burn through $150 in tokens.
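Run the arithmetic on those figures and the constraint is obvious:

```python
# Back-of-envelope burn rate, using the figures above.
monthly_cap = 500    # $ of included usage on the $200/month plan
session_cost = 150   # $ burned in one heavy eight-hour session

print(f"{monthly_cap / session_cost:.1f} heavy sessions before hitting the cap")  # ~3.3
print(f"${session_cost / 8:.2f} per hour of agent-heavy development")             # ~$18.75
```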
This creates forcing functions: "I try to use this pretty exclusively for these niche tickets rather than my catch-all... I wouldn't just use it for research and development iteration."
Professional teams actively manage token budgets, rate limits, and which tasks are worth premium model time.
Our team rotates tools strategically. Automated wake-up scripts start sessions at 5am, before anyone arrives, so usage windows reset during less critical hours. "At 10am, I get a full new breadth across all of my tools," Aram notes.
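A sketch of what such a wake-up script might look like, run from cron. The roughly five-hour usage window is an assumption about how these tools meter usage, but it explains the math: a 5am start means fresh capacity around 10am, exactly the effect Aram describes. The `claude -p` invocation uses Claude Code's non-interactive print mode; swap in whatever your own tools expose.

```python
# Hypothetical wake-up script, run from cron at 5am so usage windows reset
# mid-morning instead of mid-afternoon. Example crontab entry (assumption):
#   0 5 * * 1-5  /usr/bin/python3 /opt/scripts/wake_tools.py
import subprocess

# Trivial prompts: the goal is to *start* each tool's usage window, not do work.
WAKE_COMMANDS = [
    ["claude", "-p", "ping"],  # Claude Code non-interactive mode
]

def wake_all() -> None:
    for cmd in WAKE_COMMANDS:
        try:
            subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        except (subprocess.TimeoutExpired, FileNotFoundError):
            pass  # a failed ping just means that tool's window starts later

if __name__ == "__main__":
    wake_all()
```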
This isn't optimization theater—it's the difference between shipping and being blocked by rate limits.
Despite the hype, long-running autonomous agents still fail in predictable ways.
Tools like Zenflow promise to automate sequential tasks over days. When they work, the results are impressive. When they break, you've burned days of compute and are left reviewing a 137-file pull request full of compounding errors.
"When it works well, it can handle a large project, run for a couple of days, and turn out a giant feature that's 95% of the way done," Aram explains. "When it breaks, you've had it running for two days straight... and suddenly you're re-reviewing a bunch of slop."
Professional teams plan around these failures rather than pretending they don't exist. They limit scope, expect AI to be wrong sometimes, and never skip human review.
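One cheap way to plan around those failures is a scope guard: halt a long-running session before its diff balloons past what a human can review. A minimal sketch, assuming the agent works in a git branch; the 40-file threshold is arbitrary, so tune it to what your team can actually read.

```python
# Hypothetical scope guard: abort an autonomous run once the working diff
# grows past what a human can meaningfully review.
import subprocess
import sys

MAX_CHANGED_FILES = 40  # arbitrary; well short of a 137-file pull request

def changed_files(base: str = "main") -> int:
    """Count files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return len([line for line in out.stdout.splitlines() if line.strip()])

if __name__ == "__main__":
    n = changed_files()
    if n > MAX_CHANGED_FILES:
        print(f"{n} files changed; stopping for human review.", file=sys.stderr)
        sys.exit(1)  # a nonzero exit can be wired to halt the agent loop
```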
AI doesn't make weak engineering teams strong. It makes strong teams faster.
Our lead engineer manages four concurrent client projects not because AI writes all the code, but because AI handles execution while he focuses on architecture, tradeoffs, and standards.
The discipline that makes this work is constant human judgment: reviewing every change, enforcing standards, and redirecting the agent when it drifts.
As one of our engineers put it during the session: "I spend a lot of my time like, this is not how we do it. Do it this way."
That's the real work. AI removes the drudgery of typing. It doesn't remove the need for judgment, architecture decisions, or taking responsibility when things break.
When evaluating software development firms, ask how they're using AI.
If the answer focuses on prompts and automation, be skeptical. If it focuses on validation, process, and human oversight, pay attention.
The teams that succeed with AI aren't the ones chasing every new tool. They're the ones with strong fundamentals who use AI to compound existing discipline.
At Vertice Labs, we're building software the way it's actually being built by experienced teams: with AI as an accelerator for engineering rigor, not a replacement for it.
The future of software development isn't prompt engineering. It's discipline, enforced at machine speed.
---
Based on an internal training session with Aram Hammoudeh, Staff Engineer at Vertice Labs. Vertice Labs is a software development firm specializing in AI-native full-stack engineering for mid-market companies.