Using AI in the IDE is not the same as building with AI.
Why Adding AI Tools Isn’t Enough
One Step at a Time
Requirements are written, then handed to designers, then to developers, then to testers, then to ops. Each handoff loses context and adds delay.
Handoff overhead accounts for an estimated 20–40% of total project time. Explore the evidence →
What changes:
AI agents carry context across every stage. What the requirements agent produces flows directly into architecture and code - no translation, no information loss.
Knowledge That Walks Out the Door
When someone leaves, their understanding of the system leaves with them. Getting a new team member up to speed takes months.
Institutional knowledge is fragile and expensive to rebuild. Explore the evidence →
What changes:
Knowledge lives in the system - documented architecture, configured agents, living specifications. A new team member inherits the full context in days, not months.
Disconnected Everything
Your project management, your codebase, your documentation, your compliance records - separate systems with manual bridges between them.
Engineers spend a significant portion of their time finding and assembling information rather than producing it. Explore the evidence →
What changes:
An interconnected system where information flows automatically. A change in requirements updates the architecture, the tests, and the documentation - without someone chasing each update manually.
One Brain. Every Discipline.
What the Brain Contains
Specifications
Every requirement, user story, and acceptance criterion produced by the system. Living documents that agents read and update, not PDFs gathering dust.
Architecture
Documented decisions, component designs, API contracts, data models. When an agent proposes a change, it checks against what already exists.
Rules
Coding standards, naming conventions, security policies, quality gates. The non-negotiable constraints that every agent respects.
Commands
Structured instructions that tell agents how to perform specific tasks. Not vague prompts - precise, tested, versioned operating procedures.
Templates
Standard patterns for requirements, architecture decisions, test plans, documentation, deployment configs. Consistency by default.
Code
The actual codebase, with its history, its patterns, and its conventions. Agents don’t generate code in a vacuum - they generate code that fits.
Operations
Monitoring configurations, runbook procedures, incident response playbooks, performance baselines. What the system learned from running in production.
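To make "precise, tested, versioned operating procedures" concrete, here is a minimal sketch of how a single command in the Brain might be represented. The schema, the `generate-release-notes` command, and its fields are all hypothetical illustrations, not the actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    """Hypothetical sketch of a structured agent instruction: precise, tested, versioned."""
    name: str
    version: str
    inputs: list[str]        # artefacts the agent must read before acting
    steps: list[str]         # the operating procedure itself
    checks: list[str] = field(default_factory=list)  # gates the output must pass

# An illustrative command - the name and steps are invented for this example.
generate_release_notes = Command(
    name="generate-release-notes",
    version="1.4.0",
    inputs=["specifications/", "git history since last release tag"],
    steps=[
        "Summarise merged changes against their linked requirements",
        "Group entries by user-facing impact, then by component",
        "Flag any change with no linked requirement for human review",
    ],
    checks=["every entry cites a requirement ID", "approved by release owner"],
)
```

Because commands are data rather than ad-hoc prompts, they can be reviewed, diffed, and versioned alongside the code they govern.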
One System. Every Stage.
How It Powers Every Role
The Brain on its own is a knowledge base. It becomes a methodology when every engineering discipline draws from it - each one augmented by a specialised agent configuration that handles routine work, maintains context, and connects to every other role through the shared core.
From Prompt to Product
From a business need to working, tested, documented, monitored software - here’s what the process actually looks like.
Prompt to Product is our name for the journey from a business need to production-ready software. Our AI Builder and AI Engineering programmes teach your team to run this process independently.
Explore AI Trainings
This is not a rigid assembly line. Real engineering is iterative - you learn things during design that change the requirements, and things during testing that change the design. The difference in an AI-native approach is that these iterations happen in hours rather than days, because every agent already has the full context of what came before.
The starting point: A product owner submits a business need - “Customers need to download their invoices without contacting support.”
Understand the need
Product Discovery Agent
The agent analyses existing customer feedback, support tickets, and usage data related to invoicing. Produces a brief with user needs, edge cases, and validation criteria.
Human
Reviews the brief. Adds business constraints (“invoices must include the new VAT format”). Confirms priority.
Specify what to build
Requirements Engineering Agent
The agent generates structured requirements - user stories, acceptance criteria, data model changes, API contracts. Flags dependencies on existing systems.
Human
Reviews for accuracy. Adds context from stakeholder conversations. Approves the specification.
Design the solution
Architecture & Design Agent
Proposes component design, API structure, database changes. Checks against the existing architecture for consistency.
Human
Reviews trade-offs. Approves the approach. May send back to Step 2 if design reveals new requirements.
↻ Requirements and architecture often refine each other through 2–3 rapid cycles before implementation begins.
Build it
Development Agent
Produces working code following the architecture, coding standards, and patterns established for this project. Generates unit tests alongside the code.
Human
Reviews output. Handles edge cases. Refactors where needed.
Verify it
Testing & Quality + Documentation & Compliance Agents
Testing agent generates test cases from the original requirements and runs the automated suite. Documentation & Compliance agent runs compliance checks against applicable regulations and begins generating audit trail documentation. Both work simultaneously.
Human
Reviews results. Makes risk decisions on any flagged items. May send back to Step 4 if issues found.
↻ Build-test cycles repeat until quality gates pass.
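A quality gate like the one above can be expressed as a simple predicate the loop re-evaluates on every cycle. The thresholds below are illustrative assumptions, not the methodology's actual gate criteria:

```python
def quality_gate(coverage: float, failing_tests: int, open_criticals: int,
                 min_coverage: float = 0.85) -> bool:
    """Return True only when every configured gate passes.

    Thresholds are placeholders - in practice they come from the
    project's configured Rules, not from code defaults.
    """
    return (coverage >= min_coverage
            and failing_tests == 0
            and open_criticals == 0)

# The build-test loop repeats until the gate passes or a human intervenes.
assert quality_gate(coverage=0.91, failing_tests=0, open_criticals=0)
assert not quality_gate(coverage=0.91, failing_tests=2, open_criticals=0)
```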
Document and ship
Documentation & Compliance Agent
Technical docs, release notes, and deployment guide produced from the codebase and specifications - not written from memory after the fact. Compliance documentation finalised.
Human
Reviews documentation. Approves deployment.
Operate and learn
Operations & Maintenance Agent
Monitoring active. Performance baselines established. The Brain is updated - the invoice feature’s architecture, code patterns, test results, and deployment configuration are now available to every future sprint.
Human
Reviews operational metrics. Evaluates whether the feature needs refinement based on real usage data. Feeds learnings back into product decisions.
↻ Production feedback loops into the next iteration’s requirements and priorities.
A working feature in production - tested, documented, compliant, and monitored. The Brain now contains everything this sprint produced. The next feature that touches invoicing inherits all of this context automatically. The system compounds.
What Gets Delivered
AI-native development doesn’t just produce code faster. It produces complete, governed output. But what “complete” means depends entirely on what you’re building and who it’s for.
Plan
Requirements, user stories, acceptance criteria, dependency maps, prioritisation rationale
Design
Architecture decisions, component design, API contracts, data models
Build
Production code following established patterns, standards, and constraints
Test
Unit tests, integration tests, E2E tests, coverage reports, regression checks
Release
Deployment configuration, release notes, migration scripts, rollback procedures
Operate
Monitoring setup, alerting rules, runbooks, incident response procedures
Secure
Security review, vulnerability checks, access control verification, compliance audit trail
Document
Technical documentation, API references, user guides, architecture decision records
Not all categories are produced fresh every sprint. Plan, Design, Build, Test, and Release are typical sprint outputs. Operate and Secure produce foundational deliverables during initial setup that are then maintained and updated in subsequent iterations rather than regenerated each time. Document is produced alongside the work, not written after the fact.
Configured to Your Reality
Not every project needs all eight categories at full depth. The framework defines what’s available - the configuration determines what’s required.
Startup
A startup building a task management app needs working software, solid test coverage, and enough documentation to onboard the next developer. The agent ecosystem runs lean - fewer gates, lighter documentation, faster cycles. The goal is validated software, shipped quickly.
SaaS
A SaaS company serving enterprise clients needs all of the above plus rigorous security review, documented deployment procedures, and thorough API documentation. Their customers expect it, and their contracts require it.
Regulated
A medical device company building patient-facing software under MDR and IEC 62304 needs full traceability from every requirement to every test case, validated processes, and audit-ready documentation at every stage. The Compliance agent checks against specific regulatory standards, not generic checklists. Every deliverable in the framework is mandatory and deeply detailed.
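Full traceability of this kind is mechanically checkable. The sketch below, with invented requirement and test-case IDs, shows the shape of such a check - a real implementation would read from the Brain's specifications and test records rather than hard-coded dictionaries:

```python
# Hypothetical data: requirement IDs and the test cases that reference them.
requirements = {
    "REQ-101": "Customer can download an invoice as PDF",
    "REQ-102": "Invoice includes the new VAT format",
}
test_links = {
    "TC-501": ["REQ-101"],
    "TC-502": ["REQ-101", "REQ-102"],
}

def untraced(reqs: dict[str, str], links: dict[str, list[str]]) -> set[str]:
    """Requirement IDs with no covering test case - audit blockers in a regulated context."""
    covered = {req_id for linked in links.values() for req_id in linked}
    return set(reqs) - covered

# An empty result means every requirement is traced to at least one test.
assert untraced(requirements, test_links) == set()
```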
Same methodology. Different configuration. Before any engagement starts, we assess your product, your industry, your regulatory environment, and your team’s maturity to configure the agent ecosystem appropriately.
Three Ways to Build Software With AI
Not all AI-assisted development is the same. Understanding which approach fits your situation is the first question that matters.
Who uses it
- Vibe Coding: Non-engineers, founders, makers
- AI-Assisted: Engineers with AI coding tools
- AI-Native (Agentic SDLC): Engineers + configured agent ecosystem
What it does
- Vibe Coding: Generates working software from natural language descriptions
- AI-Assisted: Accelerates individual coding tasks - autocomplete, suggestions, refactoring
- AI-Native (Agentic SDLC): Augments every role in the engineering process - requirements through operations
Engineering practices
- Vibe Coding: Not embedded. The AI generates code; nobody validates architecture, security, scalability, or maintainability unless the user already knows how
- AI-Assisted: Provided by the human. AI speeds up execution, but the engineer must know what good looks like and enforce it
- AI-Native (Agentic SDLC): Embedded in the agents. Architecture, testing, compliance, and documentation practices are configured into the system itself
Quality
- Vibe Coding: Depends entirely on the user's ability to spot problems they may not know exist
- AI-Assisted: Depends on the engineer's own discipline and experience
- AI-Native (Agentic SDLC): Consistent. Quality gates are part of the process, not dependent on individual discipline
Best for
- Vibe Coding: Validation, proofs of concept, internal tools, learning
- AI-Assisted: Established teams that want to move faster on known patterns
- AI-Native (Agentic SDLC): Teams building production software that needs to be reliable, secure, and maintainable at scale
What breaks at scale
- Vibe Coding: Accumulates invisible technical debt. No architecture decisions means no foundation to build on
- AI-Assisted: Knowledge stays locked in individuals. Team growth doesn't transfer capability
- AI-Native (Agentic SDLC): Requires upfront configuration investment. Only pays off for sustained development, not one-off projects
How We Deliver It
We don’t install this and leave. The methodology only works when your team can run it independently. That’s why every engagement follows Build-Operate-Transfer - a model we’ve used across three international programmes before applying it to AI-native development.
Roberto Fognini led Build-Operate-Transfer programmes across three countries at ERNI - private banking in Ticino, fintech in Romania, and a MedTech delivery site in Barcelona built during the pandemic. Each followed the same pattern: build the capability, operate it to full capacity, transfer ownership to the client. The AI-native version applies the same proven approach to engineering methodology rather than team establishment.
About Roberto →
BUILD
Typical: 2–8 weeks
We assess your current engineering practices, document your architecture, establish the Brain, and configure agent roles for your technology stack and regulatory environment. First features are delivered through the new process to prove it works.
Your Team’s Role
Observers becoming participants. They see the agents work, start reviewing outputs, learn the patterns.
Milestone
First feature delivered through the AI-native process.
OPERATE
Typical: 1–12 months
We run the methodology alongside your team. For a startup, this might mean operating together from month one. For a larger organisation, this phase is longer because change management, team training, and process integration take time. We measure, optimise, and continuously refine the agent configurations based on what we learn.
Your Team’s Role
Practitioners. They run sprints, work with agents, make decisions. We coach and handle exceptions.
Milestone
Your team runs a full sprint independently.
For larger teams, TEACH training typically runs in parallel with the Operate phase - the team learns the foundations of AI-native engineering while applying them to real work.
TRANSFER
Typical: Weeks to months
Gradual handover. We validate that your team can operate independently - not just technically, but in terms of knowing when to override agents, when to update configurations, and when to call for help. Full documentation, trained agents, and established processes transfer to your ownership.
Your Team’s Role
Owners. They run everything. We’re available for questions but not needed for daily operations.
Milestone
You don’t need us any more. That’s the goal.
These phases aren’t rigid boxes. In practice, they overlap - the team starts participating during Build, and Transfer begins as soon as someone on the team can run a sprint end-to-end. The timelines above are typical ranges. A startup with a small, motivated team can move through the entire cycle in a few months. A large organisation with existing processes to transform takes longer.
Honest Limitations
Every methodology has boundaries. Here are ours.
It requires upfront investment.
Configuring the Brain for your stack, your standards, and your regulatory environment takes time. For a simple project, this might be days. For a complex regulated environment, weeks. This is not a tool you download and start using tomorrow.
It amplifies good engineering - it doesn’t replace it.
AI agents handle routine work brilliantly. They do not make strategic product decisions, navigate organisational politics, or understand your customers’ unspoken needs. Human expertise still matters. The agents make that expertise go further.
It needs a foundation to build on.
If your current processes live in spreadsheets and email threads, there’s nothing for agents to connect to. Some baseline of documented architecture, version control, and structured processes is needed before the methodology can take hold. We assess this during the Build phase and address gaps as part of setup.
It doesn’t pay off for throwaway work.
If you’re building a quick prototype to test an idea, vibe coding tools are faster and cheaper. This methodology is designed for software that needs to be reliable, secure, and maintainable over time. If you’re not planning to maintain it, don’t over-engineer it.
The transfer only works if your team engages.
The BOT model requires your team to actively participate during the Operate phase. If they don’t engage - if they treat it as outsourcing rather than capability building - the transfer will fail. We can build and operate, but we can’t transfer to a team that isn’t ready to receive.
Find Out Where You Stand
The right starting point depends on your current engineering maturity and what you’re trying to achieve.
Assess Your Engineering Practices
Our SDLC Maturity Assessment maps your current practices across 6 dimensions and shows you where AI-native methods will have the biggest impact. Free. No email required to start.
Learn to Build This Way
Our AI trainings include dedicated programmes for developers and non-technical builders, focused on applying AI-native product engineering in real delivery work.