Fognini Tech

We think it, build it, run it, and teach it, so you own it. Swiss AI Consulting for SMEs and Engineering Teams.

© 2026 Fognini Tech. All rights reserved.
Free Assessment - Engineering Maturity

AI-native development requires a foundation most teams have not built yet.

Without knowing where your engineering practices stand, adopting AI in delivery creates new failure modes on top of existing ones. Fifteen questions. Under five minutes. Your results show what needs to be in place before the journey begins.

Our Approach to AI-Native Development

Under 5 Minutes

15 questions across requirements, architecture, code quality, testing, and AI readiness

Personalised Results

Maturity score with dimension breakdown and specific recommendations for your weakest areas

Free PDF Report

Your scores, gaps, and recommended next steps to download and share with your team
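The scoring model described above (15 questions across five weighted dimensions, an overall maturity score, and recommendations targeting the weakest areas) can be sketched roughly as follows. The dimension names come from this page; the weights, the 1-5 answer scale, and the aggregation are illustrative assumptions, not the assessment's actual methodology:

```python
# Illustrative sketch of a dimension-weighted maturity score.
# Dimension names come from the assessment description; the weights,
# the 1-5 answer scale, and the aggregation are assumptions.

DIMENSIONS = {
    "requirements": 0.15,
    "architecture": 0.20,
    "code_quality": 0.20,
    "testing": 0.25,       # heavier weight, reflecting the DORA emphasis
    "ai_readiness": 0.20,
}

def maturity_score(answers: dict[str, list[int]]) -> tuple[float, list[str]]:
    """Return an overall 0-100 score and the two weakest dimensions.

    `answers` maps each dimension to its per-question scores (1-5).
    """
    # Normalise each dimension to 0-100 from its raw 1-5 answers.
    per_dim = {
        dim: sum(scores) / (len(scores) * 5) * 100
        for dim, scores in answers.items()
    }
    # Weighted aggregate across dimensions.
    overall = sum(per_dim[d] * w for d, w in DIMENSIONS.items())
    # The weakest dimensions drive the recommendations.
    weakest = sorted(per_dim, key=per_dim.get)[:2]
    return round(overall, 1), weakest
```

A real implementation would also map the overall score to a maturity level and attach per-dimension recommendations; this sketch only shows the weighted aggregation.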

This assessment is part of the How We Build track - understanding how AI-native engineering applies to your team.

Methodology informed by

DORA · Gartner · OWASP · CMMI · EU AI Act · BCG

Assessment Framework Design

These sources shaped the assessment’s structure — the 5 dimensions, differential weighting, maturity levels, and scoring methodology.

  • Accelerate State of DevOps Report 2024 (Google/DORA) · cloud.google.com
    Four key metrics: stability, throughput, AI correlation findings. Primary reference for Testing & Deployment dimension weights and maturity thresholds. Found AI correlated with -1.5% throughput and -7.2% stability.
  • State of AI-Assisted Software Development 2025 (Google/DORA) · dora.dev
    AI as amplifier, foundations prerequisite, rework rate. Validated the "AI amplifies existing capability" thesis. Reversed the 2024 throughput finding (now positive), but stability remains negative. Informed the gating mechanism.
  • DORA 2025 Announcement (Google Cloud) · cloud.google.com
    Key findings summary and recommendations. Public summary of DORA 2025 findings used for efficiency-claims calibration.
  • AI-Powered SDLC Assessment Framework (Defra, UK Government) · github.com
    Architecture dimension, well-structured project scoring. Reference framework for the Architecture & Design dimension and government-grade SDLC assessment methodology.
  • AI SDLC Maturity Assessment (Grid Dynamics) · griddynamics.com
    Enterprise SDLC maturity assessment model. Commercial reference for enterprise-grade SDLC maturity measurement and AI readiness gating.
  • WAVE Framework for AI SDLC Transformation (Grid Dynamics) · griddynamics.com
    Phased transformation methodology. Informed the graduated maturity pathway from Foundation through Optimising.
  • CMMI v3.0 (ISACA/Carnegie Mellon) · cmmiinstitute.com
    Capability Maturity Model Integration. Established the maturity model pattern of staged levels with defined practices. Informed the 3-level structure.
  • OWASP SAMM v2 (OWASP) · owaspsamm.org
    Software Assurance Maturity Model. Security practices dimension within Testing & Deployment; progressive maturity measurement pattern.
  • OWASP DSOMM (OWASP) · dsomm.owasp.org
    DevSecOps Maturity Model. DevSecOps integration practices informing deployment security maturity.
  • AI Maturity Model for SE Teams (Gigacore) · github.com
    Open-source AI maturity model for software engineering. Community-validated AI readiness criteria for engineering teams.

Together, these sources represent the current evidence base for AI-native software development maturity assessment.
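The gating mechanism these sources informed (AI readiness cannot compensate for weak foundations, with a graduated pathway from Foundation through Optimising) might look something like this minimal sketch. The threshold values, the middle level's name, and the dimension keys are assumptions for illustration:

```python
# Illustrative sketch of the "foundations first" gating idea: AI-agent
# recommendations only lift the maturity level once the underlying
# engineering dimensions clear a minimum bar. Thresholds and the middle
# level name are assumptions, not the assessment's actual values.

FOUNDATION_DIMENSIONS = ("requirements", "architecture", "code_quality", "testing")
LEVELS = ("Foundation", "Scaling", "Optimising")  # middle level name assumed

def maturity_level(per_dim: dict[str, float]) -> str:
    """Map per-dimension scores (0-100) to a maturity level.

    AI readiness cannot raise the level while any foundation
    dimension is weak: AI amplifies existing capability.
    """
    foundation_min = min(per_dim[d] for d in FOUNDATION_DIMENSIONS)
    if foundation_min < 50:
        return LEVELS[0]  # gated: fix foundations before AI adoption
    if foundation_min < 75 or per_dim["ai_readiness"] < 75:
        return LEVELS[1]
    return LEVELS[2]
```

The key design point, taken from the DORA findings cited above, is that the AI readiness score only matters once every foundation dimension clears the bar.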

Research & Industry Benchmarks

These sources provide the evidence base for efficiency claims, pain-point validation, and agent recommendation thresholds.

  • METR RCT - AI + Experienced Developers (METR) · arxiv.org
    Randomised controlled trial of AI tools with experienced developers. Found experienced developers 19% slower with AI on complex codebases. Directly informs Foundation-level efficiency claim qualifiers.
  • AI Agents, Productivity, Higher-Order Thinking (Sarkar) · papers.ssrn.com
    39% increase in weekly merges for mature teams using AI agents. Strongest quantified evidence for Optimising-level efficiency claims. Informed agent recommendation thresholds.
  • Stack Overflow Developer Survey 2025 - AI (Stack Overflow) · survey.stackoverflow.co
    84% AI adoption rate, testing as the top AI use case. Validates the AI Agent Readiness dimension; testing identified as the highest-value AI application area.
  • Professional Developers Don't Vibe, They Control (multiple authors) · arxiv.org
    Study of professional developer AI tool usage patterns. Evidence that the strongest AI gains appear in well-defined, isolated tasks. Supports the graduated agent deployment model.
  • PwC AI Agent Survey (PwC) · pwc.com
    Enterprise AI agent adoption patterns and challenges. Enterprise context for AI readiness gating and organisational readiness requirements.
  • GitClear Code Quality Analysis (DevOps.com/GitClear) · devops.com
    Impact of AI code generation on code quality metrics. Technical debt correlation data informing Code Generation & Quality dimension scoring.
  • Impact of AI on Developer Productivity (Copilot) (Peng et al.) · arxiv.org
    First large-scale RCT of GitHub Copilot's productivity impact. Baseline evidence for AI-assisted coding productivity gains on well-defined tasks.
  • Measuring Copilot's Impact (Ziegler et al./ACM) · cacm.acm.org
    Longitudinal study of GitHub Copilot adoption and productivity. Extended evidence for sustained productivity gains with AI coding tools.
  • Driving Sustainable Cost Advantage with AI (BCG) · media-publications.bcg.com
    Enterprise AI cost advantage and implementation patterns. Business-case data for ROI claims at mature AI adoption levels.
  • Standish CHAOS 2020 (Standish Group) · standishgroup.com
    Project success/failure rates and contributing factors. Context only: requirements-quality correlation with project outcomes. Not used for SDLC scoring (paywalled, methodologically criticised).
  • DORA 2025 Analysis (IT Revolution) · itrevolution.com
    In-depth analysis of DORA 2025 findings. Detailed interpretation of the DORA 2025 AI amplification thesis.
  • DORA 2025 Key Takeaways (Faros AI) · faros.ai
    Practitioner summary of DORA 2025 key findings. Actionable summary used for calibrating maturity-level descriptions.

All sources last verified on 17 February 2026.