
Audit Framework

The audit framework uses a multi-agent orchestration pattern where a lead orchestrator delegates analysis to 8 specialist sub-agents. Each agent analyses the codebase through a specific lens, and the orchestrator compiles findings into a structured report.

LEAD_AUDIT_ORCHESTRATOR
├── AGENT_1: Source Code Analyser
├── AGENT_2: Documentation Validator
├── AGENT_3: Test Quality Inspector
├── AGENT_4: Dependency Auditor
├── AGENT_5: Security Scanner
├── AGENT_6: Architecture Reviewer
├── AGENT_7: Performance Analyst
└── AGENT_8: Code Hygiene Checker

Each agent has:

  • Defined scope — What it analyses
  • Exclusions — What it leaves to other agents (prevents overlap)
  • Output format — Structured findings with evidence
  • Evaluation criteria — Objective metrics, not subjective opinions
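The real orchestration lives in a prompt, not code, but the agent contract above can be sketched in a few lines. All names here (`AgentSpec`, `compile_report`) are illustrative, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """One specialist sub-agent: its scope and what it leaves to others."""
    name: str
    scope: str                                            # what it analyses
    exclusions: list[str] = field(default_factory=list)   # left to other agents

def compile_report(specs: list[AgentSpec], findings: dict[str, list[str]]) -> str:
    """Compile per-agent findings into one structured report, section per agent."""
    sections = []
    for spec in specs:
        body = "\n".join(f"- {f}" for f in findings.get(spec.name, ["No findings."]))
        sections.append(f"## {spec.name}\nScope: {spec.scope}\n{body}")
    return "\n\n".join(sections)
```

The exclusions list is what prevents overlap: two agents never claim the same finding, because each one's scope explicitly excludes the others'.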
| Agent | Analyses | Key metrics |
| --- | --- | --- |
| Source Code | Complexity, code smells, AI artifacts | Cyclomatic complexity >10, function lines >20, nesting >3 |
| Documentation | Accuracy, coverage, freshness | Dream vs reality (aspirational language), undocumented APIs |
| Test Quality | Coverage, assertion quality, test patterns | Test-to-code ratio, assertion count per test, edge case coverage |
| Dependencies | Security, licensing, freshness | Known CVEs, outdated packages, licence conflicts |
| Security | Vulnerabilities, auth, data handling | OWASP Top 10, hardcoded secrets, injection vectors |
| Architecture | Patterns, coupling, boundaries | Circular dependencies, God objects, layer violations |
| Performance | Bottlenecks, resource usage, scaling | N+1 queries, unbounded loops, memory leaks |
| Code Hygiene | Style, naming, dead code | Unused imports, inconsistent naming, commented-out code |
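To show what "objective metrics, not subjective opinions" means in practice, here is a minimal sketch of how the function-length and nesting thresholds from the table could be checked mechanically (the thresholds come from the table; the checker itself is an assumption, not part of the framework):

```python
import ast

MAX_FUNC_LINES = 20   # threshold from the Source Code row
MAX_NESTING = 3

def function_metrics(source: str) -> list[dict]:
    """Flag functions whose line count or nesting depth exceeds the thresholds."""
    def depth(node: ast.AST, d: int = 0) -> int:
        nested = (ast.If, ast.For, ast.While, ast.With, ast.Try)
        kids = [depth(c, d + isinstance(c, nested)) for c in ast.iter_child_nodes(node)]
        return max(kids, default=d)

    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            lines = node.end_lineno - node.lineno + 1
            nesting = depth(node)
            if lines > MAX_FUNC_LINES or nesting > MAX_NESTING:
                findings.append({"function": node.name, "lines": lines, "nesting": nesting})
    return findings
```

A finding produced this way carries its own evidence: the function name and the measured value, not an opinion.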

The audit framework enforces strict principles:

  1. Report issues only — No solutions or recommendations (that’s a separate step)
  2. Brutal transparency — No softening of findings
  3. Evidence required — Every subjective finding needs objective justification
  4. Maintain history — Audit results are versioned for trend tracking
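Principles 1 and 3 can be made structural rather than aspirational: model a finding so that it has no field for a fix and cannot be created without evidence. This `Finding` class is a hypothetical illustration, not the framework's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """An audit finding: the issue only, no recommendation, evidence mandatory."""
    agent: str
    issue: str
    evidence: str   # file:line reference or a measured metric value

    def __post_init__(self):
        # Principle 3: every finding needs objective justification
        if not self.evidence.strip():
            raise ValueError("Every finding needs objective evidence")
```

Because the record is frozen and has no "recommendation" field, fixes stay in the separate remediation step by construction.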

The primary audit prompt (codebase-audit-prompt.md) runs all 8 agents:

```shell
git clone https://github.com/ydun-code-library/Ydun_ai_workflow.git
cd Ydun_ai_workflow
# Copy and customise
cp prompts/audit/codebase-audit-prompt.md ./AUDIT.md
# Feed to AI:
# "Run this audit against our codebase. Analyse every file."
```

Output is a structured AUDIT_RESULTS.md with sections per agent.
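If you want to post-process that report (e.g. for trend tracking, principle 4), a per-agent split is straightforward. This assumes each agent's section starts with an `##` heading, which is an assumption about the report layout, not a documented guarantee:

```python
import re

def split_sections(markdown: str) -> dict[str, str]:
    """Split an audit report into {section heading: section body}."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        m = re.match(r"##\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections
```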

A faster, lighter audit (dev_audit_prompt.txt) focuses on:

  • Development friction points
  • Build and test speed
  • Developer experience issues

A third prompt (claude-code-audit-prompt.md) is tailored for Claude Code output and targets:

  • AI-generated code patterns
  • Verbose implementations
  • Redundant elements
  • Console log cleanup
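The leftover-console-log and commented-out-code checks in that list are mechanical enough to sketch. The patterns below are illustrative heuristics for JS/TS sources, not the prompt's actual rules:

```python
import re

# Heuristic patterns for common AI-generated leftovers in JS/TS sources
PATTERNS = {
    "console log": re.compile(r"\bconsole\.(log|debug)\("),
    "commented-out code": re.compile(r"^\s*//.*[;{}]\s*$"),
}

def hygiene_scan(source: str) -> list[tuple[int, str]]:
    """Return (line number, label) pairs for lines matching a leftover pattern."""
    hits = []
    for n, line in enumerate(source.splitlines(), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((n, label))
    return hits
```

The commented-out-code pattern deliberately keys on statement punctuation (`;`, `{`, `}`) so ordinary prose comments are not flagged.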

The audit uses the same core principles from AGENTS.md:

| Principle | Audit application |
| --- | --- |
| KISS | Flag complexity (surface area = bug probability) |
| TDD | Verify test-first evidence via commit order |
| SOC | Identify mixed responsibilities, coupling |
| DRY | Detect duplication with hash similarity |
| CLEAN | Assess readability with objective metrics |
| SOLID | Check all five principles with concrete examples |
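"Detect duplication with hash similarity" (the DRY row) can be sketched as hashing normalised sliding windows of lines and reporting collisions. The window size and normalisation here are assumptions for illustration:

```python
import hashlib

WINDOW = 4  # lines per hashed block (illustrative choice)

def duplicate_blocks(source: str) -> list[tuple[int, int]]:
    """Find repeated WINDOW-line blocks by hashing whitespace-normalised lines.

    Returns (first occurrence, repeat) pairs as 1-based line numbers.
    """
    lines = [ln.strip() for ln in source.splitlines()]
    seen: dict[str, int] = {}
    dupes = []
    for i in range(len(lines) - WINDOW + 1):
        block = "\n".join(lines[i:i + WINDOW])
        if not block.strip():
            continue  # skip all-blank windows
        digest = hashlib.sha256(block.encode()).hexdigest()
        if digest in seen:
            dupes.append((seen[digest] + 1, i + 1))
        else:
            seen[digest] = i
    return dupes
```

Stripping each line before hashing means duplicates survive re-indentation; a stronger normalisation (renaming identifiers) would catch more, at the cost of false positives.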

The complete audit prompts with agent specifications, output formats, and examples are in the prompts/audit/ directory. The main orchestration prompt is 500+ lines with detailed agent boundaries and delegation maps.