# Audit Framework
The audit framework uses a multi-agent orchestration pattern where a lead orchestrator delegates analysis to 8 specialist sub-agents. Each agent analyses the codebase through a specific lens, and the orchestrator compiles findings into a structured report.
## How it works

```text
LEAD_AUDIT_ORCHESTRATOR
├── AGENT_1: Source Code Analyser
├── AGENT_2: Documentation Validator
├── AGENT_3: Test Quality Inspector
├── AGENT_4: Dependency Auditor
├── AGENT_5: Security Scanner
├── AGENT_6: Architecture Reviewer
├── AGENT_7: Performance Analyst
└── AGENT_8: Code Hygiene Checker
```

Each agent has:
- Defined scope — What it analyses
- Exclusions — What it leaves to other agents (prevents overlap)
- Output format — Structured findings with evidence
- Evaluation criteria — Objective metrics, not subjective opinions
## The 8 lenses

| Agent | Analyses | Key metrics |
|---|---|---|
| Source Code | Complexity, code smells, AI artifacts | Cyclomatic complexity >10, function lines >20, nesting >3 |
| Documentation | Accuracy, coverage, freshness | Dream vs reality (aspirational language), undocumented APIs |
| Test Quality | Coverage, assertion quality, test patterns | Test-to-code ratio, assertion count per test, edge case coverage |
| Dependencies | Security, licensing, freshness | Known CVEs, outdated packages, licence conflicts |
| Security | Vulnerabilities, auth, data handling | OWASP Top 10, hardcoded secrets, injection vectors |
| Architecture | Patterns, coupling, boundaries | Circular dependencies, God objects, layer violations |
| Performance | Bottlenecks, resource usage, scaling | N+1 queries, unbounded loops, memory leaks |
| Code Hygiene | Style, naming, dead code | Unused imports, inconsistent naming, commented-out code |
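To make the "objective metrics" idea concrete, here is a simplified sketch of how two of the Source Code lens thresholds (function lines >20, nesting >3) could be checked mechanically. This is an illustration only, not the framework's actual implementation; the function names and the exact flagging logic are hypothetical:

```python
import ast
import textwrap

MAX_FUNC_LINES = 20  # threshold from the Source Code lens
MAX_NESTING = 3

def max_nesting(node: ast.AST, depth: int = 0) -> int:
    """Depth of nested control-flow blocks inside a function body."""
    block_types = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    deepest = depth
    for child in ast.iter_child_nodes(node):
        next_depth = depth + 1 if isinstance(child, block_types) else depth
        deepest = max(deepest, max_nesting(child, next_depth))
    return deepest

def check_source(code: str) -> list[str]:
    """Flag functions that exceed the length or nesting thresholds."""
    findings = []
    tree = ast.parse(textwrap.dedent(code))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                findings.append(f"{node.name}: {length} lines (>{MAX_FUNC_LINES})")
            depth = max_nesting(node)
            if depth > MAX_NESTING:
                findings.append(f"{node.name}: nesting depth {depth} (>{MAX_NESTING})")
    return findings
```

Because each threshold is a number checked against the parse tree, two auditors (or two audit runs) produce the same finding for the same code, which is what separates a metric from an opinion.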
## Critical rules

The audit framework enforces strict principles:
- Report issues only — No solutions or recommendations (that’s a separate step)
- Brutal transparency — No softening of findings
- Evidence required — Every subjective finding needs objective justification
- Maintain history — Audit results are versioned for trend tracking
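The "evidence required" rule can be enforced structurally rather than by convention. As a sketch (the `Finding` type and its fields are hypothetical, not part of the framework), a finding record can simply refuse to exist without evidence:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One audit finding: issues and evidence only, no recommendations."""
    agent: str         # which of the 8 lenses reported it
    severity: str      # e.g. "high" | "medium" | "low"
    description: str   # the issue, stated plainly (brutal transparency)
    evidence: str      # file/line reference or metric backing the claim

    def __post_init__(self) -> None:
        # Reject findings with no objective justification
        if not self.evidence.strip():
            raise ValueError("Every finding needs objective evidence")
```

Keeping the record free of a "recommendation" field mirrors the first rule above: proposing fixes is deliberately a separate step from reporting.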
## Using the audit prompts

### Full codebase audit

The primary audit prompt (`codebase-audit-prompt.md`) runs all 8 agents:
```sh
git clone https://github.com/ydun-code-library/Ydun_ai_workflow.git
cd Ydun_ai_workflow

# Copy and customise
cp prompts/audit/codebase-audit-prompt.md ./AUDIT.md

# Feed to AI:
# "Run this audit against our codebase. Analyse every file."
```

Output is a structured `AUDIT_RESULTS.md` with sections per agent.
### Development velocity audit

A faster, lighter audit (`dev_audit_prompt.txt`) focused on:
- Development friction points
- Build and test speed
- Developer experience issues
### Claude Code-specific audit

Tailored for Claude Code output (`claude-code-audit-prompt.md`):
- AI-generated code patterns
- Verbose implementations
- Redundant elements
- Console log cleanup
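Several of these checks reduce to line-level pattern matching. A minimal sketch of such a scanner follows; the patterns shown are illustrative examples for JavaScript sources, not the prompt's actual rule set:

```python
import re

# Patterns for common AI-generation artifacts (illustrative, not exhaustive)
ARTIFACT_PATTERNS = {
    "console log": re.compile(r"^\s*console\.log\("),
    "commented-out code": re.compile(r"^\s*//\s*(const|let|var|function|return)\b"),
}

def scan_artifacts(source: str) -> list[tuple[int, str]]:
    """Return (line number, artifact kind) for each flagged line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for kind, pattern in ARTIFACT_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, kind))
    return hits
```

Line numbers in the output double as the evidence the critical rules demand: every hit points at a specific line.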
## Evaluation principles

The audit uses the same core principles from `AGENTS.md`:
| Principle | Audit application |
|---|---|
| KISS | Flag complexity (surface area = bug probability) |
| TDD | Verify test-first evidence via commit order |
| SOC | Identify mixed responsibilities, coupling |
| DRY | Detect duplication with hash similarity |
| CLEAN | Assess readability with objective metrics |
| SOLID | Check all five principles with concrete examples |
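The "hash similarity" approach to DRY in the table above can be sketched as fingerprinting normalised blocks of lines and reporting any fingerprint that appears in more than one place. This is a toy version under assumed parameters (a 4-line window, whitespace-only normalisation), not the framework's actual detector:

```python
import hashlib
from collections import defaultdict

WINDOW = 4  # lines per fingerprinted block; small for illustration

def normalise(line: str) -> str:
    """Strip whitespace so formatting differences don't hide duplicates."""
    return "".join(line.split())

def duplicate_blocks(files: dict[str, str]) -> dict[str, list[tuple[str, int]]]:
    """Map block hashes to every (file, start line) where that block appears."""
    index = defaultdict(list)
    for name, text in files.items():
        lines = [normalise(l) for l in text.splitlines()]
        for start in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[start:start + WINDOW])
            digest = hashlib.sha256(chunk.encode()).hexdigest()
            index[digest].append((name, start + 1))
    # Keep only fingerprints seen in more than one place
    return {h: locs for h, locs in index.items() if len(locs) > 1}
```

Each surviving entry lists concrete file-and-line locations, so a DRY finding again arrives with its evidence attached.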
## Full reference

The complete audit prompts, with agent specifications, output formats, and examples, live in the `prompts/audit/` directory. The main orchestration prompt runs to 500+ lines, with detailed agent boundaries and delegation maps.