CAP Methodology

The CAP (Composable Agentic Prompt) methodology solves the prompt maintenance problem: as “God Prompts” grow to thousands of lines, they become impossible to maintain. CAP applies software engineering principles — Separation of Concerns and Test-Driven Development — to prompt engineering.

Develop in modules, deploy as a monolith.

Instead of one 5000-line file that everyone’s afraid to touch:

  • A “security checking” component (200 lines) owned by the security team
  • A “performance analysis” component (150 lines) owned by the platform team
  • A “documentation checker” component (100 lines) owned by the docs team
  • A simple script that combines them into the final prompt
| Challenge | Impact |
| --- | --- |
| Low maintainability | 5000-line prompts become cognitive overload |
| Collaboration friction | Version control conflicts when multiple people edit |
| Lack of reusability | Core logic duplicated across projects |
| Brittle testing | Any change requires full end-to-end testing |
| Hidden dependencies | Phase interactions aren't explicit |

Break the prompt into components, each handling one analytical lens:

```
components/
├── security-lens.md          # Security analysis rules
├── performance-lens.md       # Performance analysis rules
├── architecture-lens.md      # Architecture review rules
├── testing-lens.md           # Test quality rules
├── shared/
│   ├── severity-scale.md     # Shared severity definitions
│   └── output-format.md      # Shared output structure
└── compiler/
    └── build.py              # Combines components into final prompt
```

Each component follows a standard structure:

```markdown
---
component_id: security-lens
version: 2.1.0
depends_on: [shared/severity-scale, shared/output-format]
---

## ROLE
You are SECURITY_ANALYST...

## SCOPE
- SQL injection detection
- XSS vulnerability identification
- Authentication bypass patterns

## EXCLUSIONS
- Performance concerns (handled by performance-lens)
- Architecture patterns (handled by architecture-lens)

## OUTPUT
[Structured output format]
```
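Because the frontmatter is machine-readable, the compiler can pull a component's metadata and dependency list without rendering the prompt body. A minimal sketch (the field names come from the example above; the bracket-list parsing is an assumed convenience, not part of CAP):

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Extract key/value metadata between the leading '---' markers."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    meta = {}
    if match:
        for line in match.group(1).splitlines():
            key, _, value = line.partition(":")
            value = value.strip()
            if value.startswith("[") and value.endswith("]"):
                # Flow-style list, e.g. [a, b] -> ["a", "b"]
                value = [v.strip() for v in value[1:-1].split(",") if v.strip()]
            meta[key.strip()] = value
    return meta

component = """---
component_id: security-lens
version: 2.1.0
depends_on: [shared/severity-scale, shared/output-format]
---
## ROLE
You are SECURITY_ANALYST..."""

meta = parse_frontmatter(component)
print(meta["component_id"])  # security-lens
print(meta["depends_on"])    # ['shared/severity-scale', 'shared/output-format']
```

The compiler only needs `component_id`, `version`, and `depends_on` to build its dependency graph; the prompt body below the second `---` is passed through untouched.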
Component-level tests verify each lens in isolation, checking both what it should catch and what it should stay out of:

```python
def test_security_lens_detects_sql_injection():
    code = "query = 'SELECT * FROM users WHERE id=' + userId"
    result = run_component("security-lens", code)
    assert any(f["type"] == "sql_injection" for f in result["findings"])

def test_security_lens_ignores_performance():
    slow_code = "for i in range(1000000): data.append(fetch(i))"
    result = run_component("security-lens", slow_code)
    assert not any(f["type"] == "performance" for f in result["findings"])
```
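The `run_component` helper used in these tests renders a single component's prompt, runs it against the code sample, and parses the structured output. For fast, deterministic CI runs, it can be swapped for a stub; a hypothetical keyword-based fake (the detection heuristic below is illustrative only, not CAP's real analysis):

```python
def run_component(component_id: str, code: str) -> dict:
    """Deterministic stand-in for the model call, for fast CI runs.

    A real implementation would compile the component prompt, call the
    model, and parse its structured output; this fake pattern-matches.
    """
    findings = []
    if component_id == "security-lens":
        # Naive heuristic: string concatenation inside a SQL statement.
        if "SELECT" in code and "+" in code:
            findings.append({"type": "sql_injection", "severity": "high"})
    return {"findings": findings}

result = run_component(
    "security-lens",
    "query = 'SELECT * FROM users WHERE id=' + userId",
)
print(result["findings"][0]["type"])  # sql_injection
```

Keeping the same `run_component` signature for both the stub and the real model call means the component tests run unchanged in either mode.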
The compiler script (`build.py`) assembles the components into the deployable monolith:

```python
components = load_components("components/")
resolved = resolve_dependencies(components)
final_prompt = compile_prompt(resolved)
write_output("dist/audit-prompt.md", final_prompt)
```

CAP uses a testing pyramid:

```
        /\
       /  \       E2E (5%) — Full prompt integration
      /----\
     /      \     Integration (15%) — Multi-component
    /--------\
   /          \   Component (30%) — Single lens
  /------------\
 /              \ Unit (50%) — Schema, parsing
/________________\
```
| Level | Tests | Example |
| --- | --- | --- |
| Unit | Schema validation, output parsing | "Does the output match the expected JSON schema?" |
| Component | Single lens with known inputs | "Does security lens detect XSS in this code?" |
| Integration | Multiple lenses together | "Do security and architecture findings not overlap?" |
| E2E | Full compiled prompt | "Does the final audit produce a valid report?" |
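At the unit level, the cheapest check is whether a component's output parses at all and carries the expected fields. A minimal sketch without third-party schema libraries (the `findings`/`type`/`severity` field names mirror the structures used in the component tests above):

```python
import json

def validate_findings(raw: str) -> list[dict]:
    """Parse model output and enforce the minimal findings schema."""
    data = json.loads(raw)
    if not isinstance(data.get("findings"), list):
        raise ValueError("output must contain a 'findings' list")
    for finding in data["findings"]:
        missing = {"type", "severity"} - finding.keys()
        if missing:
            raise ValueError(f"finding missing fields: {missing}")
    return data["findings"]

ok = '{"findings": [{"type": "sql_injection", "severity": "high"}]}'
print(len(validate_findings(ok)))  # 1
```

Because these checks are pure parsing, they run in milliseconds with no model calls, which is why they can make up half the pyramid.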
| Benefit | How |
| --- | --- |
| Independent development | Teams own their components |
| Isolated testing | Change one lens, test one lens |
| Version control | Each component has its own version |
| Reusability | Share severity scales across projects |
| Auditability | Clear ownership and change history |

The complete CAP methodology (1000+ lines with examples, dependency resolution patterns, and CI/CD integration) is at cap-workflow-methodology.md.

The companion Prompt Testing guide covers implementation in Python, TypeScript, and Rust.