# CAP Methodology
The CAP (Composable Agentic Prompt) methodology solves the prompt maintenance problem: as “God Prompts” grow to thousands of lines, they become impossible to maintain. CAP applies software engineering principles — Separation of Concerns and Test-Driven Development — to prompt engineering.
## The core insight

**Develop in modules, deploy as a monolith.**
Instead of one 5000-line file that everyone’s afraid to touch:
- A “security checking” component (200 lines) owned by the security team
- A “performance analysis” component (150 lines) owned by the platform team
- A “documentation checker” component (100 lines) owned by the docs team
- A simple script that combines them into the final prompt
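A minimal sketch of that combining script, assuming the components are plain Markdown files in a `components/` directory (the function name and layout here are illustrative, not part of the methodology's reference implementation):

```python
from pathlib import Path

def build_prompt(components_dir: str) -> str:
    """Concatenate every Markdown component into one deployable prompt.

    Hypothetical minimal combiner: a real build would also honour the
    depends_on ordering declared in each component's front matter.
    """
    parts = [p.read_text().strip()
             for p in sorted(Path(components_dir).glob("*.md"))]
    return "\n\n---\n\n".join(parts)
```

Even this naive version delivers the key property: each team edits its own small file, and only the build output is a monolith.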
## The monolith problem

| Challenge | Impact |
|---|---|
| Low maintainability | A 5000-line prompt exceeds what any single maintainer can hold in their head |
| Collaboration friction | Version control conflicts when multiple people edit |
| Lack of reusability | Core logic duplicated across projects |
| Brittle testing | Any change requires full end-to-end testing |
| Hidden dependencies | Phase interactions aren’t explicit |
## How CAP works

### 1. Separation of Concerns

Break the prompt into components, each handling one analytical lens:
```text
components/
├── security-lens.md        # Security analysis rules
├── performance-lens.md     # Performance analysis rules
├── architecture-lens.md    # Architecture review rules
├── testing-lens.md         # Test quality rules
├── shared/
│   ├── severity-scale.md   # Shared severity definitions
│   └── output-format.md    # Shared output structure
└── compiler/
    └── build.py            # Combines components into final prompt
```

### 2. Component structure

Each component follows a standard structure:
```markdown
---
component_id: security-lens
version: 2.1.0
depends_on: [shared/severity-scale, shared/output-format]
---

## ROLE
You are SECURITY_ANALYST...

## SCOPE
- SQL injection detection
- XSS vulnerability identification
- Authentication bypass patterns

## EXCLUSIONS
- Performance concerns (handled by performance-lens)
- Architecture patterns (handled by architecture-lens)

## OUTPUT
[Structured output format]
```

### 3. Test each component independently
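The compiler needs to read that front-matter header. One way to parse it without pulling in a YAML dependency, assuming only the simple `key: value` and `[a, b]` list fields shown above (`parse_component` is an illustrative name, not part of the reference build script):

```python
import re

def parse_component(text: str) -> dict:
    """Split a component file into front-matter metadata and body.

    Minimal sketch: handles only the flat key/value and bracketed-list
    fields shown in the example header. A real loader would use a
    YAML library.
    """
    match = re.match(r"^---\n(.*?)\n---\n?(.*)$", text, re.DOTALL)
    if match is None:
        raise ValueError("component is missing its front-matter block")
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("[") and value.endswith("]"):
            value = [v.strip() for v in value[1:-1].split(",") if v.strip()]
        meta[key.strip()] = value
    return {"meta": meta, "body": match.group(2)}
```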
```python
def test_security_lens_detects_sql_injection():
    code = "query = 'SELECT * FROM users WHERE id=' + userId"
    result = run_component("security-lens", code)
    assert any(f["type"] == "sql_injection" for f in result["findings"])

def test_security_lens_ignores_performance():
    slow_code = "for i in range(1000000): data.append(fetch(i))"
    result = run_component("security-lens", slow_code)
    assert not any(f["type"] == "performance" for f in result["findings"])
```

### 4. Compile into final prompt
```python
components = load_components("components/")
resolved = resolve_dependencies(components)
final_prompt = compile_prompt(resolved)
write_output("dist/audit-prompt.md", final_prompt)
```

## Testing strategy

CAP uses a testing pyramid:

```text
        /\
       /  \        E2E (5%) — Full prompt integration
      /----\
     /      \      Integration (15%) — Multi-component
    /--------\
   /          \    Component (30%) — Single lens
  /------------\
 /              \  Unit (50%) — Schema, parsing
/________________\
```

| Level | Tests | Example |
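The `resolve_dependencies` step can be sketched as a depth-first topological sort, so every shared component (such as `shared/severity-scale`) lands in the compiled prompt before the lenses that depend on it. This is one plausible implementation, not the reference one; the input shape assumes each component carries the `depends_on` list from its front matter:

```python
def resolve_dependencies(components: dict) -> list:
    """Return component names in dependency order (dependencies first).

    Illustrative depth-first topological sort; raises on cycles so a
    bad depends_on edge fails the build instead of producing a prompt
    with rules in the wrong order.
    """
    ordered, visiting, visited = [], set(), set()

    def visit(name: str) -> None:
        if name in visited:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle at {name}")
        visiting.add(name)
        for dep in components[name].get("depends_on", []):
            visit(dep)
        visiting.discard(name)
        visited.add(name)
        ordered.append(name)

    for name in components:
        visit(name)
    return ordered
```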
|---|---|---|
| Unit | Schema validation, output parsing | “Does the output match the expected JSON schema?” |
| Component | Single lens with known inputs | “Does security lens detect XSS in this code?” |
| Integration | Multiple lenses together | “Do security and architecture findings not overlap?” |
| E2E | Full compiled prompt | “Does the final audit produce a valid report?” |
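The unit level at the base of the pyramid needs no model call at all: it just checks that a lens's raw output parses and matches the expected shape. A sketch, assuming the illustrative findings schema used in the component tests above (`validate_output` and the field names are assumptions, not a published schema):

```python
import json

def validate_output(raw: str) -> list:
    """Parse a lens's raw JSON output and check its basic shape.

    Cheap, deterministic unit check: no model involved, so it can run
    on every commit.
    """
    data = json.loads(raw)
    findings = data.get("findings")
    assert isinstance(findings, list), "output must contain a findings list"
    for f in findings:
        assert {"type", "severity"} <= f.keys(), f"incomplete finding: {f}"
    return findings
```

Because these checks are instant and deterministic, it is cheap to make them 50% of the suite, reserving slow model-in-the-loop runs for the thin E2E tier.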
## Benefits

| Benefit | How |
|---|---|
| Independent development | Teams own their components |
| Isolated testing | Change one lens, test one lens |
| Version control | Each component has its own version |
| Reusability | Share severity scales across projects |
| Auditability | Clear ownership and change history |
## Full reference

The complete CAP methodology (1000+ lines with examples, dependency resolution patterns, and CI/CD integration) is at cap-workflow-methodology.md.
The companion Prompt Testing guide covers implementation in Python, TypeScript, and Rust.