adr-management
When you need to record why a decision was made — creates and manages Architecture Decision Records.
When consumers need to know how to call your API — endpoint docs, OpenAPI specs, request/response examples.
When the API contract is ready and needs code behind it — implements handlers/controllers with validation.
When the same expensive query runs hundreds of times — implements caching patterns with proper invalidation.
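A caching pattern like this can be sketched as a small TTL decorator with an explicit invalidation hook. This is a hypothetical minimal sketch (the `ttl_cache` name and the `invalidate` hook are illustrative, not from any particular library):

```python
import time
import functools

def ttl_cache(ttl_seconds):
    """Cache results for ttl_seconds; stale entries are recomputed."""
    def decorator(fn):
        store = {}  # key -> (value, expiry timestamp)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[1] > now:
                return hit[0]          # fresh cache hit
            value = fn(*args)
            store[args] = (value, now + ttl_seconds)
            return value

        # Explicit invalidation hook for writes that change the underlying data.
        wrapper.invalidate = store.clear
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=60)
def expensive_query(user_id):
    global calls
    calls += 1
    return {"user_id": user_id, "plan": "pro"}

expensive_query(1)
expensive_query(1)            # served from cache, no recompute
expensive_query.invalidate()  # e.g. after an UPDATE touching this data
expensive_query(1)            # recomputed
```

Proper invalidation is the hard half: the write path must call `invalidate` (or a finer-grained key eviction) whenever the cached data changes.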
When users need to know what changed and why — conventional commits parsing, semantic versioning, release notes.
When you need CI that catches failures before they hit main — generates GitHub Actions/GitLab CI with artifact gating.
When the codebase has accumulated cruft — removes noise, enforces formatting, safely renames identifiers.
When you have a spec and need idiomatic code — generates from specs with language convention enforcement.
When your API is fast but your database calls aren't — repository pattern, query optimization, N+1 prevention.
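The N+1 shape versus the batched shape can be illustrated with a toy in-memory data layer (all names here are hypothetical; real code would issue SQL or ORM calls):

```python
# Hypothetical data layer: posts and their authors.
POSTS = [{"id": i, "author_id": i % 2} for i in range(4)]
AUTHORS = {0: "ada", 1: "linus"}
query_count = 0

def fetch_author(author_id):
    global query_count
    query_count += 1              # one round trip per call
    return AUTHORS[author_id]

def fetch_authors_bulk(author_ids):
    global query_count
    query_count += 1              # one round trip for the whole set
    return {a: AUTHORS[a] for a in author_ids}

# N+1 shape: one query per post (4 queries here).
naive = [fetch_author(p["author_id"]) for p in POSTS]

# Batched shape: collect ids, fetch once, join in memory (1 query).
ids = {p["author_id"] for p in POSTS}
by_id = fetch_authors_bulk(ids)
batched = [by_id[p["author_id"]] for p in POSTS]
```

Same results, five round trips versus one for the author lookups; at hundreds of rows the difference dominates request latency.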
When queries are the bottleneck — index analysis, query plan optimization, connection pooling.
When future-you needs to know why past-you chose this — ADR authoring with context, decision, consequences.
When dependencies drift and Dependabot floods you with PRs — automated updates with semver and lockfile discipline.
When your Docker image is bloated and slow to build — generates optimized multi-stage Dockerfiles.
When every module handles errors differently — standardizes error types, logging, and response formats.
When errors cascade instead of being contained — exception hierarchies, retry strategies, graceful degradation.
Master workflow for shipping a feature end-to-end. Orchestrates prd -> task-decomposition -> tdd-workflow -> code-review -> release-pipeline -> auditor. Composes the release-pipeline master as the delivery sub-workflow.
When the team needs git discipline without git fights — branching strategies, commit conventions, conflict resolution.
When code needs to explain itself to future readers — JSDoc/docstrings, comment quality, self-documenting patterns.
When unit tests pass but the system doesn't work — tests across component boundaries with Docker containers.
When tribal knowledge walks out the door with departing teammates — wikis, runbooks, onboarding guides.
When you need to know something is broken before users tell you — SLIs/SLOs, alerting, distributed tracing.
When you suspect tests pass but don't actually catch bugs — verifies test quality by injecting faults.
When your API docs are stale or missing — generates Swagger/OpenAPI YAML from code.
Pre-compute once, share via context object to all pipeline stages.
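One way to sketch that context-object pattern, assuming an immutable dataclass shared read-only across stages (the stage names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineContext:
    # Expensive lookups computed once, read by every stage.
    repo_root: str
    changed_files: tuple

def stage_lint(ctx):
    return f"lint:{len(ctx.changed_files)}"

def stage_test(ctx):
    return f"test:{len(ctx.changed_files)}"

ctx = PipelineContext(repo_root="/repo", changed_files=("a.py", "b.py"))
results = [stage(ctx) for stage in (stage_lint, stage_test)]
```

Freezing the dataclass keeps stages from mutating shared state, so order of execution cannot change what later stages see.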
When PRs linger or get rubber-stamped — templates, review assignments, merge criteria that work.
When requirements exist but need structure — selects the right PRD format (lean, full, working-backwards, hypothesis) and produces a living document with versioning and lifecycle.
Fast approximate results (Phase 1) then slow enriched results (Phase 2).
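A two-phase shape like this fits naturally as a generator: the caller can render Phase 1 immediately and stream in Phase 2 when it lands. A minimal sketch with a pretend index and pretend-expensive scoring:

```python
INDEX = ["retry", "backoff", "cache", "batch"]

def search(query):
    """Yield cheap approximate hits first, enriched results later."""
    # Phase 1: fast substring scan, good enough to show immediately.
    approx = [w for w in INDEX if query in w]
    yield ("phase1", approx)
    # Phase 2: slower enrichment pass over the Phase 1 candidates.
    enriched = [{"term": w, "score": len(w)} for w in approx]
    yield ("phase2", enriched)

phases = list(search("ba"))
```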
When a new contributor clones the repo and has no idea where to start — creates READMEs with usage and install guides.
When code needs restructuring but tests must stay green — safe refactoring with test-first verification.
When you need to restructure code without breaking behavior — complex refactors with strict test verification.
Master workflow for refactoring at four named depth levels (cosmetic, micro, meso, architectural) that compose as a ladder. Always runs chesterton-fence + style-conformance up front, closes with characterisation tests + minimal-diff + auditor. No skipped rungs.
When release day shouldn't mean writing the changelog by hand — parses git logs for changelogs and version bumps.
Adaptive cleanup framework — start simple, discover value, pivot toward the greater good.
Circuit breaker, retry with backoff, bulkhead isolation, fallback chains, timeouts.
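Of those patterns, retry with exponential backoff is the smallest to sketch. A hedged minimal version (the `retry` helper and `flaky` workload are illustrative, not from a library):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

failures = {"left": 2}  # simulate two transient faults, then success

def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky)  # succeeds on the third attempt
```

A circuit breaker adds state on top of this: after enough consecutive failures it fails fast without calling `fn` at all, then probes again after a cooldown.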
Single schema powers defaults, validation, storage, and UI generation.
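A schema-as-single-source-of-truth can be sketched as one dict that drives both defaults and validation (storage and UI generation would read the same dict; the field names here are hypothetical):

```python
SCHEMA = {
    "retries": {"type": int, "default": 3},
    "timeout": {"type": float, "default": 5.0},
}

def with_defaults(config):
    """Fill missing keys from schema defaults and type-check the rest."""
    out = {}
    for key, spec in SCHEMA.items():
        value = config.get(key, spec["default"])
        if not isinstance(value, spec["type"]):
            raise TypeError(f"{key}: expected {spec['type'].__name__}")
        out[key] = value
    return out

cfg = with_defaults({"retries": 5})
# timeout falls back to the schema default
```

Because every consumer reads `SCHEMA`, adding a field in one place updates defaults, validation, and any generated form at once.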
Composable evaluation — independent micro-scorers with shared context and explicit weights.
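The micro-scorer composition can be sketched as independent functions over a shared context dict, combined by explicit weights (scorer names and weights here are illustrative assumptions):

```python
def scorer_length(ctx):
    """Reward longer answers, capped at 1.0."""
    return min(len(ctx["answer"]) / 100, 1.0)

def scorer_keywords(ctx):
    """Fraction of expected keywords present in the answer."""
    hits = sum(k in ctx["answer"] for k in ctx["keywords"])
    return hits / max(len(ctx["keywords"]), 1)

# Explicit weights make the trade-off between scorers auditable.
SCORERS = [(scorer_length, 0.3), (scorer_keywords, 0.7)]

def evaluate(ctx):
    total = sum(w for _, w in SCORERS)
    return sum(fn(ctx) * w for fn, w in SCORERS) / total

ctx = {"answer": "retry with exponential backoff", "keywords": ["retry", "backoff"]}
score = evaluate(ctx)
```

Each scorer stays independently testable; swapping one out or re-weighting it never touches the others.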
When you want tests to drive the design — Red-Green-Refactor cycle with coverage targets.
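The Red-Green half of that cycle can be shown in a few lines (the `slugify` example is hypothetical):

```python
# Red: write the failing test first -- it names the behavior you want.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Green: the minimal implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify()  # now passes; Refactor comes next, with the test as a safety net
```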
When a function needs proof it works — generates unit tests for Jest, JUnit, Go Test, PyTest.