aparigraha-task
Master workflow walking the Aparigraha gates end-to-end — onboarding, dependency inventory, style detection, reuse-first authoring, surgical diff, audit. Orchestrates the 25 pragmatism-category skills. No implementation logic.
33 skills tagged advisory · risk axis.
First-touch protocol for a new project — read README/AGENTS.md/ADRs, find build/test/CI commands, hot zones, entry points, and test layout; emit an onboarding cheat-sheet other Aparigraha skills consume.
Before deleting or refactoring code that looks unused or weird, reconstruct why it exists from history, tests, comments, and the call graph; produce a memo plus an edge-case checklist that gates the change.
When a PR needs more than a thumbs-up — review checklists, defect categories, severity classification.
The laws of the agent's universe — Satya (truth), Dharma (safety), Ahimsa (non-destruction), Pragya (wisdom).
When you're not sure what depends on what — builds dependency graphs, detects cycles, audits for staleness and CVEs.
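For the cycle-detection piece, a minimal sketch using Python's stdlib graphlib, assuming dependencies are already flattened into a name-to-dependencies mapping; the package names and graph shape are illustrative, not this skill's interface:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical dependency map: each package lists what it depends on.
deps = {
    "app": {"utils", "db"},
    "db": {"utils"},
    "utils": {"app"},  # deliberate cycle: app -> utils -> app
}

try:
    order = list(TopologicalSorter(deps).static_order())
    print("build order:", order)
except CycleError as err:
    # err.args[1] holds the nodes participating in the cycle.
    print("dependency cycle detected:", err.args[1])
```

graphlib raises CycleError eagerly, so once the graph is built the cycle check is essentially free.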
Mine the project's declared dependencies and produce a per-capability inventory of utilities the team already imports — advisory, never auto-rewrites.
When the team uses different words for the same thing — creates a domain glossary, entity map, and Protected Terms.
When "how long will this take?" needs a real answer — PERT estimation, relative sizing, confidence intervals.
Finds the right skill even when you describe the problem instead of naming the tool — graduated fuzzy routing.
When the answer is buried in 10,000 log lines — parsing, pattern recognition, event correlation, anomaly detection.
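As a sketch of the pattern-recognition step (log format, regexes, and threshold are assumptions, not what this skill prescribes), one common move is to collapse lines into templates and surface the rare ones:

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse volatile tokens (numbers, hex ids) so similar lines group together."""
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)
    return re.sub(r"\d+", "<N>", line)

def rare_events(lines, threshold=3):
    counts = Counter(template(l) for l in lines)
    # Templates seen fewer than `threshold` times are anomaly candidates.
    return [t for t, n in counts.items() if n < threshold]
```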
Smallest correct change that solves the problem. No drive-by formatting or opportunistic refactors; diff-size caps, reversibility checks, and commit splitting when concerns are tangled.
Mid-task skill injection — detects domain shifts and suggests relevant skills.
When two heads are better than one — driver/navigator roles, mob programming, knowledge transfer protocols.
When the app is slow but you don't know where — bottleneck identification, complexity analysis, caching review.
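A hedged sketch of the bottleneck-identification step using Python's stdlib profiler; the helper name and the choice to sort by cumulative time are illustrative:

```python
import cProfile
import pstats

def profile_hotspots(fn, *args, top=10):
    """Run fn under cProfile and print the top cumulative-time hotspots."""
    with cProfile.Profile() as prof:
        fn(*args)
    pstats.Stats(prof).sort_stats("cumulative").print_stats(top)
```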
Master workflow for thorough PR review. Orchestrates code-review + security-review + performance-review in parallel, aggregates a verdict, applies pr-management, and audits. No implementation logic.
Aparigraha — non-accumulation. Check before create, conform within scope, stay surgical, and validate edge cases before trusting any reuse or improvement of existing code.
Direction-seeking protocol — seek corrections, present options, let humans steer.
When something is slow and you need proof of where — analyzes logs/traces to find bottlenecks and hotspots.
When scope is fuzzy — interviews the user for goals, constraints, and success metrics before coding starts.
Monitor constraints (memory, CPU, time, rate limits) and adapt behavior gracefully.
Before writing any new utility, scan the codebase and declared dependencies for an equivalent. Reuse only after validating fit and edge-case behavior.
Before you ship something that keeps you up at night — identifies risks, scores probability/impact, plans mitigations.
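A sketch of the probability/impact scoring idea, assuming 1-to-5 scales and illustrative band cutoffs:

```python
def risk_score(probability: int, impact: int) -> tuple[int, str]:
    """Score = probability x impact on 1-5 scales; the band drives mitigation urgency."""
    score = probability * impact
    if score >= 15:
        band = "mitigate before shipping"
    elif score >= 8:
        band = "mitigate or monitor with an owner"
    else:
        band = "accept and document"
    return score, band

print(risk_score(4, 5))  # (20, 'mitigate before shipping')
```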
When the bug is a symptom, not the cause — 5-whys, fault trees, git bisection, symptom-to-cause mapping.
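The bisection idea generalizes beyond git; a minimal sketch assuming an ordered list of revisions and a hypothetical is_bad predicate:

```python
def bisect_first_bad(revisions, is_bad):
    """Find the first bad revision (classic git-bisect logic).

    Precondition: revisions[0] is good and revisions[-1] is bad.
    """
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid  # first bad revision is at mid or earlier
        else:
            lo = mid + 1  # first bad revision is after mid
    return revisions[lo]
```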
Forces explicit reasoning before acting — declares task, complexity, risk, and plan so nothing starts half-thought.
Detect the project's existing conventions (naming, formatting, errors, logging, tests) and produce a house-style profile downstream skills consult — awareness, not enforcement.
When you need to see the big picture before building — component diagrams, data flow, trade-off analysis.
When a task feels too big to start — breaks it into dependency-ordered subtasks with T-shirt sizing.
When tech debt is everywhere but you can only fix some — categorization, impact scoring, payoff prioritization.
When you need a testing philosophy, not just tests — defines the pyramid, coverage targets, and conventions.
When you're unsure what to test next — coverage analysis, test type selection, boundary condition identification.
Resource-aware selection of model tier, tools, and delegation per cognitive mode.
When you need metrics without false-failing the build — separates collection (Pass 1) from gating (Pass 2).
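A minimal sketch of the two-pass split, with metric names, file layout, and the threshold all assumed for illustration; Pass 1 only records, and Pass 2 is the only place a failure can originate:

```python
import json

def collect(metrics: dict, path: str = "metrics.json") -> None:
    """Pass 1: persist whatever was measured. Never fails the build."""
    with open(path, "w") as f:
        json.dump(metrics, f)

def gate(path: str = "metrics.json", max_p95_ms: float = 250.0) -> int:
    """Pass 2: read the recorded metrics and return a CI exit code."""
    with open(path) as f:
        metrics = json.load(f)
    p95 = metrics.get("p95_latency_ms")
    if p95 is None:
        return 0  # missing metric: a collection gap, not a build failure
    return 1 if p95 > max_p95_ms else 0
```

Keeping the passes separate means a flaky collector degrades to a data gap instead of a red build.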