mirror of
https://github.com/ksyasuda/dotfiles.git
synced 2026-03-20 06:11:27 -07:00

---
name: dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---

# Dispatching Parallel Agents

## Overview

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.

## When to Use

```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```

**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from the others
- No shared state between investigations

**Don't use when:**
- Failures are related (fixing one might fix others)
- You need to understand the full system state
- Agents would interfere with each other

## The Pattern

### 1. Identify Independent Domains

Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality

Each domain is independent: fixing tool approval doesn't affect abort tests.

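The grouping step can be sketched as bucketing failure records by test file, with each bucket becoming one agent's scope (a minimal sketch; the `failures` records are hypothetical):

```python
from collections import defaultdict

# Hypothetical failure records: (test_file, test_name, error)
failures = [
    ("agent-tool-abort.test.ts", "should abort tool", "timeout"),
    ("batch-completion-behavior.test.ts", "completes batch", "no events"),
    ("agent-tool-abort.test.ts", "tracks pendingToolCount", "expected 3, got 0"),
]

# One problem domain per test file; each bucket becomes one agent's scope
domains = defaultdict(list)
for test_file, test_name, error in failures:
    domains[test_file].append((test_name, error))

for test_file, items in sorted(domains.items()):
    print(f"{test_file}: {len(items)} failure(s)")
```
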
### 2. Create Focused Agent Tasks

Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed

### 3. Dispatch in Parallel

```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```

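The `Task(...)` calls above are Claude Code pseudocode. Conceptually, the dispatch behaves like launching independent coroutines and gathering their results (a Python analogy only; `fix_file` is a hypothetical stand-in for one agent):

```python
import asyncio

async def fix_file(test_file: str) -> str:
    # Stand-in for one agent investigating one test file
    await asyncio.sleep(0)  # the real work happens concurrently here
    return f"{test_file}: fixed"

async def dispatch() -> list[str]:
    files = [
        "agent-tool-abort.test.ts",
        "batch-completion-behavior.test.ts",
        "tool-approval-race-conditions.test.ts",
    ]
    # All three "agents" run concurrently; gather preserves input order
    return await asyncio.gather(*(fix_file(f) for f in files))

results = asyncio.run(dispatch())
```
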
### 4. Review and Integrate

When agents return:
- Read each summary
- Verify fixes don't conflict
- Run the full test suite
- Integrate all changes

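Checking that fixes don't conflict can start with a set intersection over each agent's changed files (hypothetical lists; in a real repo they might come from `git diff --name-only` per branch):

```python
from itertools import combinations

# Hypothetical changed-file sets reported by each agent
changed = {
    "agent-1": {"src/agents/abort.ts", "src/agents/abort.test.ts"},
    "agent-2": {"src/agents/batch.ts", "src/agents/batch.test.ts"},
    "agent-3": {"src/agents/approval.test.ts"},
}

# Any pair of agents that touched the same file is a potential conflict
conflicts = [
    (a, b, sorted(shared))
    for (a, files_a), (b, files_b) in combinations(changed.items(), 2)
    if (shared := files_a & files_b)
]
# An empty list means the fixes touch disjoint files and should integrate cleanly
```
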
## Agent Prompt Structure

Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?

```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues. Your task:

1. Read the test file and understand what each test verifies
2. Identify the root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in the abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.
```

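A focused prompt like the one above can be assembled mechanically from failure records (a sketch; `build_prompt` and its fields are hypothetical):

```python
def build_prompt(test_file: str, failures: list[tuple[str, str]]) -> str:
    """Assemble a focused agent prompt: scope, failures, constraints, output."""
    lines = [f"Fix the {len(failures)} failing tests in {test_file}:", ""]
    # One numbered line per failure: name plus the observed symptom
    lines += [
        f'{i}. "{name}" - {detail}'
        for i, (name, detail) in enumerate(failures, 1)
    ]
    lines += [
        "",
        "Do NOT just increase timeouts - find the real issue.",
        "Return: Summary of what you found and what you fixed.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    "src/agents/agent-tool-abort.test.ts",
    [("should abort tool with partial output capture",
      "expects 'interrupted at' in message")],
)
```
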
## Common Mistakes

**❌ Too broad:** "Fix all the tests" - agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope

**❌ No context:** "Fix the race condition" - agent doesn't know where
**✅ Context:** Paste the error messages and test names

**❌ No constraints:** Agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"

**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return summary of root cause and changes"

## When NOT to Use

**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing the entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing same files, using same resources)

## Real Example from Session

**Scenario:** 6 test failures across 3 files after major refactoring

**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)

**Decision:** Independent domains - abort logic is separate from batch completion, which is separate from race conditions

**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```

**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed event structure bug (threadId in wrong place)
- Agent 3: Added a wait for async tool execution to complete

**Integration:** All fixes independent, no conflicts, full suite green

**Time saved:** 3 problems solved in parallel instead of sequentially

## Key Benefits

1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has a narrow scope and less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in the time of 1

## Verification

After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit the same code?
3. **Run the full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors

## Real-World Impact

From a debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes

---
name: executing-plans
description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
---

# Executing Plans

## Overview

Load the plan, review it critically, execute tasks in batches, and report for review between batches.

**Core principle:** Batch execution with checkpoints for architect review.

**Announce at start:** "I'm using the executing-plans skill to implement this plan."

## The Process

### Step 1: Load and Review Plan
1. Read the plan file
2. Review it critically - identify any questions or concerns about the plan
3. If concerns: raise them with your human partner before starting
4. If no concerns: create a TodoWrite list and proceed

### Step 2: Execute Batch
**Default: First 3 tasks**

For each task:
1. Mark it as in_progress
2. Follow each step exactly (the plan has bite-sized steps)
3. Run verifications as specified
4. Mark it as completed

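The default batch of three can be sketched as simple list slicing (assuming tasks are plain strings):

```python
def batches(tasks: list[str], size: int = 3):
    """Yield execution batches; the default size of 3 matches this skill."""
    for i in range(0, len(tasks), size):
        yield tasks[i:i + size]

plan_tasks = ["Task 1", "Task 2", "Task 3", "Task 4", "Task 5"]
# Execute the first batch, report for review, then continue with the rest
first, second = list(batches(plan_tasks))
```
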
### Step 3: Report
When the batch is complete:
- Show what was implemented
- Show verification output
- Say: "Ready for feedback."

### Step 4: Continue
Based on feedback:
- Apply changes if needed
- Execute the next batch
- Repeat until complete

### Step 5: Complete Development

After all tasks are complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, and execute the choice

## When to Stop and Ask for Help

**STOP executing immediately when:**
- You hit a blocker mid-batch (missing dependency, failing test, unclear instruction)
- The plan has critical gaps that prevent starting
- You don't understand an instruction
- Verification fails repeatedly

**Ask for clarification rather than guessing.**

## When to Revisit Earlier Steps

**Return to Review (Step 1) when:**
- Your partner updates the plan based on your feedback
- The fundamental approach needs rethinking

**Don't force through blockers** - stop and ask.

## Remember
- Review the plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when the plan says to
- Between batches: just report and wait
- Stop when blocked; don't guess
- Never start implementation on the main/master branch without explicit user consent

## Integration

**Required workflow skills:**
- **superpowers:using-git-worktrees** - REQUIRED: Set up an isolated workspace before starting
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:finishing-a-development-branch** - Complete development after all tasks

---
name: subagent-driven-development
description: Use when executing implementation plans with independent tasks in the current session
---

# Subagent-Driven Development

Execute the plan by dispatching a fresh subagent per task, with a two-stage review after each: spec compliance review first, then code quality review.

**Core principle:** Fresh subagent per task + two-stage review (spec, then quality) = high quality, fast iteration

## When to Use

```dot
digraph when_to_use {
    "Have implementation plan?" [shape=diamond];
    "Tasks mostly independent?" [shape=diamond];
    "Stay in this session?" [shape=diamond];
    "subagent-driven-development" [shape=box];
    "executing-plans" [shape=box];
    "Manual execution or brainstorm first" [shape=box];

    "Have implementation plan?" -> "Tasks mostly independent?" [label="yes"];
    "Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"];
    "Tasks mostly independent?" -> "Stay in this session?" [label="yes"];
    "Tasks mostly independent?" -> "Manual execution or brainstorm first" [label="no - tightly coupled"];
    "Stay in this session?" -> "subagent-driven-development" [label="yes"];
    "Stay in this session?" -> "executing-plans" [label="no - parallel session"];
}
```

**vs. Executing Plans (parallel session):**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Two-stage review after each task: spec compliance first, then code quality
- Faster iteration (no human in the loop between tasks)

## The Process

```dot
digraph process {
    rankdir=TB;

    subgraph cluster_per_task {
        label="Per Task";
        "Dispatch implementer subagent (./implementer-prompt.md)" [shape=box];
        "Implementer subagent asks questions?" [shape=diamond];
        "Answer questions, provide context" [shape=box];
        "Implementer subagent implements, tests, commits, self-reviews" [shape=box];
        "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [shape=box];
        "Spec reviewer subagent confirms code matches spec?" [shape=diamond];
        "Implementer subagent fixes spec gaps" [shape=box];
        "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [shape=box];
        "Code quality reviewer subagent approves?" [shape=diamond];
        "Implementer subagent fixes quality issues" [shape=box];
        "Mark task complete in TodoWrite" [shape=box];
    }

    "Read plan, extract all tasks with full text, note context, create TodoWrite" [shape=box];
    "More tasks remain?" [shape=diamond];
    "Dispatch final code reviewer subagent for entire implementation" [shape=box];
    "Use superpowers:finishing-a-development-branch" [shape=box style=filled fillcolor=lightgreen];

    "Read plan, extract all tasks with full text, note context, create TodoWrite" -> "Dispatch implementer subagent (./implementer-prompt.md)";
    "Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?";
    "Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"];
    "Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)";
    "Implementer subagent asks questions?" -> "Implementer subagent implements, tests, commits, self-reviews" [label="no"];
    "Implementer subagent implements, tests, commits, self-reviews" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)";
    "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" -> "Spec reviewer subagent confirms code matches spec?";
    "Spec reviewer subagent confirms code matches spec?" -> "Implementer subagent fixes spec gaps" [label="no"];
    "Implementer subagent fixes spec gaps" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [label="re-review"];
    "Spec reviewer subagent confirms code matches spec?" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="yes"];
    "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?";
    "Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"];
    "Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="re-review"];
    "Code quality reviewer subagent approves?" -> "Mark task complete in TodoWrite" [label="yes"];
    "Mark task complete in TodoWrite" -> "More tasks remain?";
    "More tasks remain?" -> "Dispatch implementer subagent (./implementer-prompt.md)" [label="yes"];
    "More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
    "Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
}
```

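The per-task loop in the graph can be sketched as two nested retry loops, with the spec stage strictly before the quality stage (`implement`, `spec_review`, and `quality_review` are hypothetical callables standing in for subagent dispatches):

```python
def run_task(implement, spec_review, quality_review):
    """Two-stage review: spec compliance must pass before quality review starts."""
    result = implement()
    while not spec_review(result):      # stage 1: fix spec gaps, then re-review
        result = implement()
    while not quality_review(result):   # stage 2: fix quality issues, then re-review
        result = implement()
    return result
```
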
## Prompt Templates

- `./implementer-prompt.md` - Dispatch implementer subagent
- `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent
- `./code-quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent

## Example Workflow

```
You: I'm using Subagent-Driven Development to execute this plan.

[Read plan file once: docs/plans/feature-plan.md]
[Extract all 5 tasks with full text and context]
[Create TodoWrite with all tasks]

Task 1: Hook installation script

[Get Task 1 text and context (already extracted)]
[Dispatch implementation subagent with full task text + context]

Implementer: "Before I begin - should the hook be installed at user or system level?"

You: "User level (~/.config/superpowers/hooks/)"

Implementer: "Got it. Implementing now..."
[Later] Implementer:
- Implemented install-hook command
- Added tests, 5/5 passing
- Self-review: Found I missed --force flag, added it
- Committed

[Dispatch spec compliance reviewer]
Spec reviewer: ✅ Spec compliant - all requirements met, nothing extra

[Get git SHAs, dispatch code quality reviewer]
Code reviewer: Strengths: Good test coverage, clean. Issues: None. Approved.

[Mark Task 1 complete]

Task 2: Recovery modes

[Get Task 2 text and context (already extracted)]
[Dispatch implementation subagent with full task text + context]

Implementer: [No questions, proceeds]
Implementer:
- Added verify/repair modes
- 8/8 tests passing
- Self-review: All good
- Committed

[Dispatch spec compliance reviewer]
Spec reviewer: ❌ Issues:
- Missing: Progress reporting (spec says "report every 100 items")
- Extra: Added --json flag (not requested)

[Implementer fixes issues]
Implementer: Removed --json flag, added progress reporting

[Spec reviewer reviews again]
Spec reviewer: ✅ Spec compliant now

[Dispatch code quality reviewer]
Code reviewer: Strengths: Solid. Issues (Important): Magic number (100)

[Implementer fixes]
Implementer: Extracted PROGRESS_INTERVAL constant

[Code reviewer reviews again]
Code reviewer: ✅ Approved

[Mark Task 2 complete]

...

[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge

Done!
```

## Advantages

**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel-safe (subagents don't interfere)
- Subagents can ask questions (before AND during work)

**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Review checkpoints are automatic

**Efficiency gains:**
- No file-reading overhead (the controller provides full text)
- The controller curates exactly what context is needed
- The subagent gets complete information upfront
- Questions are surfaced before work begins (not after)

**Quality gates:**
- Self-review catches issues before handoff
- Two-stage review: spec compliance, then code quality
- Review loops ensure fixes actually work
- Spec compliance review prevents over- and under-building
- Code quality review ensures the implementation is well built

**Cost:**
- More subagent invocations (implementer + 2 reviewers per task)
- The controller does more prep work (extracting all tasks upfront)
- Review loops add iterations
- But issues are caught early (cheaper than debugging later)

## Red Flags

**Never:**
- Start implementation on the main/master branch without explicit user consent
- Skip reviews (spec compliance OR code quality)
- Proceed with unfixed issues
- Dispatch multiple implementation subagents in parallel (conflicts)
- Make the subagent read the plan file (provide the full text instead)
- Skip scene-setting context (the subagent needs to understand where the task fits)
- Ignore subagent questions (answer before letting them proceed)
- Accept "close enough" on spec compliance (spec reviewer found issues = not done)
- Skip review loops (reviewer found issues = implementer fixes = review again)
- Let implementer self-review replace actual review (both are needed)
- **Start code quality review before spec compliance is ✅** (wrong order)
- Move to the next task while either review has open issues

**If the subagent asks questions:**
- Answer clearly and completely
- Provide additional context if needed
- Don't rush them into implementation

**If a reviewer finds issues:**
- The implementer (same subagent) fixes them
- The reviewer reviews again
- Repeat until approved
- Don't skip the re-review

**If a subagent fails a task:**
- Dispatch a fix subagent with specific instructions
- Don't try to fix it manually (context pollution)

## Integration

**Required workflow skills:**
- **superpowers:using-git-worktrees** - REQUIRED: Set up an isolated workspace before starting
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:requesting-code-review** - Code review template for reviewer subagents
- **superpowers:finishing-a-development-branch** - Complete development after all tasks

**Subagents should use:**
- **superpowers:test-driven-development** - Subagents follow TDD for each task

**Alternative workflow:**
- **superpowers:executing-plans** - Use for a parallel session instead of same-session execution

# Code Quality Reviewer Prompt Template

Use this template when dispatching a code quality reviewer subagent.

**Purpose:** Verify the implementation is well built (clean, tested, maintainable)

**Only dispatch after the spec compliance review passes.**

```
Task tool (superpowers:code-reviewer):
  Use template at requesting-code-review/code-reviewer.md

  WHAT_WAS_IMPLEMENTED: [from implementer's report]
  PLAN_OR_REQUIREMENTS: Task N from [plan-file]
  BASE_SHA: [commit before task]
  HEAD_SHA: [current commit]
  DESCRIPTION: [task summary]
```

**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment

# Implementer Subagent Prompt Template

Use this template when dispatching an implementer subagent.

```
Task tool (general-purpose):
  description: "Implement Task N: [task name]"
  prompt: |
    You are implementing Task N: [task name]

    ## Task Description

    [FULL TEXT of task from plan - paste it here, don't make the subagent read the file]

    ## Context

    [Scene-setting: where this fits, dependencies, architectural context]

    ## Before You Begin

    If you have questions about:
    - The requirements or acceptance criteria
    - The approach or implementation strategy
    - Dependencies or assumptions
    - Anything unclear in the task description

    **Ask them now.** Raise any concerns before starting work.

    ## Your Job

    Once you're clear on the requirements:
    1. Implement exactly what the task specifies
    2. Write tests (following TDD if the task says to)
    3. Verify the implementation works
    4. Commit your work
    5. Self-review (see below)
    6. Report back

    Work from: [directory]

    **While you work:** If you encounter something unexpected or unclear, **ask questions**.
    It's always OK to pause and clarify. Don't guess or make assumptions.

    ## Before Reporting Back: Self-Review

    Review your work with fresh eyes. Ask yourself:

    **Completeness:**
    - Did I fully implement everything in the spec?
    - Did I miss any requirements?
    - Are there edge cases I didn't handle?

    **Quality:**
    - Is this my best work?
    - Are names clear and accurate (matching what things do, not how they work)?
    - Is the code clean and maintainable?

    **Discipline:**
    - Did I avoid overbuilding (YAGNI)?
    - Did I build only what was requested?
    - Did I follow existing patterns in the codebase?

    **Testing:**
    - Do the tests actually verify behavior (not just mock behavior)?
    - Did I follow TDD if required?
    - Are the tests comprehensive?

    If you find issues during self-review, fix them now before reporting.

    ## Report Format

    When done, report:
    - What you implemented
    - What you tested and the test results
    - Files changed
    - Self-review findings (if any)
    - Any issues or concerns
```

# Spec Compliance Reviewer Prompt Template

Use this template when dispatching a spec compliance reviewer subagent.

**Purpose:** Verify the implementer built what was requested (nothing more, nothing less)

```
Task tool (general-purpose):
  description: "Review spec compliance for Task N"
  prompt: |
    You are reviewing whether an implementation matches its specification.

    ## What Was Requested

    [FULL TEXT of task requirements]

    ## What the Implementer Claims They Built

    [From the implementer's report]

    ## CRITICAL: Do Not Trust the Report

    The implementer finished suspiciously quickly. Their report may be incomplete,
    inaccurate, or optimistic. You MUST verify everything independently.

    **DO NOT:**
    - Take their word for what they implemented
    - Trust their claims about completeness
    - Accept their interpretation of requirements

    **DO:**
    - Read the actual code they wrote
    - Compare the actual implementation to the requirements line by line
    - Check for missing pieces they claimed to implement
    - Look for extra features they didn't mention

    ## Your Job

    Read the implementation code and verify:

    **Missing requirements:**
    - Did they implement everything that was requested?
    - Are there requirements they skipped or missed?
    - Did they claim something works but not actually implement it?

    **Extra/unneeded work:**
    - Did they build things that weren't requested?
    - Did they over-engineer or add unnecessary features?
    - Did they add "nice to haves" that weren't in the spec?

    **Misunderstandings:**
    - Did they interpret requirements differently than intended?
    - Did they solve the wrong problem?
    - Did they implement the right feature the wrong way?

    **Verify by reading the code, not by trusting the report.**

    Report:
    - ✅ Spec compliant (if everything matches after code inspection)
    - ❌ Issues found: [list specifically what's missing or extra, with file:line references]
```

---
name: writing-plans
description: Use when you have a spec or requirements for a multi-step task, before touching code
---

# Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code, the testing, docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and assume they don't know good test design very well.

**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."

**Context:** This skill should be run in a dedicated worktree (created by the brainstorming skill).

**Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`

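Deriving that plan path from today's date and a feature name can be sketched as follows (the lowercase-hyphen slug rule is an assumption):

```python
from datetime import date
from pathlib import Path

def plan_path(feature_name: str) -> Path:
    # docs/plans/YYYY-MM-DD-<feature-name>.md
    slug = feature_name.lower().replace(" ", "-")
    return Path("docs/plans") / f"{date.today():%Y-%m-%d}-{slug}.md"
```
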
## Bite-Sized Task Granularity

**Each step is one action (2-5 minutes):**
- "Write the failing test" - one step
- "Run it to make sure it fails" - one step
- "Implement the minimal code to make the test pass" - one step
- "Run the tests and make sure they pass" - one step
- "Commit" - one step

## Plan Document Header

**Every plan MUST start with this header:**

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about the approach]

**Tech Stack:** [Key technologies/libraries]

---
```

## Task Structure

````markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run the test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write the minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run the test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````

## Remember
- Exact file paths always
- Complete code in the plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits

## Execution Handoff

After saving the plan, offer the execution choice:

**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task and review between tasks; fast iteration

**2. Parallel Session (separate)** - Open a new session with executing-plans; batch execution with checkpoints

**Which approach?"**

**If Subagent-Driven is chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review

**If Parallel Session is chosen:**
- Guide them to open a new session in the worktree
- **REQUIRED SUB-SKILL:** The new session uses superpowers:executing-plans