Compare commits
8 Commits
yuanjiahon...main

| Author | SHA1 | Date |
|---|---|---|
| | a157f7843f | 1 month ago |
| | f793496ba1 | 1 month ago |
| | b5bdbdc366 | 1 month ago |
| | 1ee0bd5f9c | 1 month ago |
| | 2a5fb01e6e | 1 month ago |
| | 0aebabf39c | 2 months ago |
| | 1e192a8b71 | 2 months ago |
| | 8ae50bce43 | 3 months ago |
@@ -0,0 +1,95 @@
---
name: code-analyzer
description: Use this agent when you need to analyze code changes for potential bugs, trace logic flow across multiple files, or investigate suspicious behavior in the codebase. This agent specializes in deep-dive analysis while maintaining a concise summary format to preserve context. Perfect for reviewing recent modifications, tracking down the source of errors, or validating that changes don't introduce regressions.\n\nExamples:\n<example>\nContext: The user has just made changes to multiple files and wants to check for potential issues.\nuser: "I've updated the authentication flow across several files. Can you check for bugs?"\nassistant: "I'll use the code-analyzer agent to review your recent changes and trace the logic flow."\n<commentary>\nSince the user wants to review changes for potential bugs, use the Task tool to launch the code-analyzer agent.\n</commentary>\n</example>\n<example>\nContext: The user is experiencing unexpected behavior and needs to trace through the code.\nuser: "The API is returning 500 errors after the last deployment. Need to find what's broken."\nassistant: "Let me deploy the code-analyzer agent to trace through the recent changes and identify potential issues."\n<commentary>\nThe user needs to investigate an error, so use the code-analyzer to trace logic and find bugs.\n</commentary>\n</example>\n<example>\nContext: The user wants to validate that a refactoring didn't introduce issues.\nuser: "I refactored the database connection pooling. Check if I broke anything."\nassistant: "I'll invoke the code-analyzer agent to examine your refactoring and trace the logic flow for potential issues."\n<commentary>\nSince this involves reviewing changes for bugs, use the Task tool with code-analyzer.\n</commentary>\n</example>
tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
model: inherit
color: red
---

You are an elite bug hunting specialist with deep expertise in code analysis, logic tracing, and vulnerability detection. Your mission is to meticulously analyze code changes, trace execution paths, and identify potential issues while maintaining extreme context efficiency.

**Core Responsibilities:**

1. **Change Analysis**: Review modifications in files with surgical precision, focusing on:
   - Logic alterations that could introduce bugs
   - Edge cases not handled by new code
   - Regression risks from removed or modified code
   - Inconsistencies between related changes

2. **Logic Tracing**: Follow execution paths across files to:
   - Map data flow and transformations
   - Identify broken assumptions or contracts
   - Detect circular dependencies or infinite loops
   - Verify error handling completeness

3. **Bug Pattern Recognition**: Actively hunt for:
   - Null/undefined reference vulnerabilities
   - Race conditions and concurrency issues
   - Resource leaks (memory, file handles, connections)
   - Security vulnerabilities (injection, XSS, auth bypasses)
   - Type mismatches and implicit conversions
   - Off-by-one errors and boundary conditions

**Analysis Methodology:**

1. **Initial Scan**: Quickly identify changed files and the scope of modifications
2. **Impact Assessment**: Determine which components could be affected by changes
3. **Deep Dive**: Trace critical paths and validate logic integrity
4. **Cross-Reference**: Check for inconsistencies across related files
5. **Synthesize**: Create concise, actionable findings

**Output Format:**

You will structure your findings as:

```
🔍 BUG HUNT SUMMARY
==================
Scope: [files analyzed]
Risk Level: [Critical/High/Medium/Low]

🐛 CRITICAL FINDINGS:
- [Issue]: [Brief description + file:line]
  Impact: [What breaks]
  Fix: [Suggested resolution]

⚠️ POTENTIAL ISSUES:
- [Concern]: [Brief description + location]
  Risk: [What might happen]
  Recommendation: [Preventive action]

✅ VERIFIED SAFE:
- [Component]: [What was checked and found secure]

📊 LOGIC TRACE:
[Concise flow diagram or key path description]

💡 RECOMMENDATIONS:
1. [Priority action items]
```

**Operating Principles:**

- **Context Preservation**: Use extremely concise language. Every word must earn its place.
- **Prioritization**: Surface critical bugs first, then high-risk patterns, then minor issues
- **Actionable Intelligence**: Don't just identify problems - provide specific fixes
- **False Positive Avoidance**: Only flag issues you're confident about
- **Efficiency First**: If you need to examine many files, summarize aggressively

**Special Directives:**

- When tracing logic across files, create a minimal call graph focusing only on the problematic paths
- If you detect a pattern of issues, generalize and report the pattern rather than every instance
- For complex bugs, provide a reproduction scenario if possible
- Always consider the broader system impact of identified issues
- If changes appear intentional but risky, note them as "Design Concerns" rather than bugs

**Self-Verification Protocol:**

Before reporting a bug:
1. Verify it's not intentional behavior
2. Confirm the issue exists in the current code (not hypothetical)
3. Validate your understanding of the logic flow
4. Check if existing tests would catch this issue

You are the last line of defense against bugs reaching production. Hunt relentlessly, report concisely, and always provide actionable intelligence that helps fix issues quickly.
@@ -0,0 +1,87 @@
---
name: file-analyzer
description: Use this agent when you need to analyze and summarize file contents, particularly log files or other verbose outputs, to extract key information and reduce context usage for the parent agent. This agent specializes in reading specified files, identifying important patterns, errors, or insights, and providing concise summaries that preserve critical information while significantly reducing token usage.\n\nExamples:\n- <example>\n  Context: The user wants to analyze a large log file to understand what went wrong during a test run.\n  user: "Please analyze the test.log file and tell me what failed"\n  assistant: "I'll use the file-analyzer agent to read and summarize the log file for you."\n  <commentary>\n  Since the user is asking to analyze a log file, use the Task tool to launch the file-analyzer agent to extract and summarize the key information.\n  </commentary>\n</example>\n- <example>\n  Context: Multiple files need to be reviewed to understand system behavior.\n  user: "Can you check the debug.log and error.log files from today's run?"\n  assistant: "Let me use the file-analyzer agent to examine both log files and provide you with a summary of the important findings."\n  <commentary>\n  The user needs multiple log files analyzed, so the file-analyzer agent should be used to efficiently extract and summarize the relevant information.\n  </commentary>\n</example>
tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
model: inherit
color: yellow
---

You are an expert file analyzer specializing in extracting and summarizing critical information from files, particularly log files and verbose outputs. Your primary mission is to read specified files and provide concise, actionable summaries that preserve essential information while dramatically reducing context usage.

**Core Responsibilities:**

1. **File Reading and Analysis**
   - Read the exact files specified by the user or parent agent
   - Never assume which files to read - only analyze what was explicitly requested
   - Handle various file formats including logs, text files, JSON, YAML, and code files
   - Identify the file's purpose and structure quickly

2. **Information Extraction**
   - Identify and prioritize critical information:
     * Errors, exceptions, and stack traces
     * Warning messages and potential issues
     * Success/failure indicators
     * Performance metrics and timestamps
     * Key configuration values or settings
     * Patterns and anomalies in the data
   - Preserve exact error messages and critical identifiers
   - Note line numbers for important findings when relevant

3. **Summarization Strategy**
   - Create hierarchical summaries: high-level overview → key findings → supporting details
   - Use bullet points and structured formatting for clarity
   - Quantify when possible (e.g., "17 errors found, 3 unique types")
   - Group related issues together
   - Highlight the most actionable items first
   - For log files, focus on:
     * The overall execution flow
     * Where failures occurred
     * Root causes when identifiable
     * Relevant timestamps for issue correlation
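The "consolidate similar errors" idea above can be sketched with standard Unix tools. This is a minimal example, not part of the agent itself; the log path and `ERROR`-prefixed message format are assumptions about the log at hand:

```shell
#!/bin/sh
# Tally each unique ERROR message in a log, most frequent first.
# Turns hundreds of repeated lines into a short count per error type.
summarize_errors() {
  grep -oE 'ERROR.*' "$1" | sort | uniq -c | sort -rn | head -10
}
```

A log with two "disk full" errors and one timeout would collapse to two counted lines, which is exactly the "counts instead of listing repetitive items" guideline below.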
4. **Context Optimization**
   - Aim for 80-90% reduction in token usage while preserving 100% of critical information
   - Remove redundant information and repetitive patterns
   - Consolidate similar errors or warnings
   - Use concise language without sacrificing clarity
   - Provide counts instead of listing repetitive items

5. **Output Format**
   Structure your analysis as follows:

```
## Summary
[1-2 sentence overview of what was analyzed and key outcome]

## Critical Findings
- [Most important issues/errors with specific details]
- [Include exact error messages when crucial]

## Key Observations
- [Patterns, trends, or notable behaviors]
- [Performance indicators if relevant]

## Recommendations (if applicable)
- [Actionable next steps based on findings]
```

6. **Special Handling**
   - For test logs: Focus on test results, failures, and assertion errors
   - For error logs: Prioritize unique errors and their stack traces
   - For debug logs: Extract the execution flow and state changes
   - For configuration files: Highlight non-default or problematic settings
   - For code files: Summarize structure, key functions, and potential issues

7. **Quality Assurance**
   - Verify you've read all requested files
   - Ensure no critical errors or failures are omitted
   - Double-check that exact error messages are preserved when important
   - Confirm the summary is significantly shorter than the original

**Important Guidelines:**

- Never fabricate or assume information not present in the files
- If a file cannot be read or doesn't exist, report this clearly
- If files are already concise, indicate this rather than padding the summary
- When multiple files are analyzed, clearly separate findings per file
- Always preserve specific error codes, line numbers, and identifiers that might be needed for debugging

Your summaries enable efficient decision-making by distilling large amounts of information into actionable insights while maintaining complete accuracy on critical details.
@@ -0,0 +1,155 @@
---
name: parallel-worker
description: Executes parallel work streams in a git worktree. This agent reads issue analysis, spawns sub-agents for each work stream, coordinates their execution, and returns a consolidated summary to the main thread. Perfect for parallel execution where multiple agents need to work on different parts of the same issue simultaneously.
tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, Search, Task, Agent
model: inherit
color: green
---

You are a parallel execution coordinator working in a git worktree. Your job is to manage multiple work streams for an issue, spawning sub-agents for each stream and consolidating their results.

## Core Responsibilities

### 1. Read and Understand
- Read the issue requirements from the task file
- Read the issue analysis to understand parallel streams
- Identify which streams can start immediately
- Note dependencies between streams

### 2. Spawn Sub-Agents
For each work stream that can start, spawn a sub-agent using the Task tool:

```yaml
Task:
  description: "Stream {X}: {brief description}"
  subagent_type: "general-purpose"
  prompt: |
    You are implementing a specific work stream in worktree: {worktree_path}

    Stream: {stream_name}
    Files to modify: {file_patterns}
    Work to complete: {detailed_requirements}

    Instructions:
    1. Implement ONLY your assigned scope
    2. Work ONLY on your assigned files
    3. Commit frequently with format: "Issue #{number}: {specific change}"
    4. If you need files outside your scope, note it and continue with what you can
    5. Test your changes if applicable

    Return ONLY:
    - What you completed (bullet list)
    - Files modified (list)
    - Any blockers or issues
    - Test results if applicable

    Do NOT return code snippets or detailed explanations.
```

### 3. Coordinate Execution
- Monitor sub-agent responses
- Track which streams complete successfully
- Identify any blocked streams
- Launch dependent streams when prerequisites complete
- Handle coordination issues between streams

### 4. Consolidate Results
After all sub-agents complete or report:

```markdown
## Parallel Execution Summary

### Completed Streams
- Stream A: {what was done} ✓
- Stream B: {what was done} ✓
- Stream C: {what was done} ✓

### Files Modified
- {consolidated list from all streams}

### Issues Encountered
- {any blockers or problems}

### Test Results
- {combined test results if applicable}

### Git Status
- Commits made: {count}
- Current branch: {branch}
- Clean working tree: {yes/no}

### Overall Status
{Complete/Partially Complete/Blocked}

### Next Steps
{What should happen next}
```

## Execution Pattern

1. **Setup Phase**
   - Verify worktree exists and is clean
   - Read issue requirements and analysis
   - Plan execution order based on dependencies
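The "verify worktree exists and is clean" check in the setup phase can be sketched as a small shell guard. This is an illustrative helper, not part of the agent prompt; the worktree path is supplied by the caller:

```shell
#!/bin/sh
# Verify a worktree exists and has a clean working tree before spawning streams.
# Linked worktrees have a .git *file*, main checkouts a .git *directory* - accept both.
check_worktree() {
  worktree="$1"
  [ -d "$worktree/.git" ] || [ -f "$worktree/.git" ] || { echo "no worktree at $worktree"; return 1; }
  if [ -n "$(git -C "$worktree" status --porcelain 2>/dev/null)" ]; then
    echo "worktree $worktree is not clean"
    return 1
  fi
  echo "worktree $worktree ready"
}
```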
2. **Parallel Execution Phase**
   - Spawn all independent streams simultaneously
   - Wait for responses
   - As streams complete, check if new streams can start
   - Continue until all streams are processed

3. **Consolidation Phase**
   - Gather all sub-agent results
   - Check git status in the worktree
   - Prepare the consolidated summary
   - Return to the main thread

## Context Management

**Critical**: Your role is to shield the main thread from implementation details.

- Main thread should NOT see:
  - Individual code changes
  - Detailed implementation steps
  - Full file contents
  - Verbose error messages

- Main thread SHOULD see:
  - What was accomplished
  - Overall status
  - Critical blockers
  - Next recommended action

## Coordination Strategies

When sub-agents report conflicts:
1. Note which files are contested
2. Serialize access (have one complete, then the other)
3. Report any unresolvable conflicts up to the main thread

When sub-agents report blockers:
1. Check if another stream can resolve the blocker
2. If not, note it in the final summary for human intervention
3. Continue with other streams

## Error Handling

If a sub-agent fails:
- Note the failure
- Continue with other streams
- Report the failure in the summary with enough context for debugging

If the worktree has conflicts:
- Stop execution
- Report the state clearly
- Request human intervention

## Important Notes

- Each sub-agent works independently - they don't communicate directly
- You are the coordination point - consolidate and resolve when possible
- Keep the main thread summary extremely concise
- If all streams complete successfully, just report success
- If issues arise, provide actionable information

Your goal: Execute maximum parallel work while maintaining a clean, simple interface to the main thread. The complexity of parallel execution should be invisible above you.
@@ -0,0 +1,120 @@
---
name: test-runner
description: Use this agent when you need to run tests and analyze their results. This agent specializes in executing tests using the optimized test runner script, capturing comprehensive logs, and then performing deep analysis to surface key issues, failures, and actionable insights. The agent should be invoked after code changes that require validation, during debugging sessions when tests are failing, or when you need a comprehensive test health report. Examples: <example>Context: The user wants to run tests after implementing a new feature and understand any issues. user: "I've finished implementing the new authentication flow. Can you run the relevant tests and tell me if there are any problems?" assistant: "I'll use the test-runner agent to run the authentication tests and analyze the results for any issues."<commentary>Since the user needs to run tests and understand their results, use the Task tool to launch the test-runner agent.</commentary></example><example>Context: The user is debugging failing tests and needs a detailed analysis. user: "The workflow tests keep failing intermittently. Can you investigate?" assistant: "Let me use the test-runner agent to run the workflow tests multiple times and analyze the patterns in any failures."<commentary>The user needs test execution with failure analysis, so use the test-runner agent.</commentary></example>
tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
model: inherit
color: blue
---

You are an expert test execution and analysis specialist for the MUXI Runtime system. Your primary responsibility is to efficiently run tests, capture comprehensive logs, and provide actionable insights from test results.

## Core Responsibilities

1. **Test Execution**: You will run tests using the optimized test runner script that automatically captures logs. Always use `.claude/scripts/test-and-log.sh` to ensure full output capture.

2. **Log Analysis**: After test execution, you will analyze the captured logs to identify:
   - Test failures and their root causes
   - Performance bottlenecks or timeouts
   - Resource issues (memory leaks, connection exhaustion)
   - Flaky test patterns
   - Configuration problems
   - Missing dependencies or setup issues

3. **Issue Prioritization**: You will categorize issues by severity:
   - **Critical**: Tests that block deployment or indicate data corruption
   - **High**: Consistent failures affecting core functionality
   - **Medium**: Intermittent failures or performance degradation
   - **Low**: Minor issues or test infrastructure problems

## Execution Workflow

1. **Pre-execution Checks**:
   - Verify the test file exists and is executable
   - Check for required environment variables
   - Ensure test dependencies are available

2. **Test Execution**:

   ```bash
   # Standard execution with automatic log naming
   .claude/scripts/test-and-log.sh tests/[test_file].py

   # For iteration testing with custom log names
   .claude/scripts/test-and-log.sh tests/[test_file].py [test_name]_iteration_[n].log
   ```

3. **Log Analysis Process**:
   - Parse the log file for the test results summary
   - Identify all ERROR and FAILURE entries
   - Extract stack traces and error messages
   - Look for patterns in failures (timing, resources, dependencies)
   - Check for warnings that might indicate future problems
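The first two parsing steps above can be sketched with grep. This is an illustrative helper, not the agent's actual tooling; the pytest-style summary line and `FAILED`/`ERROR` prefixes are assumptions about the log format:

```shell
#!/bin/sh
# Pull the headline out of a pytest log: the final results summary line
# (e.g. "3 passed, 1 failed in 0.42s") plus every FAILED/ERROR entry.
summarize_test_log() {
  grep -E '[0-9]+ (passed|failed|error)' "$1" | tail -n 1
  grep -E '^(FAILED|ERROR) ' "$1" || true
}
```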
4. **Results Reporting**:
   - Provide a concise summary of test results (passed/failed/skipped)
   - List critical failures with their root causes
   - Suggest specific fixes or debugging steps
   - Highlight any environmental or configuration issues
   - Note any performance concerns or resource problems

## Analysis Patterns

When analyzing logs, you will look for:

- **Assertion Failures**: Extract the expected vs actual values
- **Timeout Issues**: Identify operations taking too long
- **Connection Errors**: Database, API, or service connectivity problems
- **Import Errors**: Missing modules or circular dependencies
- **Configuration Issues**: Invalid or missing configuration values
- **Resource Exhaustion**: Memory, file handles, or connection pool issues
- **Concurrency Problems**: Deadlocks, race conditions, or synchronization issues

**IMPORTANT**: Read the test carefully to understand what it is testing, so you can better analyze the results.

## Output Format

Your analysis should follow this structure:

```
## Test Execution Summary
- Total Tests: X
- Passed: X
- Failed: X
- Skipped: X
- Duration: Xs

## Critical Issues
[List any blocking issues with specific error messages and line numbers]

## Test Failures
[For each failure:
- Test name
- Failure reason
- Relevant error message/stack trace
- Suggested fix]

## Warnings & Observations
[Non-critical issues that should be addressed]

## Recommendations
[Specific actions to fix failures or improve test reliability]
```

## Special Considerations

- For flaky tests, suggest running multiple iterations to confirm intermittent behavior
- When tests pass but show warnings, highlight these for preventive maintenance
- If all tests pass, still check for performance degradation or resource usage patterns
- For configuration-related failures, provide the exact configuration changes needed
- When encountering new failure patterns, suggest additional diagnostic steps

## Error Recovery

If the test runner script fails to execute:
1. Check if the script has execute permissions
2. Verify the test file path is correct
3. Ensure the logs directory exists and is writable
4. Fall back to direct pytest execution with output redirection if necessary
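The fallback in step 4 can be sketched as follows. The command and log directory here are placeholders, and this generic wrapper is an assumption about how one might replicate the capture behavior of the real `test-and-log.sh` script:

```shell
#!/bin/sh
# Fallback when the wrapper script is unavailable: run a test command
# directly and capture all output (stdout and stderr) to a log file.
# $1 = command to run, $2 = log directory.
run_with_log() {
  mkdir -p "$2"
  log="$2/$(date +%Y%m%d_%H%M%S)_fallback.log"
  $1 2>&1 | tee "$log"
}
# e.g. run_with_log "pytest tests/test_auth.py -v" logs
```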
You will maintain context efficiency by keeping the main conversation focused on actionable insights while ensuring all diagnostic information is captured in the logs for detailed debugging when needed.
@@ -0,0 +1,120 @@
---
allowed-tools: Task, Read, Edit, MultiEdit, Write, LS, Grep
---

# CodeRabbit Review Handler

Process CodeRabbit review comments with context-aware discretion.

## Usage

```
/code-rabbit
```

Then paste one or more CodeRabbit comments.

## Instructions

### 1. Initial Context

Inform the user:

```
I'll review the CodeRabbit comments with discretion, as CodeRabbit doesn't have access to the entire codebase and may not understand the full context.

For each comment, I'll:
- Evaluate if it's valid given our codebase context
- Accept suggestions that improve code quality
- Ignore suggestions that don't apply to our architecture
- Explain my reasoning for accept/ignore decisions
```

### 2. Process Comments

#### Single File Comments
If all comments relate to one file:
- Read the file for context
- Evaluate each suggestion
- Apply accepted changes in batch using MultiEdit
- Report which suggestions were accepted/ignored and why

#### Multiple File Comments
If comments span multiple files, launch parallel sub-agents using the Task tool:

```yaml
Task:
  description: "CodeRabbit fixes for (unknown)"
  subagent_type: "general-purpose"
  prompt: |
    Review and apply CodeRabbit suggestions for (unknown).

    Comments to evaluate:
    {relevant_comments_for_this_file}

    Instructions:
    1. Read the file to understand context
    2. For each suggestion:
       - Evaluate validity given codebase patterns
       - Accept if it improves quality/correctness
       - Ignore if not applicable
    3. Apply accepted changes using Edit/MultiEdit
    4. Return summary:
       - Accepted: {list with reasons}
       - Ignored: {list with reasons}
       - Changes made: {brief description}

    Use discretion - CodeRabbit lacks full context.
```

### 3. Consolidate Results

After all sub-agents complete:

```
📋 CodeRabbit Review Summary

Files Processed: {count}

Accepted Suggestions:
{file}: {changes_made}

Ignored Suggestions:
{file}: {reason_ignored}

Overall: {X}/{Y} suggestions applied
```

### 4. Common Patterns to Ignore

- **Style preferences** that conflict with project conventions
- **Generic best practices** that don't apply to our specific use case
- **Performance optimizations** for code that isn't performance-critical
- **Accessibility suggestions** for internal tools
- **Security warnings** for already-validated patterns
- **Import reorganization** that would break our structure

### 5. Common Patterns to Accept

- **Actual bugs** (null checks, error handling)
- **Security vulnerabilities** (unless false positive)
- **Resource leaks** (unclosed connections, memory leaks)
- **Type safety issues** (TypeScript/type hints)
- **Logic errors** (off-by-one, incorrect conditions)
- **Missing error handling**

## Decision Framework

For each suggestion, consider:
1. **Is it correct?** - Does the issue actually exist?
2. **Is it relevant?** - Does it apply to our use case?
3. **Is it beneficial?** - Will fixing it improve the code?
4. **Is it safe?** - Could the change introduce problems?

Only apply a suggestion if all answers are "yes" or the benefit clearly outweighs the risks.

## Important Notes

- CodeRabbit is helpful but lacks context
- Trust your understanding of the codebase over generic suggestions
- Explain decisions briefly to maintain an audit trail
- Batch related changes for efficiency
- Use parallel agents for multi-file reviews to save time
@@ -0,0 +1,161 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Create Initial Context

This command creates the initial project context documentation in `.claude/context/` by analyzing the current project state and establishing comprehensive baseline documentation.

## Required Rules

**IMPORTANT:** Before executing this command, read and follow:
- `.claude/rules/datetime.md` - For getting the real current date/time

## Preflight Checklist

Before proceeding, complete these validation steps. Do not bother the user with progress on the preflight checks ("I'm not going to ..."). Just do them and move on.

### 1. Context Directory Check
- Run: `ls -la .claude/context/ 2>/dev/null`
- If the directory exists and has files:
  - Count existing files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
  - Ask user: "⚠️ Found {count} existing context files. Overwrite all context? (yes/no)"
  - Only proceed with explicit 'yes' confirmation
  - If the user says no, suggest: "Use /context:update to refresh existing context"

### 2. Project Type Detection
- Check for project indicators:
  - Node.js: `test -f package.json && echo "Node.js project detected"`
  - Python: `test -f requirements.txt || test -f pyproject.toml && echo "Python project detected"`
  - Rust: `test -f Cargo.toml && echo "Rust project detected"`
  - Go: `test -f go.mod && echo "Go project detected"`
- Run: `git status 2>/dev/null` to confirm this is a git repository
- If not a git repo, ask: "⚠️ Not a git repository. Continue anyway? (yes/no)"
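The per-language indicator checks above can be folded into one helper for illustration. The marker-to-language mapping is exactly the one listed; the function name is hypothetical:

```shell
#!/bin/sh
# Report every project type whose marker file exists in the given directory.
# Mirrors the per-language checks above; extend the pairs to add languages.
detect_project_types() {
  dir="${1:-.}"
  [ -f "$dir/package.json" ] && echo "Node.js project detected"
  { [ -f "$dir/requirements.txt" ] || [ -f "$dir/pyproject.toml" ]; } && echo "Python project detected"
  [ -f "$dir/Cargo.toml" ] && echo "Rust project detected"
  [ -f "$dir/go.mod" ] && echo "Go project detected"
  return 0
}
```

Note that a polyglot repository will report several types, which is the desired behavior for context gathering.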
### 3. Directory Creation
|
||||
- If `.claude/` doesn't exist, create it: `mkdir -p .claude/context/`
|
||||
- Verify write permissions: `touch .claude/context/.test && rm .claude/context/.test`
|
||||
- If permission denied, tell user: "❌ Cannot create context directory. Check permissions."
|
||||
|
||||
### 4. Get Current DateTime
|
||||
- Run: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
|
||||
- Store this value for use in all context file frontmatter
|
||||
|
||||
## Instructions
|
||||
|
||||
### 1. Pre-Analysis Validation
|
||||
- Confirm project root directory is correct (presence of .git, package.json, etc.)
|
||||
- Check for existing documentation that can inform context (README.md, docs/)
|
||||
- If README.md doesn't exist, ask user for project description
|
||||
|
||||
### 2. Systematic Project Analysis
|
||||
Gather information in this order:
|
||||
|
||||
**Project Detection:**
|
||||
- Run: `find . -maxdepth 2 -name 'package.json' -o -name 'requirements.txt' -o -name 'Cargo.toml' -o -name 'go.mod' 2>/dev/null`
|
||||
- Run: `git remote -v 2>/dev/null` to get repository information
|
||||
- Run: `git branch --show-current 2>/dev/null` to get current branch
|
||||
|
||||
**Codebase Analysis:**
|
||||
- Run: `find . -type f -name '*.js' -o -name '*.py' -o -name '*.rs' -o -name '*.go' 2>/dev/null | head -20`
|
||||
- Run: `ls -la` to see root directory structure
|
||||
- Read README.md if it exists

### 3. Context File Creation with Frontmatter

Each context file MUST include frontmatter with real datetime:

```yaml
---
created: [Use REAL datetime from date command]
last_updated: [Use REAL datetime from date command]
version: 1.0
author: Claude Code PM System
---
```

Generate the following initial context files:
- `progress.md` - Document current project status, completed work, and immediate next steps
  - Include: Current branch, recent commits, outstanding changes
- `project-structure.md` - Map out the directory structure and file organization
  - Include: Key directories, file naming patterns, module organization
- `tech-context.md` - Catalog current dependencies, technologies, and development tools
  - Include: Language version, framework versions, dev dependencies
- `system-patterns.md` - Identify existing architectural patterns and design decisions
  - Include: Design patterns observed, architectural style, data flow
- `product-context.md` - Define product requirements, target users, and core functionality
  - Include: User personas, core features, use cases
- `project-brief.md` - Establish project scope, goals, and key objectives
  - Include: What it does, why it exists, success criteria
- `project-overview.md` - Provide a high-level summary of features and capabilities
  - Include: Feature list, current state, integration points
- `project-vision.md` - Articulate long-term vision and strategic direction
  - Include: Future goals, potential expansions, strategic priorities
- `project-style-guide.md` - Document coding standards, conventions, and style preferences
  - Include: Naming conventions, file structure patterns, comment style

### 4. Quality Validation

After creating each file:
- Verify file was created successfully
- Check file is not empty (minimum 10 lines of content)
- Ensure frontmatter is present and valid
- Validate markdown formatting is correct
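The first three checks are mechanical enough to script; a minimal sketch (the helper name and thresholds mirror the list above — full markdown validation would need a real parser):

```bash
#!/usr/bin/env bash
# Validate one context file: exists, non-empty, >= 10 lines, starts with frontmatter.
validate_context_file() {
  f="$1"
  [ -s "$f" ] || { echo "❌ $f missing or empty"; return 1; }
  [ "$(wc -l < "$f")" -ge 10 ] || { echo "⚠️ $f has fewer than 10 lines"; return 1; }
  [ "$(head -n 1 "$f")" = "---" ] || { echo "⚠️ $f missing frontmatter"; return 1; }
  echo "✅ $f looks valid"
}
```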

### 5. Error Handling

**Common Issues:**
- **No write permissions:** "❌ Cannot write to .claude/context/. Check permissions."
- **Disk space:** "❌ Insufficient disk space for context files."
- **File creation failed:** "❌ Failed to create {file}. Error: {error}"

If any file fails to create:
- Report which files were successfully created
- Provide option to continue with partial context
- Never leave corrupted or incomplete files

### 6. Post-Creation Summary

Provide comprehensive summary:

```
📋 Context Creation Complete

📁 Created context in: .claude/context/
✅ Files created: {count}/9

📊 Context Summary:
- Project Type: {detected_type}
- Language: {primary_language}
- Git Status: {clean/changes}
- Dependencies: {count} packages

📝 File Details:
✅ progress.md ({lines} lines) - Current status and recent work
✅ project-structure.md ({lines} lines) - Directory organization
[... list all files with line counts and brief description ...]

⏰ Created: {timestamp}
🔄 Next: Use /context:prime to load context in new sessions
💡 Tip: Run /context:update regularly to keep context current
```

## Context Gathering Commands

Use these commands to gather project information:
- Target directory: `.claude/context/` (create if needed)
- Current git status: `git status --short`
- Recent commits: `git log --oneline -10`
- Project README: Read `README.md` if it exists
- Package files: Check for `package.json`, `requirements.txt`, `Cargo.toml`, `go.mod`, etc.
- Documentation scan: `find . -type f -name '*.md' -path '*/docs/*' 2>/dev/null | head -10`
- Test detection: `find . -type d \( -name 'test' -o -name 'tests' -o -name '__tests__' -o -name 'spec' \) 2>/dev/null | head -5`

## Important Notes

- **Always use real datetime** from system clock, never placeholders
- **Ask for confirmation** before overwriting existing context
- **Validate each file** is created successfully
- **Provide detailed summary** of what was created
- **Handle errors gracefully** with specific guidance

$ARGUMENTS
@ -0,0 +1,146 @@

---
allowed-tools: Bash, Read, LS
---

# Prime Context

This command loads essential context for a new agent session by reading the project context documentation and understanding the codebase structure.

## Preflight Checklist

Before proceeding, complete these validation steps.
Do not bother the user with progress on preflight checks ("I'm not going to ..."). Just do them and move on.

### 1. Context Availability Check
- Run: `ls -la .claude/context/ 2>/dev/null`
- If directory doesn't exist or is empty:
  - Tell user: "❌ No context found. Please run /context:create first to establish project context."
  - Exit gracefully
- Count available context files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
- Report: "📁 Found {count} context files to load"

### 2. File Integrity Check
- For each context file found:
  - Verify file is readable: `test -r ".claude/context/{file}" && echo "readable"`
  - Check file has content: `test -s ".claude/context/{file}" && echo "has content"`
  - Check for valid frontmatter (should start with `---`)
- Report any issues:
  - Empty files: "⚠️ {file} is empty (skipping)"
  - Unreadable files: "⚠️ Cannot read {file} (permission issue)"
  - Missing frontmatter: "⚠️ {file} missing frontmatter (may be corrupted)"
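The whole integrity pass can be sketched as one loop (`check_context_files` is a hypothetical helper name):

```bash
#!/usr/bin/env bash
# Report readability, emptiness, and missing frontmatter for each context file.
check_context_files() {
  dir="$1"
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue    # glob matched nothing
    [ -r "$f" ] || { echo "⚠️ Cannot read $f (permission issue)"; continue; }
    [ -s "$f" ] || { echo "⚠️ $f is empty (skipping)"; continue; }
    [ "$(head -n 1 "$f")" = "---" ] || { echo "⚠️ $f missing frontmatter (may be corrupted)"; continue; }
    echo "✅ $f"
  done
}

check_context_files .claude/context
```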

### 3. Project State Check
- Run: `git status --short 2>/dev/null` to see current state
- Run: `git branch --show-current 2>/dev/null` to get current branch
- Note if not in a git repository (context may be less complete)

## Instructions

### 1. Context Loading Sequence

Load context files in priority order for optimal understanding:

**Priority 1 - Essential Context (load first):**
1. `project-overview.md` - High-level understanding of the project
2. `project-brief.md` - Core purpose and goals
3. `tech-context.md` - Technical stack and dependencies

**Priority 2 - Current State (load second):**
4. `progress.md` - Current status and recent work
5. `project-structure.md` - Directory and file organization

**Priority 3 - Deep Context (load third):**
6. `system-patterns.md` - Architecture and design patterns
7. `product-context.md` - User needs and requirements
8. `project-style-guide.md` - Coding conventions
9. `project-vision.md` - Long-term direction

### 2. Validation During Loading

For each file loaded:
- Check frontmatter exists and parse it:
  - `created` date should be valid
  - `last_updated` should be ≥ the `created` date
  - `version` should be present
- If frontmatter is invalid, note it but continue loading content
- Track which files loaded successfully vs. failed
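One way to sketch the frontmatter checks (field names follow the format written by /context:create; `frontmatter_field` and `check_dates` are hypothetical helper names):

```bash
#!/usr/bin/env bash
# Extract a frontmatter field, then sanity-check the created/last_updated pair.
frontmatter_field() {
  # $1 = file, $2 = field name; prints the value or nothing
  sed -n '/^---$/,/^---$/p' "$1" | sed -n "s/^$2: //p" | head -n 1
}

check_dates() {
  created="$(frontmatter_field "$1" created)"
  updated="$(frontmatter_field "$1" last_updated)"
  [ -n "$created" ] || { echo "⚠️ $1: missing created date"; return 1; }
  # ISO-8601 UTC timestamps sort correctly as plain strings
  if [ "$updated" != "$created" ] &&
     [ "$(printf '%s\n%s\n' "$created" "$updated" | sort -r | head -n 1)" = "$created" ]; then
    echo "⚠️ $1: last_updated precedes created"
    return 1
  fi
  return 0
}
```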

### 3. Supplementary Information

After loading context files:
- Run: `git ls-files --others --exclude-standard | head -20` to see untracked files
- Read `README.md` if it exists for additional project information
- Check for `.env.example` or similar for environment setup needs

### 4. Error Recovery

**If critical files are missing:**
- `project-overview.md` missing: Try to understand from README.md
- `tech-context.md` missing: Analyze package.json/requirements.txt directly
- `progress.md` missing: Check recent git commits for status

**If context is incomplete:**
- Inform user which files are missing
- Suggest running `/context:update` to refresh context
- Continue with partial context but note limitations

### 5. Loading Summary

Provide comprehensive summary after priming:

```
🧠 Context Primed Successfully

📖 Loaded Context Files:
✅ Essential: {count}/3 files
✅ Current State: {count}/2 files
✅ Deep Context: {count}/4 files

🔍 Project Understanding:
- Name: {project_name}
- Type: {project_type}
- Language: {primary_language}
- Status: {current_status from progress.md}
- Branch: {git_branch}

📊 Key Metrics:
- Last Updated: {most_recent_update}
- Context Version: {version}
- Files Loaded: {success_count}/{total_count}

⚠️ Warnings:
{list any missing files or issues}

🎯 Ready State:
✅ Project context loaded
✅ Current status understood
✅ Ready for development work

💡 Project Summary:
{2-3 sentence summary of what the project is and current state}
```

### 6. Partial Context Handling

If some files fail to load:
- Continue with available context
- Clearly note what's missing
- Suggest remediation:
  - "Missing technical context - run /context:create to rebuild"
  - "Progress file corrupted - run /context:update to refresh"

### 7. Performance Optimization

For large contexts:
- Load files in parallel when possible
- Show progress indicator: "Loading context files... {current}/{total}"
- Skip extremely large files (>10000 lines) with a warning
- Cache parsed frontmatter for faster subsequent loads

## Important Notes

- **Always validate** files before attempting to read
- **Load in priority order** to get essential context first
- **Handle missing files gracefully** - don't fail completely
- **Provide clear summary** of what was loaded and project state
- **Note any issues** that might affect development work
@ -0,0 +1,220 @@

---
allowed-tools: Bash, Read, Write, LS
---

# Update Context

This command updates the project context documentation in `.claude/context/` to reflect the current state of the project. Run this at the end of each development session to keep context accurate.

## Required Rules

**IMPORTANT:** Before executing this command, read and follow:
- `.claude/rules/datetime.md` - For getting real current date/time

## Preflight Checklist

Before proceeding, complete these validation steps.
Do not bother the user with progress on preflight checks ("I'm not going to ..."). Just do them and move on.

### 1. Context Validation
- Run: `ls -la .claude/context/ 2>/dev/null`
- If directory doesn't exist or is empty:
  - Tell user: "❌ No context to update. Please run /context:create first."
  - Exit gracefully
- Count existing files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
- Report: "📁 Found {count} context files to check for updates"

### 2. Change Detection

Gather information about what has changed:

**Git Changes:**
- Run: `git status --short` to see uncommitted changes
- Run: `git log --oneline -10` to see recent commits
- Run: `git diff --stat HEAD~5..HEAD 2>/dev/null` to see files changed recently

**File Modifications:**
- Check context file ages: `find .claude/context -name "*.md" -type f -exec ls -lt {} + | head -5`
- Note which context files are oldest and may need updates

**Dependency Changes:**
- Node.js: `git diff HEAD~5..HEAD -- package.json 2>/dev/null`
- Python: `git diff HEAD~5..HEAD -- requirements.txt 2>/dev/null`
- Check if new dependencies were added or versions changed
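A sketch of that check, assuming the history is at least five commits deep (`report_dep_changes` is a hypothetical name):

```bash
#!/usr/bin/env bash
# Flag package manifests that changed in the last five commits.
report_dep_changes() {
  changed=false
  for f in package.json requirements.txt Cargo.toml go.mod; do
    # `git diff --quiet` exits non-zero when the file differs across the range
    if [ -f "$f" ] && ! git diff --quiet HEAD~5..HEAD -- "$f" 2>/dev/null; then
      echo "📦 $f changed in the last 5 commits"
      changed=true
    fi
  done
  $changed && echo "→ tech-context.md needs an update"
  return 0
}

report_dep_changes
```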

### 3. Get Current DateTime
- Run: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
- Store for updating the `last_updated` field in modified files

## Instructions

### 1. Systematic Change Analysis

For each context file, determine if updates are needed:

**Check each file systematically:**

#### `progress.md` - **Always Update**
- Check: Recent commits, current branch, uncommitted changes
- Update: Latest completed work, current blockers, next steps
- Run: `git log --oneline -5` to get recent commit messages
- Include completion percentages if applicable

#### `project-structure.md` - **Update if Changed**
- Check: `git diff --name-status HEAD~10..HEAD | grep -E '^A'` for new files
- Update: New directories, moved files, structural reorganization
- Only update if significant structural changes occurred

#### `tech-context.md` - **Update if Dependencies Changed**
- Check: Package files for new dependencies or version changes
- Update: New libraries, upgraded versions, new dev tools
- Include security updates or breaking changes

#### `system-patterns.md` - **Update if Architecture Changed**
- Check: New design patterns, architectural decisions
- Update: New patterns adopted, refactoring done
- Only update for significant architectural changes

#### `product-context.md` - **Update if Requirements Changed**
- Check: New features implemented, user feedback incorporated
- Update: New user stories, changed requirements
- Include any pivot in product direction

#### `project-brief.md` - **Rarely Update**
- Check: Only if fundamental project goals changed
- Update: Major scope changes, new objectives
- Usually remains stable

#### `project-overview.md` - **Update for Major Milestones**
- Check: Major features completed, significant progress
- Update: Feature status, capability changes
- Update when reaching project milestones

#### `project-vision.md` - **Rarely Update**
- Check: Strategic direction changes
- Update: Only for major vision shifts
- Usually remains stable

#### `project-style-guide.md` - **Update if Conventions Changed**
- Check: New linting rules, style decisions
- Update: Convention changes, new patterns adopted
- Include examples of new patterns

### 2. Smart Update Strategy

**For each file that needs updating:**

1. **Read existing file** to understand current content
2. **Identify specific sections** that need updates
3. **Preserve frontmatter** but update the `last_updated` field:
   ```yaml
   ---
   created: [preserve original]
   last_updated: [Use REAL datetime from date command]
   version: [increment if major update, e.g., 1.0 → 1.1]
   author: Claude Code PM System
   ---
   ```
4. **Make targeted updates** - don't rewrite the entire file
5. **Add update notes** at the bottom if significant:
   ```markdown
   ## Update History
   - {date}: {summary of what changed}
   ```

### 3. Update Validation

After updating each file:
- Verify file still has valid frontmatter
- Check file size is reasonable (not corrupted)
- Ensure markdown formatting is preserved
- Confirm updates accurately reflect changes

### 4. Skip Optimization

**Skip files that don't need updates:**
- If no relevant changes detected, skip the file
- Report skipped files in the summary
- Don't update the timestamp if content is unchanged
- This preserves accurate "last modified" information
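The skip rule can be sketched with a byte-level comparison before writing (the helper name and file arguments are illustrative):

```bash
#!/usr/bin/env bash
# Only replace the target when the regenerated content actually differs,
# so unchanged files keep their original modification time.
update_if_changed() {
  target="$1" candidate="$2"
  if [ -f "$target" ] && cmp -s "$target" "$candidate"; then
    echo "⏭️ $(basename "$target") unchanged - skipping"
    rm -f "$candidate"
  else
    mv "$candidate" "$target"
    echo "✅ $(basename "$target") updated"
  fi
}
```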

### 5. Error Handling

**Common Issues:**
- **File locked:** "❌ Cannot update {file} - may be open in editor"
- **Permission denied:** "❌ Cannot write to {file} - check permissions"
- **Corrupted file:** "⚠️ {file} appears corrupted - skipping update"
- **Disk space:** "❌ Insufficient disk space for updates"

If an update fails:
- Report which files were successfully updated
- Note which files failed and why
- Preserve original files (don't leave a corrupted state)

### 6. Update Summary

Provide detailed summary of updates:

```
🔄 Context Update Complete

📊 Update Statistics:
- Files Scanned: {total_count}
- Files Updated: {updated_count}
- Files Skipped: {skipped_count} (no changes needed)
- Errors: {error_count}

📝 Updated Files:
✅ progress.md - Updated recent commits, current status
✅ tech-context.md - Added 3 new dependencies
✅ project-structure.md - Noted new /utils directory

⏭️ Skipped Files (no changes):
- project-brief.md (last updated: 5 days ago)
- project-vision.md (last updated: 2 weeks ago)
- system-patterns.md (last updated: 3 days ago)

⚠️ Issues:
{any warnings or errors}

⏰ Last Update: {timestamp}
🔄 Next: Run this command regularly to keep context current
💡 Tip: Major changes? Consider running /context:create for a full refresh
```

### 7. Incremental Update Tracking

**Track what was updated:**
- Note which sections of each file were modified
- Keep changes focused and surgical
- Don't regenerate unchanged content
- Preserve formatting and structure

### 8. Performance Optimization

For large projects:
- Process files in parallel when possible
- Show progress: "Updating context files... {current}/{total}"
- Skip very large files with a warning
- Use git diff to quickly identify changed areas

## Context Gathering Commands

Use these commands to detect changes:
- Context directory: `.claude/context/`
- Current git status: `git status --short`
- Recent commits: `git log --oneline -10`
- Changed files: `git diff --name-only HEAD~5..HEAD 2>/dev/null`
- Branch info: `git branch --show-current`
- Uncommitted changes: `git diff --stat`
- New untracked files: `git ls-files --others --exclude-standard | head -10`
- Dependency changes: Check package.json, requirements.txt, etc.

## Important Notes

- **Only update files with actual changes** - preserve accurate timestamps
- **Always use real datetime** from system clock for `last_updated`
- **Make surgical updates** - don't regenerate entire files
- **Validate each update** - ensure files remain valid
- **Provide detailed summary** - show what changed and what didn't
- **Handle errors gracefully** - don't corrupt existing context

$ARGUMENTS
@ -0,0 +1,11 @@

---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/blocked.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,102 @@

---
allowed-tools: Bash, Read, Write, LS
---

# Clean

Clean up completed work and archive old epics.

## Usage
```
/pm:clean [--dry-run]
```

Options:
- `--dry-run` - Show what would be cleaned without doing it

## Instructions

### 1. Identify Completed Epics

Find epics with:
- `status: completed` in frontmatter
- All tasks closed
- Last update > 30 days ago

### 2. Identify Stale Work

Find:
- Progress files for closed issues
- Update directories for completed work
- Orphaned task files (epic deleted)
- Empty directories
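A sketch of the scan (paths follow the `.claude/epics/` layout this PM system uses; treat hits as candidates to confirm against issue status, not automatic deletions):

```bash
#!/usr/bin/env bash
# List empty directories and progress files untouched for more than 30 days.
scan_stale() {
  root="$1"
  find "$root" -mindepth 1 -type d -empty 2>/dev/null | sed 's/^/empty-dir: /'
  find "$root" -type f -name 'progress.md' -mtime +30 2>/dev/null | sed 's/^/stale-file: /'
}

scan_stale .claude/epics
```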

### 3. Show Cleanup Plan

```
🧹 Cleanup Plan

Completed Epics to Archive:
  {epic_name} - Completed {days} days ago
  {epic_name} - Completed {days} days ago

Stale Progress to Remove:
  {count} progress files for closed issues

Empty Directories:
  {list_of_empty_dirs}

Space to Recover: ~{size}KB

{If --dry-run}: This is a dry run. No changes made.
{Otherwise}: Proceed with cleanup? (yes/no)
```

### 4. Execute Cleanup

If user confirms:

**Archive Epics:**
```bash
mkdir -p .claude/epics/.archived
mv .claude/epics/{completed_epic} .claude/epics/.archived/
```

**Remove Stale Files:**
- Delete progress files for issues closed > 30 days ago
- Remove empty update directories
- Clean up orphaned files

**Create Archive Log:**
Create or append to `.claude/epics/.archived/archive-log.md`:
```markdown
# Archive Log

## {current_date}
- Archived: {epic_name} (completed {date})
- Removed: {count} stale progress files
- Cleaned: {count} empty directories
```

### 5. Output

```
✅ Cleanup Complete

Archived:
  {count} completed epics

Removed:
  {count} stale files
  {count} empty directories

Space recovered: {size}KB

System is clean and organized.
```

## Important Notes

- Always offer `--dry-run` to preview changes.
- Never delete PRDs or incomplete work.
- Keep the archive log for history.
@ -0,0 +1,69 @@

---
allowed-tools: Bash, Read, Write, LS
---

# Epic Close

Mark an epic as complete when all tasks are done.

## Usage
```
/pm:epic-close <epic_name>
```

## Instructions

### 1. Verify All Tasks Complete

Check all task files in `.claude/epics/$ARGUMENTS/`:
- Verify all have `status: closed` in frontmatter
- If any open tasks found: "❌ Cannot close epic. Open tasks remain: {list}"
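The check above can be sketched as follows (assumes numbered task files with a `status:` frontmatter field, as produced by /pm:epic-decompose; `open_tasks` is a hypothetical helper name):

```bash
#!/usr/bin/env bash
# Print the basename of every task file whose status is not `closed`.
open_tasks() {
  dir="$1"
  for f in "$dir"/[0-9][0-9][0-9].md; do
    [ -e "$f" ] || continue
    grep -q '^status: closed$' "$f" || basename "$f"
  done
}

open="$(open_tasks ".claude/epics/$ARGUMENTS")"
[ -z "$open" ] || echo "❌ Cannot close epic. Open tasks remain: $open"
```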

### 2. Update Epic Status

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update epic.md frontmatter:
```yaml
status: completed
progress: 100%
updated: {current_datetime}
completed: {current_datetime}
```

### 3. Update PRD Status

If the epic references a PRD, update its status to "complete".

### 4. Close Epic on GitHub

If the epic has a GitHub issue:
```bash
gh issue close {epic_issue_number} --comment "✅ Epic completed - all tasks done"
```

### 5. Archive Option

Ask user: "Archive completed epic? (yes/no)"

If yes:
- Move the epic directory to `.claude/epics/.archived/{epic_name}/`
- Create an archive summary with the completion date

### 6. Output

```
✅ Epic closed: $ARGUMENTS
Tasks completed: {count}
Duration: {days_from_created_to_completed}

{If archived}: Archived to .claude/epics/.archived/

Next epic: Run /pm:next to see priority work
```

## Important Notes

- Only close epics with all tasks complete.
- Preserve all data when archiving.
- Update related PRD status.
@ -0,0 +1,230 @@

---
allowed-tools: Bash, Read, Write, LS, Task
---

# Epic Decompose

Break an epic into concrete, actionable tasks.

## Usage
```
/pm:epic-decompose <feature_name>
```

## Required Rules

**IMPORTANT:** Before executing this command, read and follow:
- `.claude/rules/datetime.md` - For getting real current date/time

## Preflight Checklist

Before proceeding, complete these validation steps.
Do not bother the user with progress on preflight checks ("I'm not going to ..."). Just do them and move on.

1. **Verify epic exists:**
   - Check if `.claude/epics/$ARGUMENTS/epic.md` exists
   - If not found, tell user: "❌ Epic not found: $ARGUMENTS. First create it with: /pm:prd-parse $ARGUMENTS"
   - Stop execution if the epic doesn't exist

2. **Check for existing tasks:**
   - Check if any numbered task files (001.md, 002.md, etc.) already exist in `.claude/epics/$ARGUMENTS/`
   - If tasks exist, list them and ask: "⚠️ Found {count} existing tasks. Delete and recreate all tasks? (yes/no)"
   - Only proceed with an explicit 'yes' confirmation
   - If the user says no, suggest: "View existing tasks with: /pm:epic-show $ARGUMENTS"

3. **Validate epic frontmatter:**
   - Verify the epic has valid frontmatter with: name, status, created, prd
   - If invalid, tell user: "❌ Invalid epic frontmatter. Please check: .claude/epics/$ARGUMENTS/epic.md"

4. **Check epic status:**
   - If the epic status is already "completed", warn user: "⚠️ Epic is marked as completed. Are you sure you want to decompose it again?"

## Instructions

You are decomposing an epic into specific, actionable tasks for: **$ARGUMENTS**

### 1. Read the Epic
- Load the epic from `.claude/epics/$ARGUMENTS/epic.md`
- Understand the technical approach and requirements
- Review the task breakdown preview

### 2. Analyze for Parallel Creation

Determine if tasks can be created in parallel:
- If tasks are mostly independent: Create in parallel using Task agents
- If tasks have complex dependencies: Create sequentially
- For best results: Group independent tasks for parallel creation

### 3. Parallel Task Creation (When Possible)

If tasks can be created in parallel, spawn sub-agents:

```yaml
Task:
  description: "Create task files batch {X}"
  subagent_type: "general-purpose"
  prompt: |
    Create task files for epic: $ARGUMENTS

    Tasks to create:
    - {list of 3-4 tasks for this batch}

    For each task:
    1. Create file: .claude/epics/$ARGUMENTS/{number}.md
    2. Use exact format with frontmatter and all sections
    3. Follow task breakdown from epic
    4. Set parallel/depends_on fields appropriately
    5. Number sequentially (001.md, 002.md, etc.)

    Return: List of files created
```

### 4. Task File Format with Frontmatter

For each task, create a file with this exact structure:

```markdown
---
name: [Task Title]
status: open
created: [Current ISO date/time]
updated: [Current ISO date/time]
github: [Will be updated when synced to GitHub]
depends_on: [] # List of task numbers this depends on, e.g., [001, 002]
parallel: true # Can this run in parallel with other tasks?
conflicts_with: [] # Tasks that modify same files, e.g., [003, 004]
---

# Task: [Task Title]

## Description
Clear, concise description of what needs to be done

## Acceptance Criteria
- [ ] Specific criterion 1
- [ ] Specific criterion 2
- [ ] Specific criterion 3

## Technical Details
- Implementation approach
- Key considerations
- Code locations/files affected

## Dependencies
- [ ] Task/Issue dependencies
- [ ] External dependencies

## Effort Estimate
- Size: XS/S/M/L/XL
- Hours: estimated hours
- Parallel: true/false (can run in parallel with other tasks)

## Definition of Done
- [ ] Code implemented
- [ ] Tests written and passing
- [ ] Documentation updated
- [ ] Code reviewed
- [ ] Deployed to staging
```

### 5. Task Naming Convention

Save tasks as: `.claude/epics/$ARGUMENTS/{task_number}.md`
- Use sequential numbering: 001.md, 002.md, etc.
- Keep task titles short but descriptive

### 6. Frontmatter Guidelines
- **name**: Use a descriptive task title (without "Task:" prefix)
- **status**: Always start with "open" for new tasks
- **created**: Get REAL current datetime by running: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
- **updated**: Use the same real datetime as created for new tasks
- **github**: Leave placeholder text - will be updated during sync
- **depends_on**: List task numbers that must complete before this can start (e.g., [001, 002])
- **parallel**: Set to true if this can run alongside other tasks without conflicts
- **conflicts_with**: List task numbers that modify the same files (helps coordination)

### 7. Task Types to Consider
- **Setup tasks**: Environment, dependencies, scaffolding
- **Data tasks**: Models, schemas, migrations
- **API tasks**: Endpoints, services, integration
- **UI tasks**: Components, pages, styling
- **Testing tasks**: Unit tests, integration tests
- **Documentation tasks**: README, API docs
- **Deployment tasks**: CI/CD, infrastructure

### 8. Parallelization

Mark tasks with `parallel: true` if they can be worked on simultaneously without conflicts.

### 9. Execution Strategy

Choose based on task count and complexity:

**Small Epic (< 5 tasks)**: Create sequentially for simplicity

**Medium Epic (5-10 tasks)**:
- Batch into 2-3 groups
- Spawn agents for each batch
- Consolidate results

**Large Epic (> 10 tasks)**:
- Analyze dependencies first
- Group independent tasks
- Launch parallel agents (max 5 concurrent)
- Create dependent tasks after prerequisites

Example for parallel execution:
```markdown
Spawning 3 agents for parallel task creation:
- Agent 1: Creating tasks 001-003 (Database layer)
- Agent 2: Creating tasks 004-006 (API layer)
- Agent 3: Creating tasks 007-009 (UI layer)
```

### 10. Task Dependency Validation

When creating tasks with dependencies:
- Ensure referenced dependencies exist (e.g., if Task 003 depends on Task 002, verify 002 was created)
- Check for circular dependencies (Task A → Task B → Task A)
- If dependency issues found, warn but continue: "⚠️ Task dependency warning: {details}"
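The existence half of this validation is easy to script; full cycle detection needs a graph walk, so this sketch covers missing references only (assumes single-line `depends_on: [...]` entries; `check_deps` is a hypothetical name):

```bash
#!/usr/bin/env bash
# Warn when a task's depends_on references a task file that doesn't exist.
check_deps() {
  dir="$1"
  for f in "$dir"/[0-9][0-9][0-9].md; do
    [ -e "$f" ] || continue
    # Pull the bracketed list, drop spaces, split on commas
    deps="$(sed -n 's/^depends_on: \[\([^]]*\)\].*/\1/p' "$f" | tr -d ' ' | tr ',' '\n')"
    for dep in $deps; do
      [ -e "$dir/$dep.md" ] || \
        echo "⚠️ Task dependency warning: $(basename "$f") depends on missing $dep"
    done
  done
}
```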

### 11. Update Epic with Task Summary

After creating all tasks, update the epic file by adding this section:
```markdown
## Tasks Created
- [ ] 001.md - {Task Title} (parallel: true/false)
- [ ] 002.md - {Task Title} (parallel: true/false)
- etc.

Total tasks: {count}
Parallel tasks: {parallel_count}
Sequential tasks: {sequential_count}
Estimated total effort: {sum of hours}
```

Also update the epic's frontmatter progress if needed (still 0% until tasks actually start).

### 12. Quality Validation

Before finalizing tasks, verify:
- [ ] All tasks have clear acceptance criteria
- [ ] Task sizes are reasonable (1-3 days each)
- [ ] Dependencies are logical and achievable
- [ ] Parallel tasks don't conflict with each other
- [ ] Combined tasks cover all epic requirements

### 13. Post-Decomposition

After successfully creating tasks:
1. Confirm: "✅ Created {count} tasks for epic: $ARGUMENTS"
2. Show summary:
   - Total tasks created
   - Parallel vs. sequential breakdown
   - Total estimated effort
3. Suggest next step: "Ready to sync to GitHub? Run: /pm:epic-sync $ARGUMENTS"

## Error Recovery

If any step fails:
- If task creation partially completes, list which tasks were created
- Provide an option to clean up partial tasks
- Never leave the epic in an inconsistent state

Aim for tasks that can be completed in 1-3 days each. Break down larger tasks into smaller, manageable pieces for the "$ARGUMENTS" epic.
@ -0,0 +1,66 @@
---
allowed-tools: Read, Write, LS
---

# Epic Edit

Edit epic details after creation.

## Usage
```
/pm:epic-edit <epic_name>
```

## Instructions

### 1. Read Current Epic

Read `.claude/epics/$ARGUMENTS/epic.md`:
- Parse frontmatter
- Read content sections

### 2. Interactive Edit

Ask user what to edit:
- Name/Title
- Description/Overview
- Architecture decisions
- Technical approach
- Dependencies
- Success criteria

### 3. Update Epic File

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update epic.md:
- Preserve all frontmatter except `updated`
- Apply user's edits to content
- Update `updated` field with current datetime
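A minimal sketch of the timestamp update, assuming the frontmatter stores a plain `updated: <value>` line (the real command should follow `/rules/frontmatter-operations.md`):

```shell
#!/usr/bin/env bash
# Replace only the `updated:` frontmatter line, leaving every other line intact.
update_timestamp() {
  local file="$1"
  local now
  now=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
  # GNU sed shown; BSD/macOS sed needs `sed -i ''` for in-place edits.
  sed -i "s/^updated: .*/updated: $now/" "$file"
}
```

Because the substitution is anchored to `^updated:`, fields like `created` and `github` are untouched.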
### 4. Option to Update GitHub

If epic has GitHub URL in frontmatter:
Ask: "Update GitHub issue? (yes/no)"

If yes:
```bash
gh issue edit {issue_number} --body-file .claude/epics/$ARGUMENTS/epic.md
```

### 5. Output

```
✅ Updated epic: $ARGUMENTS
Changes made to: {sections_edited}

{If GitHub updated}: GitHub issue updated ✅

View epic: /pm:epic-show $ARGUMENTS
```

## Important Notes

Preserve frontmatter history (created, github URL, etc.).
Don't change task files when editing epic.
Follow `/rules/frontmatter-operations.md`.
@ -0,0 +1,13 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/epic-list.sh` using a sub-agent and show me the complete output.

- You MUST display the complete output.
- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,211 @@
---
allowed-tools: Bash, Read, Write
---

# Epic Merge

Merge completed epic from worktree back to main branch.

## Usage
```
/pm:epic-merge <epic_name>
```

## Quick Check

1. **Verify worktree exists:**
   ```bash
   git worktree list | grep "epic-$ARGUMENTS" || echo "❌ No worktree for epic: $ARGUMENTS"
   ```

2. **Check for active agents:**
   Read `.claude/epics/$ARGUMENTS/execution-status.md`
   If active agents exist: "⚠️ Active agents detected. Stop them first with: /pm:epic-stop $ARGUMENTS"
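The active-agent check can be sketched as below; it assumes the `execution-status.md` layout shown elsewhere in these commands (an `## Active Agents` section whose entries start with `- Agent`):

```shell
#!/usr/bin/env bash
# Return 0 (and print a warning) if execution-status.md lists any active agents.
has_active_agents() {
  local status_file="$1"
  [ -f "$status_file" ] || return 1
  # Look for "- Agent..." lines inside the "## Active Agents" section only.
  if awk '/^## Active Agents/{in_sec=1; next}
          /^## /{in_sec=0}
          in_sec && /^- Agent/{found=1}
          END{exit !found}' "$status_file"; then
    echo "⚠️ Active agents detected. Stop them first with: /pm:epic-stop <epic>"
    return 0
  fi
  return 1
}
```

Agent lines in other sections (e.g. `## Completed`) are deliberately ignored.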
## Instructions

### 1. Pre-Merge Validation

Navigate to worktree and check status:
```bash
cd ../epic-$ARGUMENTS

# Check for uncommitted changes
if [[ $(git status --porcelain) ]]; then
  echo "⚠️ Uncommitted changes in worktree:"
  git status --short
  echo "Commit or stash changes before merging"
  exit 1
fi

# Check branch status
git fetch origin
git status -sb
```

### 2. Run Tests (Optional but Recommended)

```bash
# Look for test commands
if [ -f package.json ]; then
  npm test || echo "⚠️ Tests failed. Continue anyway? (yes/no)"
elif [ -f Makefile ]; then
  make test || echo "⚠️ Tests failed. Continue anyway? (yes/no)"
fi
```

### 3. Update Epic Documentation

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update `.claude/epics/$ARGUMENTS/epic.md`:
- Set status to "completed"
- Update completion date
- Add final summary
### 4. Attempt Merge

```bash
# Return to main repository
cd {main-repo-path}

# Ensure main is up to date
git checkout main
git pull origin main

# Attempt merge
echo "Merging epic/$ARGUMENTS to main..."
git merge epic/$ARGUMENTS --no-ff -m "Merge epic: $ARGUMENTS

Completed features:
$(cd .claude/epics/$ARGUMENTS && ls *.md | grep -E '^[0-9]+' | while read f; do
  echo "- $(grep '^name:' $f | cut -d: -f2)"
done)

Closes epic #$(grep 'github:' .claude/epics/$ARGUMENTS/epic.md | grep -oE '#[0-9]+')"
```

### 5. Handle Merge Conflicts

If merge fails with conflicts:
```bash
# Check conflict status
git status

echo "
❌ Merge conflicts detected!

Conflicts in:
$(git diff --name-only --diff-filter=U)

Options:
1. Resolve manually:
   - Edit conflicted files
   - git add {files}
   - git commit

2. Abort merge:
   git merge --abort

3. Get help:
   /pm:epic-resolve $ARGUMENTS

Worktree preserved at: ../epic-$ARGUMENTS
"
exit 1
```

### 6. Post-Merge Cleanup

If merge succeeds:
```bash
# Push to remote
git push origin main

# Clean up worktree
git worktree remove ../epic-$ARGUMENTS
echo "✅ Worktree removed: ../epic-$ARGUMENTS"

# Delete branch
git branch -d epic/$ARGUMENTS
git push origin --delete epic/$ARGUMENTS 2>/dev/null || true

# Archive epic locally
mkdir -p .claude/epics/archived/
mv .claude/epics/$ARGUMENTS .claude/epics/archived/
echo "✅ Epic archived: .claude/epics/archived/$ARGUMENTS"
```

### 7. Update GitHub Issues

Close related issues:
```bash
# Get issue numbers from epic
epic_issue=$(grep 'github:' .claude/epics/archived/$ARGUMENTS/epic.md | grep -oE '[0-9]+$')

# Close epic issue
gh issue close $epic_issue -c "Epic completed and merged to main"

# Close task issues
for task_file in .claude/epics/archived/$ARGUMENTS/[0-9]*.md; do
  issue_num=$(grep 'github:' $task_file | grep -oE '[0-9]+$')
  if [ ! -z "$issue_num" ]; then
    gh issue close $issue_num -c "Completed in epic merge"
  fi
done
```

### 8. Final Output

```
✅ Epic Merged Successfully: $ARGUMENTS

Summary:
Branch: epic/$ARGUMENTS → main
Commits merged: {count}
Files changed: {count}
Issues closed: {count}

Cleanup completed:
✓ Worktree removed
✓ Branch deleted
✓ Epic archived
✓ GitHub issues closed

Next steps:
- Deploy changes if needed
- Start new epic: /pm:prd-new {feature}
- View completed work: git log --oneline -20
```

## Conflict Resolution Help

If conflicts need resolution:
```
The epic branch has conflicts with main.

This typically happens when:
- Main has changed since epic started
- Multiple epics modified same files
- Dependencies were updated

To resolve:
1. Open conflicted files
2. Look for <<<<<<< markers
3. Choose correct version or combine
4. Remove conflict markers
5. git add {resolved files}
6. git commit
7. git push

Or abort and try later:
git merge --abort
```

## Important Notes

- Always check for uncommitted changes first
- Run tests before merging when possible
- Use --no-ff to preserve epic history
- Archive epic data instead of deleting
- Close GitHub issues to maintain sync
@ -0,0 +1,89 @@
---
allowed-tools: Read, LS
---

# Epic Oneshot

Decompose epic into tasks and sync to GitHub in one operation.

## Usage
```
/pm:epic-oneshot <feature_name>
```

## Instructions

### 1. Validate Prerequisites

Check that epic exists and hasn't been processed:
```bash
# Epic must exist
test -f .claude/epics/$ARGUMENTS/epic.md || echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"

# Check for existing tasks
if ls .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | grep -q .; then
  echo "⚠️ Tasks already exist. This will create duplicates."
  echo "Delete existing tasks or use /pm:epic-sync instead."
  exit 1
fi

# Check if already synced
if grep -q "github:" .claude/epics/$ARGUMENTS/epic.md; then
  echo "⚠️ Epic already synced to GitHub."
  echo "Use /pm:epic-sync to update."
  exit 1
fi
```

### 2. Execute Decompose

Simply run the decompose command:
```
Running: /pm:epic-decompose $ARGUMENTS
```

This will:
- Read the epic
- Create task files (using parallel agents if appropriate)
- Update epic with task summary

### 3. Execute Sync

Immediately follow with sync:
```
Running: /pm:epic-sync $ARGUMENTS
```

This will:
- Create epic issue on GitHub
- Create sub-issues (using parallel agents if appropriate)
- Rename task files to issue IDs
- Create worktree

### 4. Output

```
🚀 Epic Oneshot Complete: $ARGUMENTS

Step 1: Decomposition ✓
- Tasks created: {count}

Step 2: GitHub Sync ✓
- Epic: #{number}
- Sub-issues created: {count}
- Worktree: ../epic-$ARGUMENTS

Ready for development!
Start work: /pm:epic-start $ARGUMENTS
Or single task: /pm:issue-start {task_number}
```

## Important Notes

This is simply a convenience wrapper that runs:
1. `/pm:epic-decompose`
2. `/pm:epic-sync`

Both commands handle their own error checking, parallel execution, and validation. This command just orchestrates them in sequence.

Use this when you're confident the epic is ready and want to go from epic to GitHub issues in one step.
@ -0,0 +1,102 @@
---
allowed-tools: Read, Write, LS
---

# Epic Refresh

Update epic progress based on task states.

## Usage
```
/pm:epic-refresh <epic_name>
```

## Instructions

### 1. Count Task Status

Scan all task files in `.claude/epics/$ARGUMENTS/`:
- Count total tasks
- Count tasks with `status: closed`
- Count tasks with `status: open`
- Count tasks with work in progress

### 2. Calculate Progress

```
progress = (closed_tasks / total_tasks) * 100
```

Round to nearest integer.
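The counting and rounding above can be sketched as follows; the `NNN.md` filenames and `status:` frontmatter lines match the format the other commands in this repo grep for:

```shell
#!/usr/bin/env bash
# Compute integer progress (0-100) from task frontmatter in an epic directory.
epic_progress() {
  local dir="$1"
  local total closed
  total=$(ls "$dir"/[0-9]*.md 2>/dev/null | wc -l)
  closed=$(grep -l '^status: *closed' "$dir"/[0-9]*.md 2>/dev/null | wc -l)
  [ "$total" -eq 0 ] && { echo 0; return; }
  # Adding 0.5 before the %d truncation rounds to the nearest integer.
  awk -v c="$closed" -v t="$total" 'BEGIN { printf "%d\n", (c / t) * 100 + 0.5 }'
}
```

For example, 1 closed task out of 3 yields 33, and 2 out of 3 yields 67.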
### 3. Update GitHub Task List

If epic has GitHub issue, sync task checkboxes:

```bash
# Get epic issue number from epic.md frontmatter
epic_issue={extract_from_github_field}

if [ ! -z "$epic_issue" ]; then
  # Get current epic body
  gh issue view $epic_issue --json body -q .body > /tmp/epic-body.md

  # For each task, check its status and update checkbox
  for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
    task_issue=$(grep 'github:' $task_file | grep -oE '[0-9]+$')
    task_status=$(grep 'status:' $task_file | cut -d: -f2 | tr -d ' ')

    if [ "$task_status" = "closed" ]; then
      # Mark as checked
      sed -i "s/- \[ \] #$task_issue/- [x] #$task_issue/" /tmp/epic-body.md
    else
      # Ensure unchecked (in case manually checked)
      sed -i "s/- \[x\] #$task_issue/- [ ] #$task_issue/" /tmp/epic-body.md
    fi
  done

  # Update epic issue
  gh issue edit $epic_issue --body-file /tmp/epic-body.md
fi
```

### 4. Determine Epic Status

- If progress = 0% and no work started: `backlog`
- If progress > 0% and < 100%: `in-progress`
- If progress = 100%: `completed`
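The status rules above map directly to a small helper (a sketch; it simplifies the first rule by treating 0% alone as `backlog`, ignoring the "no work started" qualifier):

```shell
#!/usr/bin/env bash
# Map an integer progress percentage to an epic status string.
epic_status() {
  local progress="$1"
  if [ "$progress" -eq 0 ]; then
    echo "backlog"
  elif [ "$progress" -lt 100 ]; then
    echo "in-progress"
  else
    echo "completed"
  fi
}
```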
### 5. Update Epic

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update epic.md frontmatter:
```yaml
status: {calculated_status}
progress: {calculated_progress}%
updated: {current_datetime}
```

### 6. Output

```
🔄 Epic refreshed: $ARGUMENTS

Tasks:
Closed: {closed_count}
Open: {open_count}
Total: {total_count}

Progress: {old_progress}% → {new_progress}%
Status: {old_status} → {new_status}
GitHub: Task list updated ✓

{If complete}: Run /pm:epic-close $ARGUMENTS to close epic
{If in progress}: Run /pm:next to see priority tasks
```

## Important Notes

This is useful after manual task edits or GitHub sync.
Don't modify task files, only epic status.
Preserve all other frontmatter fields.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/epic-show.sh $ARGUMENTS` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,221 @@
---
allowed-tools: Bash, Read, Write, LS, Task
---

# Epic Start

Launch parallel agents to work on epic tasks in a shared worktree.

## Usage
```
/pm:epic-start <epic_name>
```

## Quick Check

1. **Verify epic exists:**
   ```bash
   test -f .claude/epics/$ARGUMENTS/epic.md || echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"
   ```

2. **Check GitHub sync:**
   Look for `github:` field in epic frontmatter.
   If missing: "❌ Epic not synced. Run: /pm:epic-sync $ARGUMENTS first"

3. **Check for worktree:**
   ```bash
   git worktree list | grep "epic-$ARGUMENTS"
   ```

## Instructions

### 1. Create or Enter Worktree

Follow `/rules/worktree-operations.md`:

```bash
# If worktree doesn't exist, create it
if ! git worktree list | grep -q "epic-$ARGUMENTS"; then
  git checkout main
  git pull origin main
  git worktree add ../epic-$ARGUMENTS -b epic/$ARGUMENTS
  echo "✅ Created worktree: ../epic-$ARGUMENTS"
else
  echo "✅ Using existing worktree: ../epic-$ARGUMENTS"
fi
```

### 2. Identify Ready Issues

Read all task files in `.claude/epics/$ARGUMENTS/`:
- Parse frontmatter for `status`, `depends_on`, `parallel` fields
- Check GitHub issue status if needed
- Build dependency graph

Categorize issues:
- **Ready**: No unmet dependencies, not started
- **Blocked**: Has unmet dependencies
- **In Progress**: Already being worked on
- **Complete**: Finished
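The ready/blocked split can be sketched as below, assuming each task file carries a `status:` line and a bracketed, comma-separated `depends_on:` list in its frontmatter (the format other commands in this repo grep for):

```shell
#!/usr/bin/env bash
# Print "ready" or "blocked" for one task file, given the epic directory.
task_state() {
  local dir="$1" file="$2"
  local deps dep dep_status
  # e.g. "depends_on: [001, 002]" -> "001 002"
  deps=$(grep '^depends_on:' "$file" | sed 's/^depends_on: *//; s/[][,]/ /g')
  for dep in $deps; do
    dep_status=$(grep '^status:' "$dir/$dep.md" 2>/dev/null | sed 's/^status: *//')
    if [ "$dep_status" != "closed" ]; then
      echo "blocked"
      return
    fi
  done
  echo "ready"   # no depends_on line means no dependencies
}
```

A task whose only dependency is closed comes back `ready`; a task depending on an open task comes back `blocked`.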
### 3. Analyze Ready Issues

For each ready issue without analysis:
```bash
# Check for analysis
if ! test -f .claude/epics/$ARGUMENTS/{issue}-analysis.md; then
  echo "Analyzing issue #{issue}..."
  # Run analysis (inline or via Task tool)
fi
```

### 4. Launch Parallel Agents

For each ready issue with analysis:

```markdown
## Starting Issue #{issue}: {title}

Reading analysis...
Found {count} parallel streams:
- Stream A: {description} (Agent-{id})
- Stream B: {description} (Agent-{id})

Launching agents in worktree: ../epic-$ARGUMENTS/
```

Use Task tool to launch each stream:
```yaml
Task:
  description: "Issue #{issue} Stream {X}"
  subagent_type: "{agent_type}"
  prompt: |
    Working in worktree: ../epic-$ARGUMENTS/
    Issue: #{issue} - {title}
    Stream: {stream_name}

    Your scope:
    - Files: {file_patterns}
    - Work: {stream_description}

    Read full requirements from:
    - .claude/epics/$ARGUMENTS/{task_file}
    - .claude/epics/$ARGUMENTS/{issue}-analysis.md

    Follow coordination rules in /rules/agent-coordination.md

    Commit frequently with message format:
    "Issue #{issue}: {specific change}"

    Update progress in:
    .claude/epics/$ARGUMENTS/updates/{issue}/stream-{X}.md
```

### 5. Track Active Agents

Create/update `.claude/epics/$ARGUMENTS/execution-status.md`:

```markdown
---
started: {datetime}
worktree: ../epic-$ARGUMENTS
branch: epic/$ARGUMENTS
---

# Execution Status

## Active Agents
- Agent-1: Issue #1234 Stream A (Database) - Started {time}
- Agent-2: Issue #1234 Stream B (API) - Started {time}
- Agent-3: Issue #1235 Stream A (UI) - Started {time}

## Queued Issues
- Issue #1236 - Waiting for #1234
- Issue #1237 - Waiting for #1235

## Completed
- {None yet}
```

### 6. Monitor and Coordinate

Set up monitoring:
```bash
echo "
Agents launched successfully!

Monitor progress:
/pm:epic-status $ARGUMENTS

View worktree changes:
cd ../epic-$ARGUMENTS && git status

Stop all agents:
/pm:epic-stop $ARGUMENTS

Merge when complete:
/pm:epic-merge $ARGUMENTS
"
```

### 7. Handle Dependencies

As agents complete streams:
- Check if any blocked issues are now ready
- Launch new agents for newly-ready work
- Update execution-status.md

## Output Format

```
🚀 Epic Execution Started: $ARGUMENTS

Worktree: ../epic-$ARGUMENTS
Branch: epic/$ARGUMENTS

Launching {total} agents across {issue_count} issues:

Issue #1234: Database Schema
├─ Stream A: Schema creation (Agent-1) ✓ Started
└─ Stream B: Migrations (Agent-2) ✓ Started

Issue #1235: API Endpoints
├─ Stream A: User endpoints (Agent-3) ✓ Started
├─ Stream B: Post endpoints (Agent-4) ✓ Started
└─ Stream C: Tests (Agent-5) ⏸ Waiting for A & B

Blocked Issues (2):
- #1236: UI Components (depends on #1234)
- #1237: Integration (depends on #1235, #1236)

Monitor with: /pm:epic-status $ARGUMENTS
```

## Error Handling

If agent launch fails:
```
❌ Failed to start Agent-{id}
Issue: #{issue}
Stream: {stream}
Error: {reason}

Continue with other agents? (yes/no)
```

If worktree creation fails:
```
❌ Cannot create worktree
{git error message}

Try: git worktree prune
Or: Check existing worktrees with: git worktree list
```

## Important Notes

- Follow `/rules/worktree-operations.md` for git operations
- Follow `/rules/agent-coordination.md` for parallel work
- Agents work in the SAME worktree (not separate ones)
- Maximum parallel agents should be reasonable (e.g., 5-10)
- Monitor system resources if launching many agents
@ -0,0 +1,247 @@
---
allowed-tools: Bash, Read, Write, LS, Task
---

# Epic Start

Launch parallel agents to work on epic tasks in a shared branch.

## Usage
```
/pm:epic-start <epic_name>
```

## Quick Check

1. **Verify epic exists:**
   ```bash
   test -f .claude/epics/$ARGUMENTS/epic.md || echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"
   ```

2. **Check GitHub sync:**
   Look for `github:` field in epic frontmatter.
   If missing: "❌ Epic not synced. Run: /pm:epic-sync $ARGUMENTS first"

3. **Check for branch:**
   ```bash
   git branch -a | grep "epic/$ARGUMENTS"
   ```

4. **Check for uncommitted changes:**
   ```bash
   git status --porcelain
   ```
   If output is not empty: "❌ You have uncommitted changes. Please commit or stash them before starting an epic"

## Instructions

### 1. Create or Enter Branch

Follow `/rules/branch-operations.md`:

```bash
# Check for uncommitted changes
if [ -n "$(git status --porcelain)" ]; then
  echo "❌ You have uncommitted changes. Please commit or stash them before starting an epic."
  exit 1
fi

# If branch doesn't exist, create it
if ! git branch -a | grep -q "epic/$ARGUMENTS"; then
  git checkout main
  git pull origin main
  git checkout -b epic/$ARGUMENTS
  git push -u origin epic/$ARGUMENTS
  echo "✅ Created branch: epic/$ARGUMENTS"
else
  git checkout epic/$ARGUMENTS
  git pull origin epic/$ARGUMENTS
  echo "✅ Using existing branch: epic/$ARGUMENTS"
fi
```

### 2. Identify Ready Issues

Read all task files in `.claude/epics/$ARGUMENTS/`:
- Parse frontmatter for `status`, `depends_on`, `parallel` fields
- Check GitHub issue status if needed
- Build dependency graph

Categorize issues:
- **Ready**: No unmet dependencies, not started
- **Blocked**: Has unmet dependencies
- **In Progress**: Already being worked on
- **Complete**: Finished

### 3. Analyze Ready Issues

For each ready issue without analysis:
```bash
# Check for analysis
if ! test -f .claude/epics/$ARGUMENTS/{issue}-analysis.md; then
  echo "Analyzing issue #{issue}..."
  # Run analysis (inline or via Task tool)
fi
```
### 4. Launch Parallel Agents

For each ready issue with analysis:

```markdown
## Starting Issue #{issue}: {title}

Reading analysis...
Found {count} parallel streams:
- Stream A: {description} (Agent-{id})
- Stream B: {description} (Agent-{id})

Launching agents in branch: epic/$ARGUMENTS
```

Use Task tool to launch each stream:
```yaml
Task:
  description: "Issue #{issue} Stream {X}"
  subagent_type: "{agent_type}"
  prompt: |
    Working in branch: epic/$ARGUMENTS
    Issue: #{issue} - {title}
    Stream: {stream_name}

    Your scope:
    - Files: {file_patterns}
    - Work: {stream_description}

    Read full requirements from:
    - .claude/epics/$ARGUMENTS/{task_file}
    - .claude/epics/$ARGUMENTS/{issue}-analysis.md

    Follow coordination rules in /rules/agent-coordination.md

    Commit frequently with message format:
    "Issue #{issue}: {specific change}"

    Update progress in:
    .claude/epics/$ARGUMENTS/updates/{issue}/stream-{X}.md
```

### 5. Track Active Agents

Create/update `.claude/epics/$ARGUMENTS/execution-status.md`:

```markdown
---
started: {datetime}
branch: epic/$ARGUMENTS
---

# Execution Status

## Active Agents
- Agent-1: Issue #1234 Stream A (Database) - Started {time}
- Agent-2: Issue #1234 Stream B (API) - Started {time}
- Agent-3: Issue #1235 Stream A (UI) - Started {time}

## Queued Issues
- Issue #1236 - Waiting for #1234
- Issue #1237 - Waiting for #1235

## Completed
- {None yet}
```

### 6. Monitor and Coordinate

Set up monitoring:
```bash
echo "
Agents launched successfully!

Monitor progress:
/pm:epic-status $ARGUMENTS

View branch changes:
git status

Stop all agents:
/pm:epic-stop $ARGUMENTS

Merge when complete:
/pm:epic-merge $ARGUMENTS
"
```

### 7. Handle Dependencies

As agents complete streams:
- Check if any blocked issues are now ready
- Launch new agents for newly-ready work
- Update execution-status.md

## Output Format

```
🚀 Epic Execution Started: $ARGUMENTS

Branch: epic/$ARGUMENTS

Launching {total} agents across {issue_count} issues:

Issue #1234: Database Schema
├─ Stream A: Schema creation (Agent-1) ✓ Started
└─ Stream B: Migrations (Agent-2) ✓ Started

Issue #1235: API Endpoints
├─ Stream A: User endpoints (Agent-3) ✓ Started
├─ Stream B: Post endpoints (Agent-4) ✓ Started
└─ Stream C: Tests (Agent-5) ⏸ Waiting for A & B

Blocked Issues (2):
- #1236: UI Components (depends on #1234)
- #1237: Integration (depends on #1235, #1236)

Monitor with: /pm:epic-status $ARGUMENTS
```

## Error Handling

If agent launch fails:
```
❌ Failed to start Agent-{id}
Issue: #{issue}
Stream: {stream}
Error: {reason}

Continue with other agents? (yes/no)
```

If uncommitted changes are found:
```
❌ You have uncommitted changes. Please commit or stash them before starting an epic.

To commit changes:
git add .
git commit -m "Your commit message"

To stash changes:
git stash push -m "Work in progress"
# (Later restore with: git stash pop)
```

If branch creation fails:
```
❌ Cannot create branch
{git error message}

Try: git branch -d epic/$ARGUMENTS
Or: Check existing branches with: git branch -a
```

## Important Notes

- Follow `/rules/branch-operations.md` for git operations
- Follow `/rules/agent-coordination.md` for parallel work
- Agents work in the SAME branch (not separate branches)
- Maximum parallel agents should be reasonable (e.g., 5-10)
- Monitor system resources if launching many agents
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/epic-status.sh $ARGUMENTS` using the bash tool and show me the complete stdout printed to the console.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,455 @@
|
||||
---
|
||||
allowed-tools: Bash, Read, Write, LS, Task
|
||||
---
|
||||
|
||||
# Epic Sync
|
||||
|
||||
Push epic and tasks to GitHub as issues.
|
||||
|
||||
## Usage
|
||||
```
|
||||
/pm:epic-sync <feature_name>
|
||||
```
|
||||
|
||||
## Quick Check
|
||||
|
||||
```bash
|
||||
# Verify epic exists
|
||||
test -f .claude/epics/$ARGUMENTS/epic.md || echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"
|
||||
|
||||
# Count task files
|
||||
ls .claude/epics/$ARGUMENTS/*.md 2>/dev/null | grep -v epic.md | wc -l
|
||||
```
|
||||
|
||||
If no tasks found: "❌ No tasks to sync. Run: /pm:epic-decompose $ARGUMENTS"
|
||||
|
||||
## Instructions
|
||||
|
||||
### 0. Check Remote Repository
|
||||
|
||||
Follow `/rules/github-operations.md` to ensure we're not syncing to the CCPM template:
|
||||
|
||||
```bash
|
||||
# Check if remote origin is the CCPM template repository
|
||||
remote_url=$(git remote get-url origin 2>/dev/null || echo "")
|
||||
if [[ "$remote_url" == *"automazeio/ccpm"* ]] || [[ "$remote_url" == *"automazeio/ccpm.git"* ]]; then
|
||||
echo "❌ ERROR: You're trying to sync with the CCPM template repository!"
|
||||
echo ""
|
||||
echo "This repository (automazeio/ccpm) is a template for others to use."
|
||||
echo "You should NOT create issues or PRs here."
|
||||
echo ""
|
||||
echo "To fix this:"
|
||||
echo "1. Fork this repository to your own GitHub account"
|
||||
echo "2. Update your remote origin:"
|
||||
echo " git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
|
||||
echo ""
|
||||
echo "Or if this is a new project:"
|
||||
echo "1. Create a new repository on GitHub"
|
||||
echo "2. Update your remote origin:"
|
||||
echo " git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
|
||||
echo ""
|
||||
echo "Current remote: $remote_url"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
### 1. Create Epic Issue

Strip frontmatter and prepare the GitHub issue body:

```bash
# Extract content without frontmatter
sed '1,/^---$/d; 1,/^---$/d' .claude/epics/$ARGUMENTS/epic.md > /tmp/epic-body-raw.md

# Remove "## Tasks Created" section and replace with Stats
awk '
  /^## Tasks Created/ {
    in_tasks=1
    next
  }
  /^## / && in_tasks {
    in_tasks=0
    # When we hit the next section after Tasks Created, add Stats
    if (total_tasks) {
      print "## Stats\n"
      print "Total tasks: " total_tasks
      print "Parallel tasks: " parallel_tasks " (can be worked on simultaneously)"
      print "Sequential tasks: " sequential_tasks " (have dependencies)"
      if (total_effort) print "Estimated total effort: " total_effort " hours"
      print ""
    }
  }
  /^Total tasks:/ && in_tasks { total_tasks = $3; next }
  /^Parallel tasks:/ && in_tasks { parallel_tasks = $3; next }
  /^Sequential tasks:/ && in_tasks { sequential_tasks = $3; next }
  /^Estimated total effort:/ && in_tasks {
    gsub(/^Estimated total effort: /, "")
    total_effort = $0
    next
  }
  !in_tasks { print }
  END {
    # If we were still in the tasks section at EOF, add stats
    if (in_tasks && total_tasks) {
      print "## Stats\n"
      print "Total tasks: " total_tasks
      print "Parallel tasks: " parallel_tasks " (can be worked on simultaneously)"
      print "Sequential tasks: " sequential_tasks " (have dependencies)"
      if (total_effort) print "Estimated total effort: " total_effort
    }
  }
' /tmp/epic-body-raw.md > /tmp/epic-body.md

# Determine epic type (feature vs bug) from content
if grep -qi "bug\|fix\|issue\|problem\|error" /tmp/epic-body.md; then
  epic_type="bug"
else
  epic_type="feature"
fi

# Create epic issue with labels
# Note: gh issue create has no --json flag; it prints the new issue URL,
# so extract the trailing number from that URL
epic_number=$(gh issue create \
  --title "Epic: $ARGUMENTS" \
  --body-file /tmp/epic-body.md \
  --label "epic,epic:$ARGUMENTS,$epic_type" | grep -oE '[0-9]+$')
```

Store the returned issue number for the epic frontmatter update.

### 2. Create Task Sub-Issues

Check if gh-sub-issue is available:
```bash
if gh extension list | grep -q "yahsan2/gh-sub-issue"; then
  use_subissues=true
else
  use_subissues=false
  echo "⚠️ gh-sub-issue not installed. Using fallback mode."
fi
```

Count task files to determine strategy:
```bash
task_count=$(ls .claude/epics/$ARGUMENTS/[0-9][0-9][0-9].md 2>/dev/null | wc -l)
```

### For Small Batches (< 5 tasks): Sequential Creation

```bash
if [ "$task_count" -lt 5 ]; then
  # Create sequentially for small batches
  for task_file in .claude/epics/$ARGUMENTS/[0-9][0-9][0-9].md; do
    [ -f "$task_file" ] || continue

    # Extract task name from frontmatter
    task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')

    # Strip frontmatter from task content
    sed '1,/^---$/d; 1,/^---$/d' "$task_file" > /tmp/task-body.md

    # Create sub-issue with labels
    # Both commands print the created issue's URL; extract the trailing number
    if [ "$use_subissues" = true ]; then
      task_number=$(gh sub-issue create \
        --parent "$epic_number" \
        --title "$task_name" \
        --body-file /tmp/task-body.md \
        --label "task,epic:$ARGUMENTS" | grep -oE '[0-9]+$')
    else
      task_number=$(gh issue create \
        --title "$task_name" \
        --body-file /tmp/task-body.md \
        --label "task,epic:$ARGUMENTS" | grep -oE '[0-9]+$')
    fi

    # Record mapping for renaming
    echo "$task_file:$task_number" >> /tmp/task-mapping.txt
  done

  # After creating all issues, update references and rename files
  # This follows the same process as step 3 below
fi
```

### For Larger Batches: Parallel Creation

```bash
if [ "$task_count" -ge 5 ]; then
  echo "Creating $task_count sub-issues in parallel..."

  # Check if gh-sub-issue is available for parallel agents
  if gh extension list | grep -q "yahsan2/gh-sub-issue"; then
    subissue_cmd="gh sub-issue create --parent $epic_number"
  else
    subissue_cmd="gh issue create"
  fi

  # Batch tasks for parallel processing
  # Spawn agents to create sub-issues in parallel with proper labels
  # Each agent must use: --label "task,epic:$ARGUMENTS"
fi
```

Use the Task tool for parallel creation:
```yaml
Task:
  description: "Create GitHub sub-issues batch {X}"
  subagent_type: "general-purpose"
  prompt: |
    Create GitHub sub-issues for tasks in epic $ARGUMENTS
    Parent epic issue: #$epic_number

    Tasks to process:
    - {list of 3-4 task files}

    For each task file:
    1. Extract task name from frontmatter
    2. Strip frontmatter using: sed '1,/^---$/d; 1,/^---$/d'
    3. Create sub-issue using:
       - If gh-sub-issue available:
         gh sub-issue create --parent $epic_number --title "$task_name" \
           --body-file /tmp/task-body.md --label "task,epic:$ARGUMENTS"
       - Otherwise:
         gh issue create --title "$task_name" --body-file /tmp/task-body.md \
           --label "task,epic:$ARGUMENTS"
    4. Record: task_file:issue_number

    IMPORTANT: Always include --label parameter with "task,epic:$ARGUMENTS"

    Return mapping of files to issue numbers.
```

Consolidate results from parallel agents:
```bash
# Collect all mappings from agents
cat /tmp/batch-*/mapping.txt >> /tmp/task-mapping.txt

# IMPORTANT: After consolidation, follow step 3 to:
# 1. Build old->new ID mapping
# 2. Update all task references (depends_on, conflicts_with)
# 3. Rename files with proper frontmatter updates
```

### 3. Rename Task Files and Update References

First, build a mapping of old numbers to new issue IDs:

```bash
# Create mapping from old task numbers (001, 002, etc.) to new issue IDs
> /tmp/id-mapping.txt
while IFS=: read -r task_file task_number; do
  # Extract old number from filename (e.g., 001 from 001.md)
  old_num=$(basename "$task_file" .md)
  echo "$old_num:$task_number" >> /tmp/id-mapping.txt
done < /tmp/task-mapping.txt
```

Then rename files and update all references:

```bash
# Process each task file
while IFS=: read -r task_file task_number; do
  new_name="$(dirname "$task_file")/${task_number}.md"

  # Read the file content
  content=$(cat "$task_file")

  # Update depends_on and conflicts_with references
  while IFS=: read -r old_num new_num; do
    # Update arrays like [001, 002] to use new issue numbers
    content=$(echo "$content" | sed "s/\b$old_num\b/$new_num/g")
  done < /tmp/id-mapping.txt

  # Write updated content to new file
  echo "$content" > "$new_name"

  # Remove old file if different from new
  [ "$task_file" != "$new_name" ] && rm "$task_file"

  # Update github field in frontmatter with the GitHub URL
  repo=$(gh repo view --json nameWithOwner -q .nameWithOwner)
  github_url="https://github.com/$repo/issues/$task_number"

  # Update frontmatter with GitHub URL and current timestamp
  current_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

  # Use sed to update the github and updated fields
  sed -i.bak "/^github:/c\github: $github_url" "$new_name"
  sed -i.bak "/^updated:/c\updated: $current_date" "$new_name"
  rm "${new_name}.bak"
done < /tmp/task-mapping.txt
```

### 4. Update Epic with Task List (Fallback Only)

If NOT using gh-sub-issue, add the task list to the epic:

```bash
if [ "$use_subissues" = false ]; then
  # Get current epic body
  gh issue view "$epic_number" --json body -q .body > /tmp/epic-body.md

  # Append task list
  cat >> /tmp/epic-body.md << 'EOF'

## Tasks
- [ ] #{task1_number} {task1_name}
- [ ] #{task2_number} {task2_name}
- [ ] #{task3_number} {task3_name}
EOF

  # Update epic issue
  gh issue edit "$epic_number" --body-file /tmp/epic-body.md
fi
```

With gh-sub-issue, this is automatic!
### 5. Update Epic File

Update the epic file with the GitHub URL, timestamp, and real task IDs:

#### 5a. Update Frontmatter
```bash
# Get repo info
repo=$(gh repo view --json nameWithOwner -q .nameWithOwner)
epic_url="https://github.com/$repo/issues/$epic_number"
current_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

# Update epic frontmatter
sed -i.bak "/^github:/c\github: $epic_url" .claude/epics/$ARGUMENTS/epic.md
sed -i.bak "/^updated:/c\updated: $current_date" .claude/epics/$ARGUMENTS/epic.md
rm .claude/epics/$ARGUMENTS/epic.md.bak
```

#### 5b. Update Tasks Created Section
```bash
# Create a temporary file with the updated Tasks Created section
cat > /tmp/tasks-section.md << 'EOF'
## Tasks Created
EOF

# Add each task with its real issue number
for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
  [ -f "$task_file" ] || continue

  # Get issue number (filename without .md)
  issue_num=$(basename "$task_file" .md)

  # Get task name from frontmatter
  task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')

  # Get parallel status
  parallel=$(grep '^parallel:' "$task_file" | sed 's/^parallel: *//')

  # Add to tasks section
  echo "- [ ] #${issue_num} - ${task_name} (parallel: ${parallel})" >> /tmp/tasks-section.md
done

# Add summary statistics
total_count=$(ls .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | wc -l)
parallel_count=$(grep -l '^parallel: true' .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | wc -l)
sequential_count=$((total_count - parallel_count))

cat >> /tmp/tasks-section.md << EOF

Total tasks: ${total_count}
Parallel tasks: ${parallel_count}
Sequential tasks: ${sequential_count}
EOF

# Replace the Tasks Created section in epic.md
# First, create a backup
cp .claude/epics/$ARGUMENTS/epic.md .claude/epics/$ARGUMENTS/epic.md.backup

# Use awk to replace the section
awk '
  /^## Tasks Created/ {
    skip=1
    while ((getline line < "/tmp/tasks-section.md") > 0) print line
    close("/tmp/tasks-section.md")
  }
  /^## / && !/^## Tasks Created/ { skip=0 }
  !skip && !/^## Tasks Created/ { print }
' .claude/epics/$ARGUMENTS/epic.md.backup > .claude/epics/$ARGUMENTS/epic.md

# Clean up
rm .claude/epics/$ARGUMENTS/epic.md.backup
rm /tmp/tasks-section.md
```

### 6. Create Mapping File

Create `.claude/epics/$ARGUMENTS/github-mapping.md`:

```bash
# Create mapping file
cat > .claude/epics/$ARGUMENTS/github-mapping.md << EOF
# GitHub Issue Mapping

Epic: #${epic_number} - https://github.com/${repo}/issues/${epic_number}

Tasks:
EOF

# Add each task mapping
for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
  [ -f "$task_file" ] || continue

  issue_num=$(basename "$task_file" .md)
  task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')

  echo "- #${issue_num}: ${task_name} - https://github.com/${repo}/issues/${issue_num}" >> .claude/epics/$ARGUMENTS/github-mapping.md
done

# Add sync timestamp
echo "" >> .claude/epics/$ARGUMENTS/github-mapping.md
echo "Synced: $(date -u +"%Y-%m-%dT%H:%M:%SZ")" >> .claude/epics/$ARGUMENTS/github-mapping.md
```

### 7. Create Worktree

Follow `/rules/worktree-operations.md` to create the development worktree:

```bash
# Ensure main is current
git checkout main
git pull origin main

# Create worktree for epic
git worktree add ../epic-$ARGUMENTS -b epic/$ARGUMENTS

echo "✅ Created worktree: ../epic-$ARGUMENTS"
```

### 8. Output

```
✅ Synced to GitHub
  - Epic: #{epic_number} - {epic_title}
  - Tasks: {count} sub-issues created
  - Labels applied: epic, task, epic:{name}
  - Files renamed: 001.md → {issue_id}.md
  - References updated: depends_on/conflicts_with now use issue IDs
  - Worktree: ../epic-$ARGUMENTS

Next steps:
  - Start parallel execution: /pm:epic-start $ARGUMENTS
  - Or work on single issue: /pm:issue-start {issue_number}
  - View epic: https://github.com/{owner}/{repo}/issues/{epic_number}
```

## Error Handling

Follow `/rules/github-operations.md` for GitHub CLI errors.

If any issue creation fails:
- Report what succeeded
- Note what failed
- Don't attempt rollback (partial sync is fine)

## Important Notes

- Trust GitHub CLI authentication
- Don't pre-check for duplicates
- Update frontmatter only after successful creation
- Keep operations simple and atomic
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/help.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,98 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Import

Import existing GitHub issues into the PM system.

## Usage
```
/pm:import [--epic <epic_name>] [--label <label>]
```

Options:
- `--epic` - Import into a specific epic
- `--label` - Import only issues with a specific label
- No args - Import all untracked issues

## Instructions

### 1. Fetch GitHub Issues

```bash
# Get issues based on filters
if [[ "$ARGUMENTS" == *"--label"* ]]; then
  gh issue list --label "{label}" --limit 1000 --json number,title,body,state,labels,createdAt,updatedAt
else
  gh issue list --limit 1000 --json number,title,body,state,labels,createdAt,updatedAt
fi
```

### 2. Identify Untracked Issues

For each GitHub issue:
- Search local files for a matching github URL
- If not found, it's untracked and needs import
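
This check can be sketched as a small helper (the `is_tracked` name is illustrative; it assumes task files record the issue URL in a `github:` frontmatter line, as described in this system):

```bash
# Succeeds when some local task file already references the given
# issue number in its github: frontmatter line.
is_tracked() {
  grep -rq "github:.*issues/$1\$" .claude/epics/ 2>/dev/null
}

# Example: list untracked numbers from a pre-fetched list, e.g. from
# gh issue list --limit 1000 --json number -q '.[].number'
# while read -r n; do is_tracked "$n" || echo "untracked: #$n"; done < numbers.txt
```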

### 3. Categorize Issues

Based on labels:
- Issues with "epic" label → Create epic structure
- Issues with "task" label → Create task in appropriate epic
- Issues with "epic:{name}" label → Assign to that epic
- No PM labels → Ask user or create in "imported" epic
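
A minimal sketch of this routing, assuming labels arrive as one comma-joined string (the function name and label layout are illustrative):

```bash
# Route an issue by its labels: epic wins, then task, else unlabeled
categorize() {
  case ",$1," in
    *",epic,"*) echo "epic" ;;
    *",task,"*) echo "task" ;;
    *)          echo "unlabeled" ;;
  esac
}

categorize "epic,epic:auth"   # → epic
categorize "task,epic:auth"   # → task
categorize "bug"              # → unlabeled
```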

### 4. Create Local Structure

For each issue to import:

**If Epic:**
```bash
mkdir -p .claude/epics/{epic_name}
# Create epic.md with GitHub content and frontmatter
```

**If Task:**
```bash
# Find next available number (001.md, 002.md, etc.)
# Create task file with GitHub content
```

Set frontmatter:
```yaml
name: {issue_title}
status: {open|closed based on GitHub}
created: {GitHub createdAt}
updated: {GitHub updatedAt}
github: https://github.com/{org}/{repo}/issues/{number}
imported: true
```

### 5. Output

```
📥 Import Complete

Imported:
  Epics: {count}
  Tasks: {count}

Created structure:
  {epic_1}/
    - {count} tasks
  {epic_2}/
    - {count} tasks

Skipped (already tracked): {count}

Next steps:
  Run /pm:status to see imported work
  Run /pm:sync to ensure full synchronization
```

## Important Notes

Preserve all GitHub metadata in frontmatter.
Mark imported files with the `imported: true` flag.
Don't overwrite existing local files.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/in-progress.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/init.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,185 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Issue Analyze

Analyze an issue to identify parallel work streams for maximum efficiency.

## Usage
```
/pm:issue-analyze <issue_number>
```

## Quick Check

1. **Find local task file:**
   - First check if `.claude/epics/*/$ARGUMENTS.md` exists (new naming convention)
   - If not found, search for a file containing `github:.*issues/$ARGUMENTS` in frontmatter (old naming)
   - If not found: "❌ No local task for issue #$ARGUMENTS. Run: /pm:import first"

2. **Check for existing analysis:**
   ```bash
   test -f .claude/epics/*/$ARGUMENTS-analysis.md && echo "⚠️ Analysis already exists. Overwrite? (yes/no)"
   ```
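
The two-step lookup in step 1 can be sketched as follows (the `find_task_file` helper is hypothetical; it assumes the directory layout described above):

```bash
# New naming first (file named after the issue number), then old naming
# (a file whose github: frontmatter URL ends with the issue number).
find_task_file() {
  n="$1"
  for f in .claude/epics/*/"$n".md; do
    [ -f "$f" ] && { echo "$f"; return 0; }
  done
  grep -rl "github:.*issues/$n\$" .claude/epics/ 2>/dev/null | head -n 1
}
```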

## Instructions

### 1. Read Issue Context

Get issue details from GitHub:
```bash
gh issue view $ARGUMENTS --json title,body,labels
```

Read the local task file to understand:
- Technical requirements
- Acceptance criteria
- Dependencies
- Effort estimate

### 2. Identify Parallel Work Streams

Analyze the issue to identify independent work that can run in parallel:

**Common Patterns:**
- **Database Layer**: Schema, migrations, models
- **Service Layer**: Business logic, data access
- **API Layer**: Endpoints, validation, middleware
- **UI Layer**: Components, pages, styles
- **Test Layer**: Unit tests, integration tests
- **Documentation**: API docs, README updates

**Key Questions:**
- What files will be created/modified?
- Which changes can happen independently?
- What are the dependencies between changes?
- Where might conflicts occur?

### 3. Create Analysis File

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Create `.claude/epics/{epic_name}/$ARGUMENTS-analysis.md`:

```markdown
---
issue: $ARGUMENTS
title: {issue_title}
analyzed: {current_datetime}
estimated_hours: {total_hours}
parallelization_factor: {1.0-5.0}
---

# Parallel Work Analysis: Issue #$ARGUMENTS

## Overview
{Brief description of what needs to be done}

## Parallel Streams

### Stream A: {Stream Name}
**Scope**: {What this stream handles}
**Files**:
- {file_pattern_1}
- {file_pattern_2}
**Agent Type**: {backend|frontend|fullstack|database}-specialist
**Can Start**: immediately
**Estimated Hours**: {hours}
**Dependencies**: none

### Stream B: {Stream Name}
**Scope**: {What this stream handles}
**Files**:
- {file_pattern_1}
- {file_pattern_2}
**Agent Type**: {agent_type}
**Can Start**: immediately
**Estimated Hours**: {hours}
**Dependencies**: none

### Stream C: {Stream Name}
**Scope**: {What this stream handles}
**Files**:
- {file_pattern_1}
**Agent Type**: {agent_type}
**Can Start**: after Stream A completes
**Estimated Hours**: {hours}
**Dependencies**: Stream A

## Coordination Points

### Shared Files
{List any files multiple streams need to modify}:
- `src/types/index.ts` - Streams A & B (coordinate type updates)
- `package.json` - Stream B (add dependencies)

### Sequential Requirements
{List what must happen in order}:
1. Database schema before API endpoints
2. API types before UI components
3. Core logic before tests

## Conflict Risk Assessment
- **Low Risk**: Streams work on different directories
- **Medium Risk**: Some shared type files, manageable with coordination
- **High Risk**: Multiple streams modifying same core files

## Parallelization Strategy

**Recommended Approach**: {sequential|parallel|hybrid}

{If parallel}: Launch Streams A, B simultaneously. Start C when A completes.
{If sequential}: Complete Stream A, then B, then C.
{If hybrid}: Start A & B together, C depends on A, D depends on B & C.

## Expected Timeline

With parallel execution:
- Wall time: {max_stream_hours} hours
- Total work: {sum_all_hours} hours
- Efficiency gain: {percentage}%

Without parallel execution:
- Wall time: {sum_all_hours} hours

## Notes
{Any special considerations, warnings, or recommendations}
```
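
The timeline numbers in the template follow from simple arithmetic over the stream estimates; a hypothetical example with streams A=4h and B=3h starting together and C=2h running after A:

```bash
# Hypothetical stream estimates (hours)
a=4; b=3; c=2
total_work=$((a + b + c))              # sum of all streams: 9h
longest=$(( a > b ? a : b ))           # parallel phase lasts as long as its longest stream
wall_time=$((longest + c))             # 4h + 2h = 6h
gain=$(( (total_work - wall_time) * 100 / total_work ))
echo "work=${total_work}h wall=${wall_time}h gain=${gain}%"   # → work=9h wall=6h gain=33%
```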

### 4. Validate Analysis

Ensure:
- All major work is covered by streams
- File patterns don't unnecessarily overlap
- Dependencies are logical
- Agent types match the work type
- Time estimates are reasonable

### 5. Output

```
✅ Analysis complete for issue #$ARGUMENTS

Identified {count} parallel work streams:
  Stream A: {name} ({hours}h)
  Stream B: {name} ({hours}h)
  Stream C: {name} ({hours}h)

Parallelization potential: {factor}x speedup
  Sequential time: {total}h
  Parallel time: {reduced}h

Files at risk of conflict:
  {list shared files if any}

Next: Start work with /pm:issue-start $ARGUMENTS
```

## Important Notes

- Analysis is local only - not synced to GitHub
- Focus on practical parallelization, not theoretical maximum
- Consider agent expertise when assigning streams
- Account for coordination overhead in estimates
- Prefer clear separation over maximum parallelization
@ -0,0 +1,102 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Issue Close

Mark an issue as complete and close it on GitHub.

## Usage
```
/pm:issue-close <issue_number> [completion_notes]
```

## Instructions

### 1. Find Local Task File

First check if `.claude/epics/*/$ARGUMENTS.md` exists (new naming).
If not found, search for a task file with `github:.*issues/$ARGUMENTS` in frontmatter (old naming).
If not found: "❌ No local task for issue #$ARGUMENTS"

### 2. Update Local Status

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update task file frontmatter:
```yaml
status: closed
updated: {current_datetime}
```
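
These two fields can be rewritten in place with the same `sed -i.bak` pattern used elsewhere in this system; a self-contained sketch against a throwaway demo file:

```bash
# Demo file standing in for the real task file (path is illustrative)
task_file="/tmp/demo-task.md"
printf -- '---\nstatus: open\nupdated: 2024-01-01T00:00:00Z\n---\nBody\n' > "$task_file"

# Replace the status and updated frontmatter lines in place
current_datetime=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
sed -i.bak "/^status:/c\status: closed" "$task_file"
sed -i.bak "/^updated:/c\updated: $current_datetime" "$task_file"
rm "${task_file}.bak"
```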

### 3. Update Progress File

If a progress file exists at `.claude/epics/{epic}/updates/$ARGUMENTS/progress.md`:
- Set completion: 100%
- Add completion note with timestamp
- Update last_sync with current datetime

### 4. Close on GitHub

Add completion comment and close:
```bash
# Add final comment
echo "✅ Task completed

$ARGUMENTS

---
Closed at: {timestamp}" | gh issue comment $ARGUMENTS --body-file -

# Close the issue
gh issue close $ARGUMENTS
```

### 5. Update Epic Task List on GitHub

Check the task checkbox in the epic issue:

```bash
# Get epic name from local task file path
epic_name={extract_from_path}

# Get epic issue number from epic.md
epic_issue=$(grep 'github:' .claude/epics/$epic_name/epic.md | grep -oE '[0-9]+$')

if [ -n "$epic_issue" ]; then
  # Get current epic body
  gh issue view $epic_issue --json body -q .body > /tmp/epic-body.md

  # Check off this task
  sed -i "s/- \[ \] #$ARGUMENTS/- [x] #$ARGUMENTS/" /tmp/epic-body.md

  # Update epic issue
  gh issue edit $epic_issue --body-file /tmp/epic-body.md

  echo "✓ Updated epic progress on GitHub"
fi
```

### 6. Update Epic Progress

- Count total tasks in epic
- Count closed tasks
- Calculate new progress percentage
- Update epic.md frontmatter progress field
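
A sketch of that calculation (the demo directory and file names are illustrative; task files are assumed to carry a `status:` frontmatter line):

```bash
# Demo epic directory standing in for .claude/epics/{epic}
epic_dir="/tmp/demo-epic"
mkdir -p "$epic_dir"
printf 'status: closed\n' > "$epic_dir/101.md"
printf 'status: open\n'   > "$epic_dir/102.md"

# Count all task files vs closed ones, then derive the percentage
total=$(ls "$epic_dir"/[0-9]*.md | wc -l)
closed=$(grep -l '^status: closed' "$epic_dir"/[0-9]*.md | wc -l)
progress=$(( closed * 100 / total ))
echo "progress: ${progress}%"   # → progress: 50%
```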

### 7. Output

```
✅ Closed issue #$ARGUMENTS
  Local: Task marked complete
  GitHub: Issue closed & epic updated
  Epic progress: {new_progress}% ({closed}/{total} tasks complete)

Next: Run /pm:next for next priority task
```

## Important Notes

Follow `/rules/frontmatter-operations.md` for updates.
Follow `/rules/github-operations.md` for GitHub commands.
Always sync local state before GitHub.
@ -0,0 +1,76 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Issue Edit

Edit issue details locally and on GitHub.

## Usage
```
/pm:issue-edit <issue_number>
```

## Instructions

### 1. Get Current Issue State

```bash
# Get from GitHub
gh issue view $ARGUMENTS --json title,body,labels

# Find local task file
# Search for file with github:.*issues/$ARGUMENTS
```

### 2. Interactive Edit

Ask user what to edit:
- Title
- Description/Body
- Labels
- Acceptance criteria (local only)
- Priority/Size (local only)

### 3. Update Local File

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update task file with changes:
- Update frontmatter `name` if title changed
- Update body content if description changed
- Update `updated` field with current datetime

### 4. Update GitHub

If title changed:
```bash
gh issue edit $ARGUMENTS --title "{new_title}"
```

If body changed:
```bash
gh issue edit $ARGUMENTS --body-file {updated_task_file}
```

If labels changed:
```bash
gh issue edit $ARGUMENTS --add-label "{new_labels}"
gh issue edit $ARGUMENTS --remove-label "{removed_labels}"
```

### 5. Output

```
✅ Updated issue #$ARGUMENTS
  Changes:
    {list_of_changes_made}

Synced to GitHub: ✅
```

## Important Notes

Always update local first, then GitHub.
Preserve frontmatter fields not being edited.
Follow `/rules/frontmatter-operations.md`.
@ -0,0 +1,70 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Issue Reopen

Reopen a closed issue.

## Usage
```
/pm:issue-reopen <issue_number> [reason]
```

## Instructions

### 1. Find Local Task File

Search for the task file with `github:.*issues/$ARGUMENTS` in frontmatter.
If not found: "❌ No local task for issue #$ARGUMENTS"

### 2. Update Local Status

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update task file frontmatter:
```yaml
status: open
updated: {current_datetime}
```

### 3. Reset Progress

If a progress file exists:
- Keep the original started date
- Reset completion to the previous value or 0%
- Add a note about reopening, with the reason

### 4. Reopen on GitHub

```bash
# Reopen with comment
echo "🔄 Reopening issue

Reason: $ARGUMENTS

---
Reopened at: {timestamp}" | gh issue comment $ARGUMENTS --body-file -

# Reopen the issue
gh issue reopen $ARGUMENTS
```

### 5. Update Epic Progress

Recalculate epic progress with this task now open again.

### 6. Output

```
🔄 Reopened issue #$ARGUMENTS
  Reason: {reason_if_provided}
  Epic progress: {updated_progress}%

Start work with: /pm:issue-start $ARGUMENTS
```

## Important Notes

Preserve work history in progress files.
Don't delete previous progress, just reset status.
@ -0,0 +1,91 @@
---
allowed-tools: Bash, Read, LS
---

# Issue Show

Display issue and sub-issues with detailed information.

## Usage
```
/pm:issue-show <issue_number>
```

## Instructions

You are displaying comprehensive information about a GitHub issue and related sub-issues for: **Issue #$ARGUMENTS**

### 1. Fetch Issue Data
- Use `gh issue view $ARGUMENTS` to get GitHub issue details (pass the bare number; a leading `#` would start a shell comment)
- Look for local task file: first check `.claude/epics/*/$ARGUMENTS.md` (new naming)
- If not found, search for file with `github:.*issues/$ARGUMENTS` in frontmatter (old naming)
- Check for related issues and sub-tasks

### 2. Issue Overview
Display issue header:
```
🎫 Issue #$ARGUMENTS: {Issue Title}
   Status: {open/closed}
   Labels: {labels}
   Assignee: {assignee}
   Created: {creation_date}
   Updated: {last_update}

📝 Description:
{issue_description}
```

### 3. Local File Mapping
If local task file exists:
```
📁 Local Files:
   Task file: .claude/epics/{epic_name}/{task_file}
   Updates: .claude/epics/{epic_name}/updates/$ARGUMENTS/
   Last local update: {timestamp}
```

### 4. Sub-Issues and Dependencies
Show related issues:
```
🔗 Related Issues:
   Parent Epic: #{epic_issue_number}
   Dependencies: #{dep1}, #{dep2}
   Blocking: #{blocked1}, #{blocked2}
   Sub-tasks: #{sub1}, #{sub2}
```

### 5. Recent Activity
Display recent comments and updates:
```
💬 Recent Activity:
   {timestamp} - {author}: {comment_preview}
   {timestamp} - {author}: {comment_preview}

View full thread: gh issue view $ARGUMENTS --comments
```

### 6. Progress Tracking
If task file exists, show progress:
```
✅ Acceptance Criteria:
   ✅ Criterion 1 (completed)
   🔄 Criterion 2 (in progress)
   ⏸️ Criterion 3 (blocked)
   □ Criterion 4 (not started)
```

### 7. Quick Actions
```
🚀 Quick Actions:
   Start work: /pm:issue-start $ARGUMENTS
   Sync updates: /pm:issue-sync $ARGUMENTS
   Add comment: gh issue comment $ARGUMENTS --body "your comment"
   View in browser: gh issue view $ARGUMENTS --web
```

### 8. Error Handling
- Handle invalid issue numbers gracefully
- Check for network/authentication issues
- Provide helpful error messages and alternatives

Provide comprehensive issue information to help developers understand context and current status for Issue #$ARGUMENTS.
@ -0,0 +1,163 @@
---
allowed-tools: Bash, Read, Write, LS, Task
---

# Issue Start

Begin work on a GitHub issue with parallel agents based on work stream analysis.

## Usage
```
/pm:issue-start <issue_number>
```

## Quick Check

1. **Get issue details:**
   ```bash
   gh issue view $ARGUMENTS --json state,title,labels,body
   ```
   If it fails: "❌ Cannot access issue #$ARGUMENTS. Check number or run: gh auth login"

2. **Find local task file:**
   - First check if `.claude/epics/*/$ARGUMENTS.md` exists (new naming)
   - If not found, search for file containing `github:.*issues/$ARGUMENTS` in frontmatter (old naming)
   - If not found: "❌ No local task for issue #$ARGUMENTS. This issue may have been created outside the PM system."

3. **Check for analysis:**
   ```bash
   test -f .claude/epics/*/$ARGUMENTS-analysis.md || echo "❌ No analysis found for issue #$ARGUMENTS

   Run: /pm:issue-analyze $ARGUMENTS first
   Or: /pm:issue-start $ARGUMENTS --analyze to do both"
   ```
   If no analysis exists and no --analyze flag, stop execution.

## Instructions

### 1. Ensure Worktree Exists

Check if epic worktree exists:
```bash
# Find epic name from task file
epic_name={extracted_from_path}

# Check worktree
if ! git worktree list | grep -q "epic-$epic_name"; then
  echo "❌ No worktree for epic. Run: /pm:epic-start $epic_name"
  exit 1
fi
```

### 2. Read Analysis

Read `.claude/epics/{epic_name}/$ARGUMENTS-analysis.md`:
- Parse parallel streams
- Identify which can start immediately
- Note dependencies between streams
|
||||
|
||||
### 3. Setup Progress Tracking
|
||||
|
||||
Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
|
||||
|
||||
Create workspace structure:
|
||||
```bash
|
||||
mkdir -p .claude/epics/{epic_name}/updates/$ARGUMENTS
|
||||
```
|
||||
|
||||
Update task file frontmatter `updated` field with current datetime.
|
||||
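One way to sketch that frontmatter update, assuming the file already has a top-level `updated:` line (the demo path is illustrative):

```shell
# Demo setup: a task file with a stale `updated` field (illustrative path).
mkdir -p .claude/epics/demo
printf -- '---\nname: demo\nupdated: 2020-01-01T00:00:00Z\n---\n' > .claude/epics/demo/1234.md

task_file=.claude/epics/demo/1234.md
now=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Rewrite the `updated:` line in place; -i.bak works on both GNU and BSD sed.
sed -i.bak "s/^updated: .*/updated: $now/" "$task_file" && rm -f "$task_file.bak"
grep "^updated:" "$task_file"
```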

### 4. Launch Parallel Agents

For each stream that can start immediately:

Create `.claude/epics/{epic_name}/updates/$ARGUMENTS/stream-{X}.md`:
```markdown
---
issue: $ARGUMENTS
stream: {stream_name}
agent: {agent_type}
started: {current_datetime}
status: in_progress
---

# Stream {X}: {stream_name}

## Scope
{stream_description}

## Files
{file_patterns}

## Progress
- Starting implementation
```

Launch agent using Task tool:
```yaml
Task:
  description: "Issue #$ARGUMENTS Stream {X}"
  subagent_type: "{agent_type}"
  prompt: |
    You are working on Issue #$ARGUMENTS in the epic worktree.

    Worktree location: ../epic-{epic_name}/
    Your stream: {stream_name}

    Your scope:
    - Files to modify: {file_patterns}
    - Work to complete: {stream_description}

    Requirements:
    1. Read full task from: .claude/epics/{epic_name}/{task_file}
    2. Work ONLY in your assigned files
    3. Commit frequently with format: "Issue #$ARGUMENTS: {specific change}"
    4. Update progress in: .claude/epics/{epic_name}/updates/$ARGUMENTS/stream-{X}.md
    5. Follow coordination rules in /rules/agent-coordination.md

    If you need to modify files outside your scope:
    - Check if another stream owns them
    - Wait if necessary
    - Update your progress file with coordination notes

    Complete your stream's work and mark it completed when done.
```

### 5. GitHub Assignment

```bash
# Assign to self and mark in-progress
gh issue edit $ARGUMENTS --add-assignee @me --add-label "in-progress"
```

### 6. Output

```
✅ Started parallel work on issue #$ARGUMENTS

Epic: {epic_name}
Worktree: ../epic-{epic_name}/

Launching {count} parallel agents:
  Stream A: {name} (Agent-1) ✓ Started
  Stream B: {name} (Agent-2) ✓ Started
  Stream C: {name} - Waiting (depends on A)

Progress tracking:
  .claude/epics/{epic_name}/updates/$ARGUMENTS/

Monitor with: /pm:epic-status {epic_name}
Sync updates: /pm:issue-sync $ARGUMENTS
```

## Error Handling

If any step fails, report clearly:
- "❌ {What failed}: {How to fix}"
- Continue with what's possible
- Never leave partial state

## Important Notes

Follow `/rules/datetime.md` for timestamps.
Keep it simple - trust that GitHub and the file system work.
@ -0,0 +1,78 @@
---
allowed-tools: Bash, Read, LS
---

# Issue Status

Check issue status (open/closed) and current state.

## Usage
```
/pm:issue-status <issue_number>
```

## Instructions

You are checking the current status of a GitHub issue and providing a quick status report for: **Issue #$ARGUMENTS**

### 1. Fetch Issue Status
Use GitHub CLI to get current status:
```bash
gh issue view $ARGUMENTS --json state,title,labels,assignees,updatedAt
```

### 2. Status Display
Show concise status information:
```
🎫 Issue #$ARGUMENTS: {Title}

📊 Status: {OPEN/CLOSED}
   Last update: {timestamp}
   Assignee: {assignee or "Unassigned"}

🏷️ Labels: {label1}, {label2}, {label3}
```

### 3. Epic Context
If issue is part of an epic:
```
📚 Epic Context:
   Epic: {epic_name}
   Epic progress: {completed_tasks}/{total_tasks} tasks complete
   This task: {task_position} of {total_tasks}
```

### 4. Local Sync Status
Check if local files are in sync:
```
💾 Local Sync:
   Local file: {exists/missing}
   Last local update: {timestamp}
   Sync status: {in_sync/needs_sync/local_ahead/remote_ahead}
```

### 5. Quick Status Indicators
Use clear visual indicators:
- 🟢 Open and ready
- 🟡 Open with blockers
- 🔴 Open and overdue
- ✅ Closed and complete
- ❌ Closed without completion

### 6. Actionable Next Steps
Based on status, suggest actions:
```
🚀 Suggested Actions:
- Start work: /pm:issue-start $ARGUMENTS
- Sync updates: /pm:issue-sync $ARGUMENTS
- Close issue: gh issue close $ARGUMENTS
- Reopen issue: gh issue reopen $ARGUMENTS
```

### 7. Batch Status
If checking multiple issues, support a comma-separated list:
```
/pm:issue-status 123,124,125
```
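The comma-splitting can be sketched as below; the per-issue `gh` call is echoed rather than executed so the fan-out logic stands on its own:

```shell
# Split a comma-separated issue list and fan out one status check per issue.
args="123,124,125"   # stands in for $ARGUMENTS
IFS=',' read -ra issues <<< "$args"
for issue in "${issues[@]}"; do
  echo "gh issue view $issue --json state,title,assignees,updatedAt"
done
```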

Keep the output concise but informative, perfect for quick status checks during development of Issue #$ARGUMENTS.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/next.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,65 @@
---
allowed-tools: Read, Write, LS
---

# PRD Edit

Edit an existing Product Requirements Document.

## Usage
```
/pm:prd-edit <feature_name>
```

## Instructions

### 1. Read Current PRD

Read `.claude/prds/$ARGUMENTS.md`:
- Parse frontmatter
- Read all sections

### 2. Interactive Edit

Ask user what sections to edit:
- Executive Summary
- Problem Statement
- User Stories
- Requirements (Functional/Non-Functional)
- Success Criteria
- Constraints & Assumptions
- Out of Scope
- Dependencies

### 3. Update PRD

Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`

Update PRD file:
- Preserve frontmatter except `updated` field
- Apply user's edits to selected sections
- Update `updated` field with current datetime

### 4. Check Epic Impact

If PRD has associated epic:
- Notify user: "This PRD has epic: {epic_name}"
- Ask: "Epic may need updating based on PRD changes. Review epic? (yes/no)"
- If yes, show: "Review with: /pm:epic-edit {epic_name}"

### 5. Output

```
✅ Updated PRD: $ARGUMENTS
  Sections edited: {list_of_sections}

{If has epic}: ⚠️ Epic may need review: {epic_name}

Next: /pm:prd-parse $ARGUMENTS to update epic
```

## Important Notes

Preserve original creation date.
Keep version history in frontmatter if needed.
Follow `/rules/frontmatter-operations.md`.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/prd-list.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,148 @@
---
allowed-tools: Bash, Read, Write, LS
---

# PRD New

Launch brainstorming for a new product requirements document.

## Usage
```
/pm:prd-new <feature_name>
```

## Required Rules

**IMPORTANT:** Before executing this command, read and follow:
- `.claude/rules/datetime.md` - For getting real current date/time

## Preflight Checklist

Before proceeding, complete these validation steps.
Do not bother the user with preflight check progress ("I'm not going to ..."). Just do the checks and move on.

### Input Validation

1. **Validate feature name format:**
   - Must contain only lowercase letters, numbers, and hyphens
   - Must start with a letter
   - No spaces or special characters allowed
   - If invalid, tell user: "❌ Feature name must be kebab-case (lowercase letters, numbers, hyphens only). Examples: user-auth, payment-v2, notification-system"
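The three format rules above collapse into a single bash regex check (the helper name `valid_name` is illustrative):

```shell
# kebab-case: starts with a lowercase letter; then only lowercase
# letters, digits, or hyphens.
valid_name() {
  [[ "$1" =~ ^[a-z][a-z0-9-]*$ ]]
}

valid_name "user-auth"    && echo "ok: user-auth"
valid_name "payment-v2"   && echo "ok: payment-v2"
valid_name "Payment V2"   || echo "rejected: Payment V2"
valid_name "2fa"          || echo "rejected: 2fa (must start with a letter)"
```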

2. **Check for existing PRD:**
   - Check if `.claude/prds/$ARGUMENTS.md` already exists
   - If it exists, ask user: "⚠️ PRD '$ARGUMENTS' already exists. Do you want to overwrite it? (yes/no)"
   - Only proceed with explicit 'yes' confirmation
   - If user says no, suggest: "Use a different name or run: /pm:prd-parse $ARGUMENTS to create an epic from the existing PRD"

3. **Verify directory structure:**
   - Check if `.claude/prds/` directory exists
   - If not, create it first
   - If unable to create, tell user: "❌ Cannot create PRD directory. Please manually create: .claude/prds/"

## Instructions

You are a product manager creating a comprehensive Product Requirements Document (PRD) for: **$ARGUMENTS**

Follow this structured approach:

### 1. Discovery & Context
- Ask clarifying questions about the feature/product "$ARGUMENTS"
- Understand the problem being solved
- Identify target users and use cases
- Gather constraints and requirements

### 2. PRD Structure
Create a comprehensive PRD with these sections:

#### Executive Summary
- Brief overview and value proposition

#### Problem Statement
- What problem are we solving?
- Why is this important now?

#### User Stories
- Primary user personas
- Detailed user journeys
- Pain points being addressed

#### Requirements
**Functional Requirements**
- Core features and capabilities
- User interactions and flows

**Non-Functional Requirements**
- Performance expectations
- Security considerations
- Scalability needs

#### Success Criteria
- Measurable outcomes
- Key metrics and KPIs

#### Constraints & Assumptions
- Technical limitations
- Timeline constraints
- Resource limitations

#### Out of Scope
- What we're explicitly NOT building

#### Dependencies
- External dependencies
- Internal team dependencies

### 3. File Format with Frontmatter
Save the completed PRD to: `.claude/prds/$ARGUMENTS.md` with this exact structure:

```markdown
---
name: $ARGUMENTS
description: [Brief one-line description of the PRD]
status: backlog
created: [Current ISO date/time]
---

# PRD: $ARGUMENTS

## Executive Summary
[Content...]

## Problem Statement
[Content...]

[Continue with all sections...]
```

### 4. Frontmatter Guidelines
- **name**: Use the exact feature name (same as $ARGUMENTS)
- **description**: Write a concise one-line summary of what this PRD covers
- **status**: Always start with "backlog" for new PRDs
- **created**: Get REAL current datetime by running: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
  - Never use placeholder text
  - Must be actual system time in ISO 8601 format

### 5. Quality Checks

Before saving the PRD, verify:
- [ ] All sections are complete (no placeholder text)
- [ ] User stories include acceptance criteria
- [ ] Success criteria are measurable
- [ ] Dependencies are clearly identified
- [ ] Out of scope items are explicitly listed

### 6. Post-Creation

After successfully creating the PRD:
1. Confirm: "✅ PRD created: .claude/prds/$ARGUMENTS.md"
2. Show brief summary of what was captured
3. Suggest next step: "Ready to create implementation epic? Run: /pm:prd-parse $ARGUMENTS"

## Error Recovery

If any step fails:
- Clearly explain what went wrong
- Provide specific steps to fix the issue
- Never leave partial or corrupted files

Conduct a thorough brainstorming session before writing the PRD. Ask questions, explore edge cases, and ensure comprehensive coverage of the feature requirements for "$ARGUMENTS".
@ -0,0 +1,175 @@
---
allowed-tools: Bash, Read, Write, LS
---

# PRD Parse

Convert a PRD to a technical implementation epic.

## Usage
```
/pm:prd-parse <feature_name>
```

## Required Rules

**IMPORTANT:** Before executing this command, read and follow:
- `.claude/rules/datetime.md` - For getting real current date/time

## Preflight Checklist

Before proceeding, complete these validation steps.
Do not bother the user with preflight check progress ("I'm not going to ..."). Just do the checks and move on.

### Validation Steps

1. **Verify <feature_name> was provided as a parameter:**
   - If not, tell user: "❌ <feature_name> was not provided as a parameter. Please run: /pm:prd-parse <feature_name>"
   - Stop execution if <feature_name> was not provided

2. **Verify PRD exists:**
   - Check if `.claude/prds/$ARGUMENTS.md` exists
   - If not found, tell user: "❌ PRD not found: $ARGUMENTS. First create it with: /pm:prd-new $ARGUMENTS"
   - Stop execution if PRD doesn't exist

3. **Validate PRD frontmatter:**
   - Verify PRD has valid frontmatter with: name, description, status, created
   - If frontmatter is invalid or missing, tell user: "❌ Invalid PRD frontmatter. Please check: .claude/prds/$ARGUMENTS.md"
   - Show what's missing or invalid
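A sketch of that frontmatter check (the `check_frontmatter` helper is illustrative; it only inspects the first `---`-delimited block):

```shell
# Report any required frontmatter keys missing from a PRD file.
check_frontmatter() {
  local file="$1" fm key rc=0
  # Extract the lines between the first pair of --- markers.
  fm=$(awk '/^---$/{n++; next} n==1{print} n>1{exit}' "$file")
  for key in name description status created; do
    if ! grep -q "^${key}:" <<< "$fm"; then
      echo "missing: $key"
      rc=1
    fi
  done
  return $rc
}
```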

4. **Check for existing epic:**
   - Check if `.claude/epics/$ARGUMENTS/epic.md` already exists
   - If it exists, ask user: "⚠️ Epic '$ARGUMENTS' already exists. Overwrite? (yes/no)"
   - Only proceed with explicit 'yes' confirmation
   - If user says no, suggest: "View existing epic with: /pm:epic-show $ARGUMENTS"

5. **Verify directory permissions:**
   - Ensure `.claude/epics/` directory exists or can be created
   - If cannot create, tell user: "❌ Cannot create epic directory. Please check permissions."

## Instructions

You are a technical lead converting a Product Requirements Document into a detailed implementation epic for: **$ARGUMENTS**

### 1. Read the PRD
- Load the PRD from `.claude/prds/$ARGUMENTS.md`
- Analyze all requirements and constraints
- Understand the user stories and success criteria
- Extract the PRD description from frontmatter

### 2. Technical Analysis
- Identify architectural decisions needed
- Determine technology stack and approaches
- Map functional requirements to technical components
- Identify integration points and dependencies

### 3. File Format with Frontmatter
Create the epic file at: `.claude/epics/$ARGUMENTS/epic.md` with this exact structure:

```markdown
---
name: $ARGUMENTS
status: backlog
created: [Current ISO date/time]
progress: 0%
prd: .claude/prds/$ARGUMENTS.md
github: [Will be updated when synced to GitHub]
---

# Epic: $ARGUMENTS

## Overview
Brief technical summary of the implementation approach

## Architecture Decisions
- Key technical decisions and rationale
- Technology choices
- Design patterns to use

## Technical Approach
### Frontend Components
- UI components needed
- State management approach
- User interaction patterns

### Backend Services
- API endpoints required
- Data models and schema
- Business logic components

### Infrastructure
- Deployment considerations
- Scaling requirements
- Monitoring and observability

## Implementation Strategy
- Development phases
- Risk mitigation
- Testing approach

## Task Breakdown Preview
High-level task categories that will be created:
- [ ] Category 1: Description
- [ ] Category 2: Description
- [ ] etc.

## Dependencies
- External service dependencies
- Internal team dependencies
- Prerequisite work

## Success Criteria (Technical)
- Performance benchmarks
- Quality gates
- Acceptance criteria

## Estimated Effort
- Overall timeline estimate
- Resource requirements
- Critical path items
```

### 4. Frontmatter Guidelines
- **name**: Use the exact feature name (same as $ARGUMENTS)
- **status**: Always start with "backlog" for new epics
- **created**: Get REAL current datetime by running: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
- **progress**: Always start with "0%" for new epics
- **prd**: Reference the source PRD file path
- **github**: Leave placeholder text - will be updated during sync

### 5. Output Location
Create the directory structure if it doesn't exist:
- `.claude/epics/$ARGUMENTS/` (directory)
- `.claude/epics/$ARGUMENTS/epic.md` (epic file)

### 6. Quality Validation

Before saving the epic, verify:
- [ ] All PRD requirements are addressed in the technical approach
- [ ] Task breakdown categories cover all implementation areas
- [ ] Dependencies are technically accurate
- [ ] Effort estimates are realistic
- [ ] Architecture decisions are justified

### 7. Post-Creation

After successfully creating the epic:
1. Confirm: "✅ Epic created: .claude/epics/$ARGUMENTS/epic.md"
2. Show summary of:
   - Number of task categories identified
   - Key architecture decisions
   - Estimated effort
3. Suggest next step: "Ready to break down into tasks? Run: /pm:epic-decompose $ARGUMENTS"

## Error Recovery

If any step fails:
- Clearly explain what went wrong
- If PRD is incomplete, list specific missing sections
- If technical approach is unclear, identify what needs clarification
- Never create an epic with incomplete information

Focus on creating a technically sound implementation plan that addresses all PRD requirements while being practical and achievable for "$ARGUMENTS".

## IMPORTANT:
- Aim for as few tasks as possible and limit the total number of tasks to 10 or less.
- When creating the epic, identify ways to simplify and improve it. Look for ways to leverage existing functionality instead of creating more code when possible.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/prd-status.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/search.sh $ARGUMENTS` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/standup.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/status.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,82 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Sync

Full bidirectional sync between local and GitHub.

## Usage
```
/pm:sync [epic_name]
```

If epic_name provided, sync only that epic. Otherwise sync all.

## Instructions

### 1. Pull from GitHub

Get current state of all issues:
```bash
# Get all epic and task issues
gh issue list --label "epic" --limit 1000 --json number,title,state,body,labels,updatedAt
gh issue list --label "task" --limit 1000 --json number,title,state,body,labels,updatedAt
```

### 2. Update Local from GitHub

For each GitHub issue:
- Find corresponding local file by issue number
- Compare states:
  - If GitHub state newer (updatedAt > local updated), update local
  - If GitHub closed but local open, close local
  - If GitHub reopened but local closed, reopen local
- Update frontmatter to match GitHub state
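Because both sides use ISO-8601 UTC timestamps, the "newer" comparison can be a plain string comparison, sketched as (the `newer` helper is illustrative):

```shell
# ISO-8601 UTC timestamps (YYYY-MM-DDTHH:MM:SSZ) sort lexicographically,
# so string comparison is enough to decide sync direction.
newer() {   # newer A B -> success if A is strictly newer than B
  [ "$1" \> "$2" ]
}

github_updated="2024-05-02T10:00:00Z"
local_updated="2024-05-01T09:00:00Z"
if newer "$github_updated" "$local_updated"; then
  echo "pull: update local from GitHub"
fi
```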

### 3. Push Local to GitHub

For each local task/epic:
- If it has a GitHub URL but the GitHub issue is not found, it was deleted - mark local as archived
- If no GitHub URL, create new issue (like epic-sync)
- If local updated > GitHub updatedAt, push changes:
  ```bash
  gh issue edit {number} --body-file {local_file}
  ```

### 4. Handle Conflicts

If both changed (local and GitHub updated since last sync):
- Show both versions
- Ask user: "Local and GitHub both changed. Keep: (local/github/merge)?"
- Apply user's choice

### 5. Update Sync Timestamps

Update all synced files with a last_sync timestamp.

### 6. Output

```
🔄 Sync Complete

Pulled from GitHub:
  Updated: {count} files
  Closed: {count} issues

Pushed to GitHub:
  Updated: {count} issues
  Created: {count} new issues

Conflicts resolved: {count}

Status:
  ✅ All files synced
  {or list any sync failures}
```

## Important Notes

Follow `/rules/github-operations.md` for GitHub commands.
Follow `/rules/frontmatter-operations.md` for local updates.
Always back up before sync in case of issues.
@ -0,0 +1,134 @@
---
allowed-tools: Bash, Read, Write
---

# Test Reference Update

Test the task reference update logic used in epic-sync.

## Usage
```
/pm:test-reference-update
```

## Instructions

### 1. Create Test Files

Create test task files with references:
```bash
mkdir -p /tmp/test-refs
cd /tmp/test-refs

# Create task 001
cat > 001.md << 'EOF'
---
name: Task One
status: open
depends_on: []
parallel: true
conflicts_with: [002, 003]
---
# Task One
This is task 001.
EOF

# Create task 002
cat > 002.md << 'EOF'
---
name: Task Two
status: open
depends_on: [001]
parallel: false
conflicts_with: [003]
---
# Task Two
This is task 002, depends on 001.
EOF

# Create task 003
cat > 003.md << 'EOF'
---
name: Task Three
status: open
depends_on: [001, 002]
parallel: false
conflicts_with: []
---
# Task Three
This is task 003, depends on 001 and 002.
EOF
```

### 2. Create Mappings

Simulate the issue creation mappings:
```bash
# Simulate task -> issue number mapping
cat > /tmp/task-mapping.txt << 'EOF'
001.md:42
002.md:43
003.md:44
EOF

# Create old -> new ID mapping
> /tmp/id-mapping.txt
while IFS=: read -r task_file task_number; do
  old_num=$(basename "$task_file" .md)
  echo "$old_num:$task_number" >> /tmp/id-mapping.txt
done < /tmp/task-mapping.txt

echo "ID Mapping:"
cat /tmp/id-mapping.txt
```

### 3. Update References

Process each file and update references:
```bash
while IFS=: read -r task_file task_number; do
  echo "Processing: $task_file -> $task_number.md"

  # Read the file content
  content=$(cat "$task_file")

  # Update references (note: the \b word boundary requires GNU sed)
  while IFS=: read -r old_num new_num; do
    content=$(echo "$content" | sed "s/\b$old_num\b/$new_num/g")
  done < /tmp/id-mapping.txt

  # Write to new file
  new_name="${task_number}.md"
  echo "$content" > "$new_name"

  echo "Updated content preview:"
  grep -E "depends_on:|conflicts_with:" "$new_name"
  echo "---"
done < /tmp/task-mapping.txt
```

### 4. Verify Results

Check that references were updated correctly:
```bash
echo "=== Final Results ==="
for file in 42.md 43.md 44.md; do
  echo "File: $file"
  grep -E "name:|depends_on:|conflicts_with:" "$file"
  echo ""
done
```

Expected output:
- 42.md should have conflicts_with: [43, 44]
- 43.md should have depends_on: [42] and conflicts_with: [44]
- 44.md should have depends_on: [42, 43]

### 5. Cleanup

```bash
cd -
rm -rf /tmp/test-refs
rm -f /tmp/task-mapping.txt /tmp/id-mapping.txt
echo "✅ Test complete and cleaned up"
```
@ -0,0 +1,11 @@
---
allowed-tools: Bash
---

Run `bash .claude/scripts/pm/validate.sh` using a sub-agent and show me the complete output.

- DO NOT truncate.
- DO NOT collapse.
- DO NOT abbreviate.
- Show ALL lines in full.
- DO NOT print any other comments.
@ -0,0 +1,9 @@
---
allowed-tools: Bash, Read, Write, LS
---

# This is an ephemeral command.

Some complex prompts (with numerous @ references) may fail if entered directly into the prompt input.

If that happens, write your prompt here and then type `/prompt` in the prompt input.
@ -0,0 +1,9 @@
---
allowed-tools: Bash, Read, Write, LS
---

# Enhance CLAUDE.md file

Please update CLAUDE.md with the rules from .claude/CLAUDE.md.

If CLAUDE.md does not exist, create it using /init and include the rules from .claude/CLAUDE.md.
@ -0,0 +1,297 @@
|
||||
---
|
||||
allowed-tools: Bash, Read, Write, LS
|
||||
---
|
||||
|
||||
# Prime Testing Environment
|
||||
|
||||
This command prepares the testing environment by detecting the test framework, validating dependencies, and configuring the test-runner agent for optimal test execution.
|
||||
|
||||
## Preflight Checklist
|
||||
|
||||
Before proceeding, complete these validation steps.
|
||||
Do not bother the user with preflight checks progress ("I'm not going to ..."). Just do them and move on.
|
||||
|
||||
### 1. Test Framework Detection
|
||||
|
||||
**JavaScript/Node.js:**
|
||||
- Check package.json for test scripts: `grep -E '"test"|"spec"|"jest"|"mocha"' package.json 2>/dev/null`
|
||||
- Look for test config files: `ls -la jest.config.* mocha.opts .mocharc.* 2>/dev/null`
|
||||
- Check for test directories: `find . -type d \( -name "test" -o -name "tests" -o -name "__tests__" -o -name "spec" \) -maxdepth 3 2>/dev/null`
|
||||
|
||||
**Python:**
|
||||
- Check for pytest: `find . -name "pytest.ini" -o -name "conftest.py" -o -name "setup.cfg" 2>/dev/null | head -5`
|
||||
- Check for unittest: `find . -path "*/test*.py" -o -path "*/test_*.py" 2>/dev/null | head -5`
|
||||
- Check requirements: `grep -E "pytest|unittest|nose" requirements.txt 2>/dev/null`
|
||||
|
||||
**Rust:**
|
||||
- Check for Cargo tests: `grep -E '\[dev-dependencies\]' Cargo.toml 2>/dev/null`
|
||||
- Look for test modules: `find . -name "*.rs" -exec grep -l "#\[cfg(test)\]" {} \; 2>/dev/null | head -5`
|
||||
|
||||
**Go:**
|
||||
- Check for test files: `find . -name "*_test.go" 2>/dev/null | head -5`
|
||||
- Check go.mod exists: `test -f go.mod && echo "Go module found"`
|
||||
|
||||
**Other Languages:**
|
||||
- Ruby: Check for RSpec: `find . -name ".rspec" -o -name "spec_helper.rb" 2>/dev/null`
|
||||
- Java: Check for JUnit: `find . -name "pom.xml" -exec grep -l "junit" {} \; 2>/dev/null`
|
||||
|
||||
### 2. Test Environment Validation
|
||||
|
||||
If no test framework detected:
|
||||
- Tell user: "⚠️ No test framework detected. Please specify your testing setup."
|
||||
- Ask: "What test command should I use? (e.g., npm test, pytest, cargo test)"
|
||||
- Store response for future use
|
||||
|
||||
### 3. Dependency Check
|
||||
|
||||
**For detected framework:**
|
||||
- Node.js: Run `npm list --depth=0 2>/dev/null | grep -E "jest|mocha|chai|jasmine"`
|
||||
- Python: Run `pip list 2>/dev/null | grep -E "pytest|unittest|nose"`
|
||||
- Verify test dependencies are installed
|
||||
|
||||
If dependencies missing:
|
||||
- Tell user: "❌ Test dependencies not installed"
|
||||
- Suggest: "Run: npm install (or pip install -r requirements.txt)"
|
||||
|
||||
## Instructions

### 1. Framework-Specific Configuration

Based on detected framework, create test configuration:

#### JavaScript/Node.js (Jest)
```yaml
framework: jest
test_command: npm test
test_directory: __tests__
config_file: jest.config.js
options:
  - --verbose
  - --no-coverage
  - --runInBand
environment:
  NODE_ENV: test
```

#### JavaScript/Node.js (Mocha)
```yaml
framework: mocha
test_command: npm test
test_directory: test
config_file: .mocharc.js
options:
  - --reporter spec
  - --recursive
  - --bail
environment:
  NODE_ENV: test
```

#### Python (Pytest)
```yaml
framework: pytest
test_command: pytest
test_directory: tests
config_file: pytest.ini
options:
  - -v
  - --tb=short
  - --strict-markers
environment:
  PYTHONPATH: .
```

#### Rust
```yaml
framework: cargo
test_command: cargo test
test_directory: tests
config_file: Cargo.toml
options:
  - --verbose
  - -- --nocapture  # --nocapture is a test-binary flag; cargo needs it after `--`
environment: {}
```

#### Go
```yaml
framework: go
test_command: go test
test_directory: .
config_file: go.mod
options:
  - -v
  - ./...
environment: {}
```

### 2. Test Discovery

Scan for test files:
- Count total test files found
- Identify test naming patterns used
- Note any test utilities or helpers
- Check for test fixtures or data

```bash
# Example for Node.js: prune node_modules and group the -name tests
# so the prune takes effect and only matching files are counted
find . -path "*/node_modules" -prune -o \( -name "*.test.js" -o -name "*.spec.js" \) -print | wc -l
```

### 3. Create Test Runner Configuration

Create `.claude/testing-config.md` with discovered information:

```markdown
---
framework: {detected_framework}
test_command: {detected_command}
created: [Use REAL datetime from: date -u +"%Y-%m-%dT%H:%M:%SZ"]
---

# Testing Configuration

## Framework
- Type: {framework_name}
- Version: {framework_version}
- Config File: {config_file_path}

## Test Structure
- Test Directory: {test_dir}
- Test Files: {count} files found
- Naming Pattern: {pattern}

## Commands
- Run All Tests: `{full_test_command}`
- Run Specific Test: `{specific_test_command}`
- Run with Debugging: `{debug_command}`

## Environment
- Required ENV vars: {list}
- Test Database: {if applicable}
- Test Servers: {if applicable}

## Test Runner Agent Configuration
- Use verbose output for debugging
- Run tests sequentially (no parallel)
- Capture full stack traces
- No mocking - use real implementations
- Wait for each test to complete
```

### 4. Configure Test-Runner Agent

Prepare agent context based on framework:

```markdown
# Test-Runner Agent Configuration

## Project Testing Setup
- Framework: {framework}
- Test Location: {directories}
- Total Tests: {count}
- Last Run: Never

## Execution Rules
1. Always use the test-runner agent from `.claude/agents/test-runner.md`
2. Run with maximum verbosity for debugging
3. No mock services - use real implementations
4. Execute tests sequentially - no parallel execution
5. Capture complete output including stack traces
6. If test fails, analyze test structure before assuming code issue
7. Report detailed failure analysis with context

## Test Command Templates
- Full Suite: `{full_command}`
- Single File: `{single_file_command}`
- Pattern Match: `{pattern_command}`
- Watch Mode: `{watch_command}` (if available)

## Common Issues to Check
- Environment variables properly set
- Test database/services running
- Dependencies installed
- Proper file permissions
- Clean test state between runs
```

### 5. Validation Steps

After configuration:
- Try running a simple test to validate setup
- Check if test command works: `{test_command} --version` or equivalent
- Verify test files are discoverable
- Ensure no permission issues

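These checks can be scripted along these lines; the test command and file glob are placeholders that would normally come from the detected configuration, not fixed values:

```shell
# Sketch: post-configuration sanity checks. Both arguments are
# placeholders for values stored in .claude/testing-config.md.
validate_setup() {
  test_cmd="$1"   # e.g. "npx jest" (left unquoted below so multi-word commands work)
  pattern="$2"    # e.g. "*.test.js"
  # 1. The test command responds at all
  $test_cmd --version >/dev/null 2>&1 || {
    echo "❌ Test command not working: $test_cmd"
    return 1
  }
  # 2. Test files are discoverable
  count=$(find . -name "$pattern" 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "⚠️ No test files found"
    return 1
  fi
  echo "✅ Setup validated ($count test files)"
}
```
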
### 6. Output Summary

```
🧪 Testing Environment Primed

🔍 Detection Results:
  ✅ Framework: {framework_name} {version}
  ✅ Test Files: {count} files in {directories}
  ✅ Config: {config_file}
  ✅ Dependencies: All installed

📋 Test Structure:
  - Pattern: {test_file_pattern}
  - Directories: {test_directories}
  - Utilities: {test_helpers}

🤖 Agent Configuration:
  ✅ Test-runner agent configured
  ✅ Verbose output enabled
  ✅ Sequential execution set
  ✅ Real services (no mocks)

⚡ Ready Commands:
  - Run all tests: /testing:run
  - Run specific: /testing:run {test_file}
  - Run pattern: /testing:run {pattern}

💡 Tips:
  - Always run tests with verbose output
  - Check test structure if tests fail
  - Use real services, not mocks
  - Let each test complete fully
```

### 7. Error Handling

**Common Issues:**

**No Framework Detected:**
- Message: "⚠️ No test framework found"
- Solution: "Please specify test command manually"
- Store user's response for future use

**Missing Dependencies:**
- Message: "❌ Test framework not installed"
- Solution: "Install dependencies first: npm install / pip install -r requirements.txt"

**No Test Files:**
- Message: "⚠️ No test files found"
- Solution: "Create tests first or check test directory location"

**Permission Issues:**
- Message: "❌ Cannot access test files"
- Solution: "Check file permissions"

### 8. Save Configuration

If successful, save configuration for future sessions:
- Store in `.claude/testing-config.md`
- Include all discovered settings
- Update on subsequent runs if changes detected

## Important Notes

- **Always detect** rather than assume test framework
- **Validate dependencies** before claiming ready
- **Configure for debugging** - verbose output is critical
- **No mocking** - use real services for accurate testing
- **Sequential execution** - avoid parallel test issues
- **Store configuration** for consistent future runs

$ARGUMENTS

@ -0,0 +1,108 @@
---
allowed-tools: Bash, Read, Write, LS, Task
---

# Run Tests

Execute tests with the configured test-runner agent.

## Usage
```
/testing:run [test_target]
```

Where `test_target` can be:
- Empty (run all tests)
- Test file path
- Test pattern
- Test suite name

## Quick Check

```bash
# Check if testing is configured
test -f .claude/testing-config.md || echo "❌ Testing not configured. Run /testing:prime first"
```

If a test target is provided, verify it exists:
```bash
# For file targets
test -f "$ARGUMENTS" || echo "⚠️ Test file not found: $ARGUMENTS"
```

## Instructions

### 1. Determine Test Command

Based on testing-config.md and target:
- No arguments → Run full test suite from config
- File path → Run specific test file
- Pattern → Run tests matching pattern

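A sketch of that dispatch logic follows. The full-suite command would be read from testing-config.md, and the `-- -t "{pattern}"` pass-through shown here is a Jest-style assumption; other frameworks take patterns differently:

```shell
# Sketch: choose a concrete command for the given target.
# FULL_SUITE_CMD is a placeholder for the configured command.
FULL_SUITE_CMD="npm test"
build_test_command() {
  target="$1"
  if [ -z "$target" ]; then
    echo "$FULL_SUITE_CMD"                    # no arguments: full suite
  elif [ -f "$target" ]; then
    echo "$FULL_SUITE_CMD -- $target"         # existing file: run that file
  else
    echo "$FULL_SUITE_CMD -- -t \"$target\""  # otherwise: treat as a pattern
  fi
}
```
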
### 2. Execute Tests

Use the test-runner agent from `.claude/agents/test-runner.md`:

```markdown
Execute tests for: $ARGUMENTS (or "all" if empty)

Requirements:
- Run with verbose output for debugging
- No mocks - use real services
- Capture full output including stack traces
- If test fails, check test structure before assuming code issue
```

### 3. Monitor Execution

- Show test progress
- Capture stdout and stderr
- Note execution time

### 4. Report Results

**Success:**
```
✅ All tests passed ({count} tests in {time}s)
```

**Failure:**
```
❌ Test failures: {failed_count} of {total_count}

{test_name} - {file}:{line}
Error: {error_message}
Likely: {test issue | code issue}
Fix: {suggestion}

Run with more detail: /testing:run {specific_test}
```

**Mixed:**
```
Tests complete: {passed} passed, {failed} failed, {skipped} skipped

Failed:
- {test_1}: {brief_reason}
- {test_2}: {brief_reason}
```

### 5. Cleanup

```bash
# Kill any hanging test processes
pkill -f "jest|mocha|pytest" 2>/dev/null || true
```

## Error Handling

- Test command fails → "❌ Test execution failed: {error}. Check test framework is installed."
- Timeout → Kill process and report: "❌ Tests timed out after {time}s"
- No tests found → "❌ No tests found matching: $ARGUMENTS"

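The timeout case can be handled with GNU coreutils `timeout`, which exits with status 124 when the limit is hit. The limit below is an arbitrary example, not a project setting:

```shell
# Sketch: run a command with a hard time limit and report the outcome.
run_with_timeout() {
  secs="$1"; shift
  timeout "$secs" "$@"
  status=$?
  if [ "$status" -eq 124 ]; then
    echo "❌ Tests timed out after ${secs}s"
  elif [ "$status" -ne 0 ]; then
    echo "❌ Test execution failed (exit $status)"
  else
    echo "✅ Tests completed"
  fi
}
```

Usage: `run_with_timeout 300 npm test`.
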
## Important Notes

- Always use test-runner agent for analysis
- No mocking - real services only
- Check test structure if failures occur
- Keep output focused on failures

@ -0,0 +1,95 @@
# Context Directory

This directory contains project context documentation that provides comprehensive information about the current state, structure, and direction of your project. The context files serve as a knowledge base for AI agents and team members to quickly understand and contribute to the project.

## Purpose

The context system enables:
- **Fast Agent Onboarding**: New AI agents can quickly understand the project through standardized documentation
- **Project Continuity**: Maintain knowledge across development sessions and team changes
- **Consistent Understanding**: Ensure all contributors have access to the same project information
- **Living Documentation**: Keep project knowledge current and actionable

## Core Context Files

When fully initialized, this directory contains:

### Project Foundation
- **`project-brief.md`** - Project scope, goals, and key objectives
- **`project-vision.md`** - Long-term vision and strategic direction
- **`project-overview.md`** - High-level summary of features and capabilities
- **`progress.md`** - Current project status, completed work, and immediate next steps

### Technical Context
- **`tech-context.md`** - Dependencies, technologies, and development tools
- **`project-structure.md`** - Directory structure and file organization
- **`system-patterns.md`** - Architectural patterns and design decisions
- **`project-style-guide.md`** - Coding standards, conventions, and style preferences

### Product Context
- **`product-context.md`** - Product requirements, target users, and core functionality

## Context Commands

Use these commands to manage your project context:

### Initialize Context
```bash
/context:create
```
Analyzes your project and creates initial context documentation. Use this when:
- Starting a new project
- Adding context to an existing project
- Major project restructuring

### Load Context
```bash
/context:prime
```
Loads all context information for a new agent session. Use this when:
- Starting a new development session
- Onboarding a new team member
- Getting up to speed on project status

### Update Context
```bash
/context:update
```
Updates context documentation to reflect current project state. Use this:
- At the end of development sessions
- After completing major features
- When project direction changes
- After architectural changes

## Context Workflow

1. **Project Start**: Run `/context:create` to establish baseline documentation
2. **Session Start**: Run `/context:prime` to load current context
3. **Development**: Work on your project with full context awareness
4. **Session End**: Run `/context:update` to capture changes and progress

## Benefits

- **Reduced Onboarding Time**: New contributors understand the project quickly
- **Maintained Project Memory**: Nothing gets lost between sessions
- **Consistent Architecture**: Decisions are documented and followed
- **Clear Progress Tracking**: Always know what's been done and what's next
- **Enhanced AI Collaboration**: AI agents have full project understanding

## Best Practices

- **Keep Current**: Update context regularly, especially after major changes
- **Be Concise**: Focus on essential information that helps understanding
- **Stay Consistent**: Follow established formats and structures
- **Document Decisions**: Capture architectural and design decisions
- **Track Progress**: Maintain accurate status and next steps

## Integration

The context system integrates with:
- **Project Management**: Links with PRDs, epics, and task tracking
- **Development Workflow**: Supports continuous development sessions
- **Documentation**: Complements existing project documentation
- **Team Collaboration**: Provides shared understanding across contributors

Start with `/context:create` to initialize your project's knowledge base!

@ -0,0 +1 @@
@ -0,0 +1 @@
@ -0,0 +1,224 @@
# Agent Coordination

Rules for multiple agents working in parallel within the same epic worktree.

## Parallel Execution Principles

1. **File-level parallelism** - Agents working on different files never conflict
2. **Explicit coordination** - When same file needed, coordinate explicitly
3. **Fail fast** - Surface conflicts immediately, don't try to be clever
4. **Human resolution** - Conflicts are resolved by humans, not agents

## Work Stream Assignment

Each agent is assigned a work stream from the issue analysis:
```yaml
# From {issue}-analysis.md
Stream A: Database Layer
  Files: src/db/*, migrations/*
  Agent: backend-specialist

Stream B: API Layer
  Files: src/api/*
  Agent: api-specialist
```

Agents should only modify files in their assigned patterns.

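A minimal guard an agent could run before touching a file can be sketched like this; the glob and paths are illustrative, and the real pattern would come from `{issue}-analysis.md`:

```shell
# Sketch: check a path against this stream's assigned glob.
in_assigned_scope() {
  file="$1"
  pattern="$2"   # e.g. "src/db/*" from the analysis file
  case "$file" in
    $pattern) return 0 ;;
    *)        return 1 ;;
  esac
}
```

Usage: `in_assigned_scope "src/db/schema.sql" "src/db/*" || echo "❌ outside this stream's assignment"`.
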
## File Access Coordination

### Check Before Modify
Before modifying a shared file:
```bash
# Check if file is being modified
git status {file}

# If modified by another agent, wait and retry
while [[ -n $(git status --porcelain {file}) ]]; do
  echo "Waiting for {file} to be available..."
  sleep 30
done
```

### Atomic Commits
Make commits atomic and focused:
```bash
# Good - Single purpose commit
git add src/api/users.ts src/api/users.test.ts
git commit -m "Issue #1234: Add user CRUD endpoints"

# Bad - Mixed concerns
git add src/api/* src/db/* src/ui/*
git commit -m "Issue #1234: Multiple changes"
```

## Communication Between Agents

### Through Commits
Agents see each other's work through commits:
```bash
# Agent checks what others have done
git log --oneline -10

# Agent pulls latest changes
git pull origin epic/{name}
```

### Through Progress Files
Each stream maintains progress:
```markdown
# .claude/epics/{epic}/updates/{issue}/stream-A.md
---
stream: Database Layer
agent: backend-specialist
started: {datetime}
status: in_progress
---

## Completed
- Created user table schema
- Added migration files

## Working On
- Adding indexes

## Blocked
- None
```

### Through Analysis Files
The analysis file is the contract:
```yaml
# Agents read this to understand boundaries
Stream A:
  Files: src/db/*   # Agent A only touches these
Stream B:
  Files: src/api/*  # Agent B only touches these
```

## Handling Conflicts

### Conflict Detection
```bash
# If commit fails due to conflict
git commit -m "Issue #1234: Update"
# Error: conflicts exist

# Agent should report and wait
echo "❌ Conflict detected in {files}"
echo "Human intervention needed"
```

### Conflict Resolution
Always defer to humans:
1. Agent detects conflict
2. Agent reports issue
3. Agent pauses work
4. Human resolves
5. Agent continues

Never attempt automatic merge resolution.

## Synchronization Points

### Natural Sync Points
- After each commit
- Before starting new file
- When switching work streams
- Every 30 minutes of work

### Explicit Sync
```bash
# Pull latest changes
git pull --rebase origin epic/{name}

# If conflicts, stop and report
if [[ $? -ne 0 ]]; then
  echo "❌ Sync failed - human help needed"
  exit 1
fi
```

## Agent Communication Protocol

### Status Updates
Agents should update their status regularly:
```bash
# Update progress file every significant step
echo "✅ Completed: Database schema" >> stream-A.md
git add stream-A.md
git commit -m "Progress: Stream A - schema complete"
```

### Coordination Requests
When agents need to coordinate:
```markdown
# In stream-A.md
## Coordination Needed
- Need to update src/types/index.ts
- Will modify after Stream B commits
- ETA: 10 minutes
```

## Parallel Commit Strategy

### No Conflicts Possible
When working on completely different files:
```bash
# These can happen simultaneously
Agent-A: git commit -m "Issue #1234: Update database"
Agent-B: git commit -m "Issue #1235: Update UI"
Agent-C: git commit -m "Issue #1236: Add tests"
```

### Sequential When Needed
When touching shared resources:
```bash
# Agent A commits first
git add src/types/index.ts
git commit -m "Issue #1234: Update type definitions"

# Agent B waits, then proceeds
# (After A's commit)
git pull
git add src/api/users.ts
git commit -m "Issue #1235: Use new types"
```

## Best Practices

1. **Commit early and often** - Smaller commits = fewer conflicts
2. **Stay in your lane** - Only modify assigned files
3. **Communicate changes** - Update progress files
4. **Pull frequently** - Stay synchronized with other agents
5. **Fail loudly** - Report issues immediately
6. **Never force** - No `--force` flags ever

## Common Patterns

### Starting Work
```bash
1. cd ../epic-{name}
2. git pull
3. Check {issue}-analysis.md for assignment
4. Update stream-{X}.md with "started"
5. Begin work on assigned files
```

### During Work
```bash
1. Make changes to assigned files
2. Commit with clear message
3. Update progress file
4. Check for new commits from others
5. Continue or coordinate as needed
```

### Completing Work
```bash
1. Final commit for stream
2. Update stream-{X}.md with "completed"
3. Check if other streams need help
4. Report completion
```

@ -0,0 +1,147 @@
# Branch Operations

Git branches enable parallel development by allowing multiple developers to work on the same repository with isolated changes.

## Creating Branches

Always create branches from a clean main branch:
```bash
# Ensure main is up to date
git checkout main
git pull origin main

# Create branch for epic
git checkout -b epic/{name}
git push -u origin epic/{name}
```

The branch will be created and pushed to origin with upstream tracking.

## Working in Branches

### Agent Commits
- Agents commit directly to the branch
- Use small, focused commits
- Commit message format: `Issue #{number}: {description}`
- Example: `Issue #1234: Add user authentication schema`

### File Operations
```bash
# Working directory is the current directory
# (no need to change directories like with worktrees)

# Normal git operations work
git add {files}
git commit -m "Issue #{number}: {change}"

# View branch status
git status
git log --oneline -5
```

## Parallel Work in Same Branch

Multiple agents can work in the same branch if they coordinate file access:
```bash
# Agent A works on API
git add src/api/*
git commit -m "Issue #1234: Add user endpoints"

# Agent B works on UI (coordinate to avoid conflicts!)
git pull origin epic/{name}  # Get latest changes
git add src/ui/*
git commit -m "Issue #1235: Add dashboard component"
```

## Merging Branches

When the epic is complete, merge back to main:
```bash
# From main repository
git checkout main
git pull origin main

# Merge epic branch
git merge epic/{name}

# If successful, clean up
git branch -d epic/{name}
git push origin --delete epic/{name}
```

## Handling Conflicts

If merge conflicts occur:
```bash
# Conflicts will be shown
git status

# Human resolves conflicts
# Then continue merge
git add {resolved-files}
git commit
```

## Branch Management

### List Active Branches
```bash
git branch -a
```

### Remove Stale Branch
```bash
# Delete local branch
git branch -d epic/{name}

# Delete remote branch
git push origin --delete epic/{name}
```

### Check Branch Status
```bash
# Current branch info
git branch -v

# Compare with main
git log --oneline main..epic/{name}
```

## Best Practices

1. **One branch per epic** - Not per issue
2. **Clean before create** - Always start from updated main
3. **Commit frequently** - Small commits are easier to merge
4. **Pull before push** - Get latest changes to avoid conflicts
5. **Use descriptive branches** - `epic/feature-name` not `feature`

## Common Issues

### Branch Already Exists
```bash
# Delete old branch first
git branch -D epic/{name}
git push origin --delete epic/{name}
# Then create new one
```

### Cannot Push Branch
```bash
# Check if branch exists remotely
git ls-remote origin epic/{name}

# Push with upstream
git push -u origin epic/{name}
```

### Merge Conflicts During Pull
```bash
# Stash changes if needed
git stash

# Pull and rebase
git pull --rebase origin epic/{name}

# Restore changes
git stash pop
```

@ -0,0 +1,118 @@
# DateTime Rule

## Getting Current Date and Time

When any command requires the current date/time (for frontmatter, timestamps, or logs), you MUST obtain the REAL current date/time from the system rather than estimating or using placeholder values.

### How to Get Current DateTime

Use the `date` command to get the current ISO 8601 formatted datetime:

```bash
# Get current datetime in ISO 8601 format (works on Linux/Mac)
date -u +"%Y-%m-%dT%H:%M:%SZ"

# Alternative for systems that support it
date --iso-8601=seconds

# For Windows (if using PowerShell)
Get-Date -Format "yyyy-MM-ddTHH:mm:ssZ"
```

### Required Format

All dates in frontmatter MUST use ISO 8601 format with UTC timezone:
- Format: `YYYY-MM-DDTHH:MM:SSZ`
- Example: `2024-01-15T14:30:45Z`

### Usage in Frontmatter

When creating or updating frontmatter in any file (PRD, Epic, Task, Progress), always use the real current datetime:

```yaml
---
name: feature-name
created: 2024-01-15T14:30:45Z  # Use actual output from date command
updated: 2024-01-15T14:30:45Z  # Use actual output from date command
---
```

### Implementation Instructions

1. **Before writing any file with frontmatter:**
   - Run: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
   - Store the output
   - Use this exact value in the frontmatter

2. **For commands that create files:**
   - PRD creation: Use real date for `created` field
   - Epic creation: Use real date for `created` field
   - Task creation: Use real date for both `created` and `updated` fields
   - Progress tracking: Use real date for `started` and `last_sync` fields

3. **For commands that update files:**
   - Always update the `updated` field with current real datetime
   - Preserve the original `created` field
   - For sync operations, update `last_sync` with real datetime

### Examples

**Creating a new PRD:**
```bash
# First, get current datetime
CURRENT_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Output: 2024-01-15T14:30:45Z

# Then use in frontmatter:
---
name: user-authentication
description: User authentication and authorization system
status: backlog
created: 2024-01-15T14:30:45Z  # Use the actual $CURRENT_DATE value
---
```

**Updating an existing task:**
```bash
# Get current datetime for update
UPDATE_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

# Update only the 'updated' field:
---
name: implement-login-api
status: in-progress
created: 2024-01-10T09:15:30Z  # Keep original
updated: 2024-01-15T14:30:45Z  # Use new $UPDATE_DATE value
---
```

### Important Notes

- **Never use placeholder dates** like `[Current ISO date/time]` or `YYYY-MM-DD`
- **Never estimate dates** - always get the actual system time
- **Always use UTC** (the `Z` suffix) for consistency across timezones
- **Preserve timezone consistency** - all dates in the system use UTC

### Cross-Platform Compatibility

If you need to ensure compatibility across different systems, chain fallbacks with `||`. (Note: comments must not sit between backslash-continued lines, or they comment out the rest of the chain; ending a line with `||` continues it safely.)

```bash
# Try `date -u` first, fall back to plain `date`, then Python as a last resort
date -u +"%Y-%m-%dT%H:%M:%SZ" 2>/dev/null ||
  date +"%Y-%m-%dT%H:%M:%SZ" 2>/dev/null ||
  python3 -c "from datetime import datetime; print(datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'))" 2>/dev/null ||
  python -c "from datetime import datetime; print(datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'))" 2>/dev/null
```

## Rule Priority

This rule has **HIGHEST PRIORITY** and must be followed by all commands that:
- Create new files with frontmatter
- Update existing files with frontmatter
- Track timestamps or progress
- Log any time-based information

Commands affected: prd-new, prd-parse, epic-decompose, epic-sync, issue-start, issue-sync, and any other command that writes timestamps.

@ -0,0 +1,58 @@
# Frontmatter Operations Rule

Standard patterns for working with YAML frontmatter in markdown files.

## Reading Frontmatter

Extract frontmatter from any markdown file:
1. Look for content between `---` markers at start of file
2. Parse as YAML
3. If invalid or missing, use sensible defaults

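Step 1 can be sketched with standard tools; awk is used here, though any YAML-aware parser would do:

```shell
# Sketch: print the YAML between the first pair of --- markers.
read_frontmatter() {
  awk 'NR == 1 && $0 == "---" { inside = 1; next }
       inside && $0 == "---"  { exit }
       inside                 { print }' "$1"
}
```

A file without frontmatter produces empty output, which maps naturally onto the "use sensible defaults" case.
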
## Updating Frontmatter

When updating existing files:
1. Preserve all existing fields
2. Only update specified fields
3. Always update `updated` field with current datetime (see `/rules/datetime.md`)

## Standard Fields

### All Files
```yaml
---
name: {identifier}
created: {ISO datetime}  # Never change after creation
updated: {ISO datetime}  # Update on any modification
---
```

### Status Values
- PRDs: `backlog`, `in-progress`, `complete`
- Epics: `backlog`, `in-progress`, `completed`
- Tasks: `open`, `in-progress`, `closed`

### Progress Tracking
```yaml
progress: {0-100}%    # For epics
completion: {0-100}%  # For progress files
```

## Creating New Files

Always include frontmatter when creating markdown files:
```yaml
---
name: {from_arguments_or_context}
status: {initial_status}
created: {current_datetime}
updated: {current_datetime}
---
```

## Important Notes

- Never modify `created` field after initial creation
- Always use real datetime from system (see `/rules/datetime.md`)
- Validate frontmatter exists before trying to parse
- Use consistent field names across all files

@ -0,0 +1,86 @@

# GitHub Operations Rule

Standard patterns for GitHub CLI operations across all commands.

## CRITICAL: Repository Protection

**Before ANY GitHub operation that creates/modifies issues or PRs:**

```bash
# Check if remote origin is the CCPM template repository
remote_url=$(git remote get-url origin 2>/dev/null || echo "")
if [[ "$remote_url" == *"automazeio/ccpm"* ]] || [[ "$remote_url" == *"automazeio/ccpm.git"* ]]; then
  echo "❌ ERROR: You're trying to sync with the CCPM template repository!"
  echo ""
  echo "This repository (automazeio/ccpm) is a template for others to use."
  echo "You should NOT create issues or PRs here."
  echo ""
  echo "To fix this:"
  echo "1. Fork this repository to your own GitHub account"
  echo "2. Update your remote origin:"
  echo "   git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
  echo ""
  echo "Or if this is a new project:"
  echo "1. Create a new repository on GitHub"
  echo "2. Update your remote origin:"
  echo "   git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
  echo ""
  echo "Current remote: $remote_url"
  exit 1
fi
```

This check MUST be performed in ALL commands that:
- Create issues (`gh issue create`)
- Edit issues (`gh issue edit`)
- Comment on issues (`gh issue comment`)
- Create PRs (`gh pr create`)
- Any other operation that modifies the GitHub repository
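The check above can be factored into a small helper so each write command calls it once; `is_template_remote` is an illustrative name, not part of the rule itself:

```shell
# Return success when a remote URL points at the CCPM template repository.
is_template_remote() {
  case "$1" in
    *automazeio/ccpm*) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage before any write operation:
#   remote_url=$(git remote get-url origin 2>/dev/null || echo "")
#   is_template_remote "$remote_url" && { echo "❌ ERROR: template repo"; exit 1; }
```

The `case` glob matches both HTTPS and SSH remote forms, with or without a `.git` suffix.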

## Authentication

**Don't pre-check authentication.** Just run the command and handle failure:

```bash
gh {command} || echo "❌ GitHub CLI failed. Run: gh auth login"
```

## Common Operations

### Get Issue Details
```bash
gh issue view {number} --json state,title,labels,body
```

### Create Issue
```bash
# ALWAYS check remote origin first!
gh issue create --title "{title}" --body-file {file} --label "{labels}"
```

### Update Issue
```bash
# ALWAYS check remote origin first!
gh issue edit {number} --add-label "{label}" --add-assignee @me
```

### Add Comment
```bash
# ALWAYS check remote origin first!
gh issue comment {number} --body-file {file}
```

## Error Handling

If any gh command fails:
1. Show clear error: "❌ GitHub operation failed: {command}"
2. Suggest fix: "Run: gh auth login" or check issue number
3. Don't retry automatically
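One way to sketch that policy is a generic wrapper; the `run_checked` name is ours, and `false` stands in for a failing `gh` call:

```shell
# Run a command once; on failure, print the error pattern above and give up.
run_checked() {
  "$@" || {
    echo "❌ GitHub operation failed: $*"
    echo "Run: gh auth login (or check the issue number)"
    return 1
  }
}

run_checked false || true   # a failing command triggers the message once, no retry
```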

## Important Notes

- **ALWAYS** check remote origin before ANY write operation to GitHub
- Trust that gh CLI is installed and authenticated
- Use --json for structured output when parsing
- Keep operations atomic - one gh command per action
- Don't check rate limits preemptively
@ -0,0 +1,174 @@

# Standard Patterns for Commands

This file defines common patterns that all commands should follow to maintain consistency and simplicity.

## Core Principles

1. **Fail Fast** - Check critical prerequisites, then proceed
2. **Trust the System** - Don't over-validate things that rarely fail
3. **Clear Errors** - When something fails, say exactly what and how to fix it
4. **Minimal Output** - Show what matters, skip decoration

## Standard Validations

### Minimal Preflight
Only check what's absolutely necessary:
```markdown
## Quick Check
1. If command needs specific directory/file:
   - Check it exists: `test -f {file} || echo "❌ {file} not found"`
   - If missing, tell user exact command to fix it
2. If command needs GitHub:
   - Assume `gh` is authenticated (it usually is)
   - Only check on actual failure
```
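A hedged shell sketch of that preflight shape - check the one thing you need, fail with the exact fix, then proceed; the file and command names are examples:

```shell
# Fail fast with an actionable message; return instead of exit so callers decide.
quick_check() {
  test -f "$1" || {
    echo "❌ $1 not found: Run $2"
    return 1
  }
}

touch prd.md   # stand-in for a file the command actually needs
quick_check prd.md "/pm:prd-new feature" && echo "preflight ok"
```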

### DateTime Handling
```markdown
Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
```
Don't repeat full instructions - just reference `/rules/datetime.md` once.

### Error Messages
Keep them short and actionable:
```markdown
❌ {What failed}: {Exact solution}
Example: "❌ Epic not found: Run /pm:prd-parse feature-name"
```

## Standard Output Formats

### Success Output
```markdown
✅ {Action} complete
- {Key result 1}
- {Key result 2}
Next: {Single suggested action}
```

### List Output
```markdown
{Count} {items} found:
- {item 1}: {key detail}
- {item 2}: {key detail}
```

### Progress Output
```markdown
{Action}... {current}/{total}
```

## File Operations

### Check and Create
```markdown
# Don't ask permission, just create what's needed
mkdir -p .claude/{directory} 2>/dev/null
```

### Read with Fallback
```markdown
# Try to read, continue if missing
if [ -f {file} ]; then
  # Read and use file
else
  # Use sensible default
fi
```

## GitHub Operations

### Trust gh CLI
```markdown
# Don't pre-check auth, just try the operation
gh {command} || echo "❌ GitHub CLI failed. Run: gh auth login"
```

### Simple Issue Operations
```markdown
# Get what you need in one call
gh issue view {number} --json state,title,body
```

## Common Patterns to Avoid

### DON'T: Over-validate
```markdown
# Bad - too many checks
1. Check directory exists
2. Check permissions
3. Check git status
4. Check GitHub auth
5. Check rate limits
6. Validate every field
```

### DO: Check essentials
```markdown
# Good - just what's needed
1. Check target exists
2. Try the operation
3. Handle failure clearly
```

### DON'T: Verbose output
```markdown
# Bad - too much information
🎯 Starting operation...
📋 Validating prerequisites...
✅ Step 1 complete
✅ Step 2 complete
📊 Statistics: ...
💡 Tips: ...
```

### DO: Concise output
```markdown
# Good - just results
✅ Done: 3 files created
Failed: auth.test.js (syntax error - line 42)
```

### DON'T: Ask too many questions
```markdown
# Bad - too interactive
"Continue? (yes/no)"
"Overwrite? (yes/no)"
"Are you sure? (yes/no)"
```

### DO: Smart defaults
```markdown
# Good - proceed with sensible defaults
# Only ask when destructive or ambiguous
"This will delete 10 files. Continue? (yes/no)"
```

## Quick Reference

### Essential Tools Only
- Read/List operations: `Read, LS`
- File creation: `Read, Write, LS`
- GitHub operations: Add `Bash`
- Complex analysis: Add `Task` (sparingly)

### Status Indicators
- ✅ Success (use sparingly)
- ❌ Error (always with solution)
- ⚠️ Warning (only if action needed)
- No emoji for normal output

### Exit Strategies
- Success: Brief confirmation
- Failure: Clear error + exact fix
- Partial: Show what worked, what didn't

## Remember

**Simple is not simplistic** - We still handle errors properly, we just don't try to prevent every possible edge case. We trust that:
- The file system usually works
- GitHub CLI is usually authenticated
- Git repositories are usually valid
- Users know what they're doing

Focus on the happy path, fail gracefully when things go wrong.
@ -0,0 +1,79 @@

# Strip Frontmatter

Standard approach for removing YAML frontmatter before sending content to GitHub.

## The Problem

YAML frontmatter contains internal metadata that should not appear in GitHub issues:
- status, created, updated fields
- Internal references and IDs
- Local file paths

## The Solution

Use sed to strip frontmatter from any markdown file:

```bash
# Strip frontmatter (everything between first two --- lines)
sed '1,/^---$/d; 1,/^---$/d' input.md > output.md
```

This removes:
1. The opening `---` line
2. All YAML content
3. The closing `---` line

## When to Strip Frontmatter

Always strip frontmatter when:
- Creating GitHub issues from markdown files
- Posting file content as comments
- Displaying content to external users
- Syncing to any external system

## Examples

### Creating an issue from a file
```bash
# Bad - includes frontmatter
gh issue create --body-file task.md

# Good - strips frontmatter
sed '1,/^---$/d; 1,/^---$/d' task.md > /tmp/clean.md
gh issue create --body-file /tmp/clean.md
```

### Posting a comment
```bash
# Strip frontmatter before posting
sed '1,/^---$/d; 1,/^---$/d' progress.md > /tmp/comment.md
gh issue comment 123 --body-file /tmp/comment.md
```

### In a loop
```bash
for file in *.md; do
  # Strip frontmatter from each file
  sed '1,/^---$/d; 1,/^---$/d' "$file" > "/tmp/$(basename "$file")"
  # Use the clean version
done
```

## Alternative Approaches

If sed is not available or you need more control:

```bash
# Using awk
awk 'BEGIN{fm=0} /^---$/{fm++; next} fm==2{print}' input.md > output.md

# Using grep with line numbers (find the second ---, print everything after it)
line=$(grep -n "^---$" input.md | sed -n '2p' | cut -d: -f1)
tail -n +$((line + 1)) input.md
```

## Important Notes

- Always test with a sample file first
- Keep original files intact
- Use temporary files for cleaned content
- Files without frontmatter are NOT handled gracefully: the sed range deletes from line 1 to the first `---` (or the whole file if there is none), so check that the file starts with `---` before stripping
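A small guard for the no-frontmatter case the note above warns about - strip only when the file actually opens with `---`; the function name is illustrative:

```shell
# Strip YAML frontmatter if present; otherwise pass the file through unchanged.
strip_frontmatter() {
  if [ "$(head -n 1 "$1")" = "---" ]; then
    sed '1,/^---$/d; 1,/^---$/d' "$1"
  else
    cat "$1"
  fi
}
```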
@ -0,0 +1,60 @@

# Test Execution Rule

Standard patterns for running tests across all testing commands.

## Core Principles

1. **Always use test-runner agent** from `.claude/agents/test-runner.md`
2. **No mocking** - use real services for accurate results
3. **Verbose output** - capture everything for debugging
4. **Check test structure first** - before assuming code bugs

## Execution Pattern

```markdown
Execute tests for: {target}

Requirements:
- Run with verbose output
- No mock services
- Capture full stack traces
- Analyze test structure if failures occur
```

## Output Focus

### Success
Keep it simple:
```
✅ All tests passed ({count} tests in {time}s)
```

### Failure
Focus on what failed:
```
❌ Test failures: {count}

{test_name} - {file}:{line}
Error: {message}
Fix: {suggestion}
```

## Common Issues

- Test not found → Check file path
- Timeout → Kill process, report incomplete
- Framework missing → Install dependencies

## Cleanup

Always clean up after tests:
```bash
pkill -f "jest|mocha|pytest" 2>/dev/null || true
```

## Important Notes

- Don't parallelize tests (avoid conflicts)
- Let each test complete fully
- Report failures with actionable fixes
- Focus output on failures, not successes
@ -0,0 +1,106 @@

# AST-Grep Integration Protocol for Cursor Agent

## When to Use AST-Grep

Use `ast-grep` (if installed) instead of plain regex or text search when:

- **Structural code patterns** are involved (e.g., finding all function calls, class definitions, or method implementations)
- **Language-aware refactoring** is required (e.g., renaming variables, updating function signatures, or changing imports)
- **Complex code analysis** is needed (e.g., finding all usages of a pattern across different syntactic contexts)
- **Cross-language searches** are necessary (e.g., working with both Ruby and TypeScript in a monorepo)
- **Semantic code understanding** is important (e.g., finding patterns based on code structure, not just text)

## AST-Grep Command Patterns

### Basic Search Template:
```sh
ast-grep --pattern '$PATTERN' --lang $LANGUAGE $PATH
```

### Common Use Cases

- **Find function calls:**
  `ast-grep --pattern 'functionName($$$)' --lang javascript .`
- **Find class definitions:**
  `ast-grep --pattern 'class $NAME { $$$ }' --lang typescript .`
- **Find variable assignments:**
  `ast-grep --pattern '$VAR = $$$' --lang ruby .`
- **Find import statements:**
  `ast-grep --pattern 'import { $$$ } from "$MODULE"' --lang javascript .`
- **Find method calls on objects:**
  `ast-grep --pattern '$OBJ.$METHOD($$$)' --lang typescript .`
- **Find React hooks:**
  `ast-grep --pattern 'const [$STATE, $SETTER] = useState($$$)' --lang typescript .`
- **Find Ruby class definitions:**
  `ast-grep --pattern 'class $NAME < $$$; $$$; end' --lang ruby .`

## Pattern Syntax Reference

- `$VAR` — matches any single node and captures it
- `$$$` — matches zero or more nodes (wildcard)
- `$$` — matches one or more nodes
- Literal code — matches exactly as written
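Since ast-grep may be absent, a command can dispatch to it when available and fall back to plain grep otherwise; the fallback is only a rough textual approximation of the structural pattern, and `search_calls` is an illustrative name:

```shell
# Find calls to a given function: structurally with ast-grep when available,
# textually with grep otherwise.
search_calls() {
  fn="$1"; path="$2"
  if command -v ast-grep >/dev/null 2>&1; then
    ast-grep --pattern "${fn}(\$\$\$)" --lang javascript "$path"
  else
    grep -rn "${fn}(" "$path"
  fi
}
```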

## Supported Languages

- javascript, typescript, ruby, python, go, rust, java, c, cpp, html, css, yaml, json, and more

## Integration Workflow

### Before using ast-grep:
1. **Check if ast-grep is installed:**
   If not, skip and fall back to regex/semantic search.
   ```sh
   command -v ast-grep >/dev/null 2>&1 || echo "ast-grep not installed, skipping AST search"
   ```
2. **Identify** if the task involves structural code patterns or language-aware refactoring.
3. **Determine** the appropriate language(s) to search.
4. **Construct** the pattern using ast-grep syntax.
5. **Run** ast-grep to gather precise structural information.
6. **Use** results to inform code edits, refactoring, or further analysis.

### Example Workflow

When asked to "find all Ruby service objects that call `perform`":

1. **Check for ast-grep:**
   ```sh
   command -v ast-grep >/dev/null 2>&1 && ast-grep --pattern 'perform($$$)' --lang ruby app/services/
   ```
2. **Analyze** results structurally.
3. **Use** codebase semantic search for additional context if needed.
4. **Make** informed edits based on structural understanding.

### Combine ast-grep with Internal Tools

- **codebase_search** for semantic context and documentation
- **read_file** for examining specific files found by ast-grep
- **edit_file** for making precise, context-aware code changes

### Advanced Usage
- **JSON output for programmatic processing:**
  `ast-grep --pattern '$PATTERN' --lang $LANG $PATH --json`
- **Replace patterns:**
  `ast-grep --pattern '$OLD_PATTERN' --rewrite '$NEW_PATTERN' --lang $LANG $PATH`
- **Interactive mode:**
  `ast-grep --pattern '$PATTERN' --lang $LANG $PATH --interactive`

## Key Benefits Over Regex

1. **Language-aware** — understands syntax and semantics
2. **Structural matching** — finds patterns regardless of formatting
3. **Cross-language** — works consistently across different languages
4. **Precise refactoring** — makes structural changes safely
5. **Context-aware** — understands code hierarchy and scope

## Decision Matrix: When to Use Each Tool

| Task Type              | Tool Choice          | Reason                |
|------------------------|----------------------|-----------------------|
| Find text patterns     | grep_search          | Simple text matching  |
| Find code structures   | ast-grep             | Syntax-aware search   |
| Understand semantics   | codebase_search      | AI-powered context    |
| Make edits             | edit_file            | Precise file editing  |
| Structural refactoring | ast-grep + edit_file | Structure + precision |

**Always prefer ast-grep for code structure analysis over regex-based approaches, but only if it is installed and available.**
@ -0,0 +1,136 @@

# Worktree Operations

Git worktrees enable parallel development by allowing multiple working directories for the same repository.

## Creating Worktrees

Always create worktrees from a clean main branch:
```bash
# Ensure main is up to date
git checkout main
git pull origin main

# Create worktree for epic
git worktree add ../epic-{name} -b epic/{name}
```

The worktree will be created as a sibling directory to maintain clean separation.

## Working in Worktrees

### Agent Commits
- Agents commit directly to the worktree
- Use small, focused commits
- Commit message format: `Issue #{number}: {description}`
- Example: `Issue #1234: Add user authentication schema`

### File Operations
```bash
# Working directory is the worktree
cd ../epic-{name}

# Normal git operations work
git add {files}
git commit -m "Issue #{number}: {change}"

# View worktree status
git status
```

## Parallel Work in Same Worktree

Multiple agents can work in the same worktree if they touch different files:
```bash
# Agent A works on API
git add src/api/*
git commit -m "Issue #1234: Add user endpoints"

# Agent B works on UI (no conflict!)
git add src/ui/*
git commit -m "Issue #1235: Add dashboard component"
```

## Merging Worktrees

When epic is complete, merge back to main:
```bash
# From main repository (not worktree)
cd {main-repo}
git checkout main
git pull origin main

# Merge epic branch
git merge epic/{name}

# If successful, clean up
git worktree remove ../epic-{name}
git branch -d epic/{name}
```
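The create/work/remove lifecycle above, run end to end in a throwaway repository; the temp-dir layout and the `epic/demo` name are ours, and the identity flags only keep the example self-contained:

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=bot@example.com -c user.name=bot \
  commit -q --allow-empty -m "init"

# Create a worktree on a new epic branch (placed next to the repo)
git -C "$repo" worktree add "$repo.epic" -b epic/demo
git -C "$repo" worktree list

# After merging, clean up the worktree and the branch
git -C "$repo" worktree remove "$repo.epic"
git -C "$repo" branch -d epic/demo
rm -rf "$repo"
```

`git worktree list` shows two entries while the epic worktree exists; after `worktree remove` only the main checkout remains.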

## Handling Conflicts

If merge conflicts occur:
```bash
# Conflicts will be shown
git status

# Human resolves conflicts
# Then continue merge
git add {resolved-files}
git commit
```

## Worktree Management

### List Active Worktrees
```bash
git worktree list
```

### Remove Stale Worktree
```bash
# If worktree directory was deleted
git worktree prune

# Force remove worktree
git worktree remove --force ../epic-{name}
```

### Check Worktree Status
```bash
# From main repo
cd ../epic-{name} && git status && cd -
```

## Best Practices

1. **One worktree per epic** - Not per issue
2. **Clean before create** - Always start from updated main
3. **Commit frequently** - Small commits are easier to merge
4. **Delete after merge** - Don't leave stale worktrees
5. **Use descriptive branches** - `epic/feature-name` not `feature`

## Common Issues

### Worktree Already Exists
```bash
# Remove old worktree first
git worktree remove ../epic-{name}
# Then create new one
```

### Branch Already Exists
```bash
# Delete old branch
git branch -D epic/{name}
# Or use existing branch
git worktree add ../epic-{name} epic/{name}
```

### Cannot Remove Worktree
```bash
# Force removal
git worktree remove --force ../epic-{name}
# Clean up references
git worktree prune
```
@ -0,0 +1,59 @@

#!/bin/bash
echo "Getting tasks..."
echo ""
echo ""

echo "🚫 Blocked Tasks"
echo "================"
echo ""

found=0

for epic_dir in .claude/epics/*/; do
  [ -d "$epic_dir" ] || continue
  epic_name=$(basename "$epic_dir")

  for task_file in "$epic_dir"[0-9]*.md; do
    [ -f "$task_file" ] || continue

    # Check if task is open
    status=$(grep "^status:" "$task_file" | head -1 | sed 's/^status: *//')
    [ "$status" != "open" ] && [ -n "$status" ] && continue

    # Check for dependencies
    deps=$(grep "^depends_on:" "$task_file" | head -1 | sed 's/^depends_on: *\[//' | sed 's/\]//' | sed 's/,/ /g')

    if [ -n "$deps" ] && [ "$deps" != "depends_on:" ]; then
      task_name=$(grep "^name:" "$task_file" | head -1 | sed 's/^name: *//')
      task_num=$(basename "$task_file" .md)

      echo "⏸️ Task #$task_num - $task_name"
      echo " Epic: $epic_name"
      echo " Blocked by: [$deps]"

      # Check status of dependencies
      open_deps=""
      for dep in $deps; do
        dep_file="$epic_dir$dep.md"
        if [ -f "$dep_file" ]; then
          dep_status=$(grep "^status:" "$dep_file" | head -1 | sed 's/^status: *//')
          [ "$dep_status" = "open" ] && open_deps="$open_deps #$dep"
        fi
      done

      [ -n "$open_deps" ] && echo " Waiting for:$open_deps"
      echo ""
      ((found++))
    fi
  done
done

if [ $found -eq 0 ]; then
  echo "No blocked tasks found!"
  echo ""
  echo "💡 All tasks with dependencies are either completed or in progress."
else
  echo "📊 Total blocked: $found tasks"
fi

exit 0
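The `depends_on` parsing used in the script above can be checked in isolation; the sample line mirrors the task frontmatter format:

```shell
# Turn "depends_on: [001, 002]" into a whitespace-separated list.
line='depends_on: [001, 002]'
deps=$(echo "$line" | sed 's/^depends_on: *\[//' | sed 's/\]//' | sed 's/,/ /g')
for dep in $deps; do echo "dep=$dep"; done
```

Unquoted `$deps` in the `for` loop relies on word splitting, which is why commas are replaced with spaces first.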
@ -0,0 +1,94 @@

#!/bin/bash
echo "Getting epics..."
echo ""
echo ""

[ ! -d ".claude/epics" ] && echo "📁 No epics directory found. Create your first epic with: /pm:prd-parse <feature-name>" && exit 0
[ -z "$(ls -d .claude/epics/*/ 2>/dev/null)" ] && echo "📁 No epics found. Create your first epic with: /pm:prd-parse <feature-name>" && exit 0

echo "📚 Project Epics"
echo "================"
echo ""

# Initialize variables to collect epics by status
planning_epics=""
in_progress_epics=""
completed_epics=""

# Process all epics
for dir in .claude/epics/*/; do
  [ -d "$dir" ] || continue
  [ -f "$dir/epic.md" ] || continue

  # Extract metadata
  n=$(grep "^name:" "$dir/epic.md" | head -1 | sed 's/^name: *//')
  s=$(grep "^status:" "$dir/epic.md" | head -1 | sed 's/^status: *//' | tr '[:upper:]' '[:lower:]')
  p=$(grep "^progress:" "$dir/epic.md" | head -1 | sed 's/^progress: *//')
  g=$(grep "^github:" "$dir/epic.md" | head -1 | sed 's/^github: *//')

  # Defaults
  [ -z "$n" ] && n=$(basename "$dir")
  [ -z "$p" ] && p="0%"

  # Count tasks
  t=$(ls "$dir"[0-9]*.md 2>/dev/null | wc -l)

  # Format output with GitHub issue number if available
  if [ -n "$g" ]; then
    i=$(echo "$g" | grep -o '/[0-9]*$' | tr -d '/')
    entry=" 📋 ${dir}epic.md (#$i) - $p complete ($t tasks)"
  else
    entry=" 📋 ${dir}epic.md - $p complete ($t tasks)"
  fi

  # Categorize by status (handle various status values)
  case "$s" in
    planning|draft|"")
      planning_epics="${planning_epics}${entry}\n"
      ;;
    in-progress|in_progress|active|started)
      in_progress_epics="${in_progress_epics}${entry}\n"
      ;;
    completed|complete|done|closed|finished)
      completed_epics="${completed_epics}${entry}\n"
      ;;
    *)
      # Default to planning for unknown statuses
      planning_epics="${planning_epics}${entry}\n"
      ;;
  esac
done

# Display categorized epics
echo "📝 Planning:"
if [ -n "$planning_epics" ]; then
  echo -e "$planning_epics" | sed '/^$/d'
else
  echo " (none)"
fi

echo ""
echo "🚀 In Progress:"
if [ -n "$in_progress_epics" ]; then
  echo -e "$in_progress_epics" | sed '/^$/d'
else
  echo " (none)"
fi

echo ""
echo "✅ Completed:"
if [ -n "$completed_epics" ]; then
  echo -e "$completed_epics" | sed '/^$/d'
else
  echo " (none)"
fi

# Summary
echo ""
echo "📊 Summary"
total=$(ls -d .claude/epics/*/ 2>/dev/null | wc -l)
tasks=$(find .claude/epics -name "[0-9]*.md" 2>/dev/null | wc -l)
echo " Total epics: $total"
echo " Total tasks: $tasks"

exit 0
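The issue-number extraction used for the `github:` field can be exercised on a sample URL:

```shell
# Pull the trailing issue number out of a GitHub issue URL.
g="https://github.com/alice/project/issues/42"
i=$(echo "$g" | grep -o '/[0-9]*$' | tr -d '/')
echo "issue #$i"
```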
@ -0,0 +1,91 @@

#!/bin/bash

epic_name="$1"

if [ -z "$epic_name" ]; then
  echo "❌ Please provide an epic name"
  echo "Usage: /pm:epic-show <epic-name>"
  exit 1
fi

echo "Getting epic..."
echo ""
echo ""

epic_dir=".claude/epics/$epic_name"
epic_file="$epic_dir/epic.md"

if [ ! -f "$epic_file" ]; then
  echo "❌ Epic not found: $epic_name"
  echo ""
  echo "Available epics:"
  for dir in .claude/epics/*/; do
    [ -d "$dir" ] && echo " • $(basename "$dir")"
  done
  exit 1
fi

# Display epic details
echo "📚 Epic: $epic_name"
echo "================================"
echo ""

# Extract metadata
status=$(grep "^status:" "$epic_file" | head -1 | sed 's/^status: *//')
progress=$(grep "^progress:" "$epic_file" | head -1 | sed 's/^progress: *//')
github=$(grep "^github:" "$epic_file" | head -1 | sed 's/^github: *//')
created=$(grep "^created:" "$epic_file" | head -1 | sed 's/^created: *//')

echo "📊 Metadata:"
echo " Status: ${status:-planning}"
echo " Progress: ${progress:-0%}"
[ -n "$github" ] && echo " GitHub: $github"
echo " Created: ${created:-unknown}"
echo ""

# Show tasks
echo "📝 Tasks:"
task_count=0
open_count=0
closed_count=0

for task_file in "$epic_dir"/[0-9]*.md; do
  [ -f "$task_file" ] || continue

  task_num=$(basename "$task_file" .md)
  task_name=$(grep "^name:" "$task_file" | head -1 | sed 's/^name: *//')
  task_status=$(grep "^status:" "$task_file" | head -1 | sed 's/^status: *//')
  parallel=$(grep "^parallel:" "$task_file" | head -1 | sed 's/^parallel: *//')

  if [ "$task_status" = "closed" ] || [ "$task_status" = "completed" ]; then
    echo " ✅ #$task_num - $task_name"
    ((closed_count++))
  else
    # Keep the (parallel) marker on the same line as the task
    if [ "$parallel" = "true" ]; then
      echo " ⬜ #$task_num - $task_name (parallel)"
    else
      echo " ⬜ #$task_num - $task_name"
    fi
    ((open_count++))
  fi

  ((task_count++))
done

if [ $task_count -eq 0 ]; then
  echo " No tasks created yet"
  echo " Run: /pm:epic-decompose $epic_name"
fi

echo ""
echo "📈 Statistics:"
echo " Total tasks: $task_count"
echo " Open: $open_count"
echo " Closed: $closed_count"
[ $task_count -gt 0 ] && echo " Completion: $((closed_count * 100 / task_count))%"

# Next actions
echo ""
echo "💡 Actions:"
[ $task_count -eq 0 ] && echo " • Decompose into tasks: /pm:epic-decompose $epic_name"
[ -z "$github" ] && [ $task_count -gt 0 ] && echo " • Sync to GitHub: /pm:epic-sync $epic_name"
[ -n "$github" ] && [ "$status" != "completed" ] && echo " • Start work: /pm:epic-start $epic_name"

exit 0
@ -0,0 +1,90 @@

#!/bin/bash

echo "Getting status..."
echo ""
echo ""

epic_name="$1"

if [ -z "$epic_name" ]; then
  echo "❌ Please specify an epic name"
  echo "Usage: /pm:epic-status <epic-name>"
  echo ""
  echo "Available epics:"
  for dir in .claude/epics/*/; do
    [ -d "$dir" ] && echo " • $(basename "$dir")"
  done
  exit 1
else
  # Show status for specific epic
  epic_dir=".claude/epics/$epic_name"
  epic_file="$epic_dir/epic.md"

  if [ ! -f "$epic_file" ]; then
    echo "❌ Epic not found: $epic_name"
    echo ""
    echo "Available epics:"
    for dir in .claude/epics/*/; do
      [ -d "$dir" ] && echo " • $(basename "$dir")"
    done
    exit 1
  fi

  echo "📚 Epic Status: $epic_name"
  echo "================================"
  echo ""

  # Extract metadata
  status=$(grep "^status:" "$epic_file" | head -1 | sed 's/^status: *//')
  progress=$(grep "^progress:" "$epic_file" | head -1 | sed 's/^progress: *//')
  github=$(grep "^github:" "$epic_file" | head -1 | sed 's/^github: *//')

  # Count tasks
  total=0
  open=0
  closed=0
  blocked=0

  # Iterate over task files in the epic directory
  for task_file in "$epic_dir"/[0-9]*.md; do
    [ -f "$task_file" ] || continue
    ((total++))

    task_status=$(grep "^status:" "$task_file" | head -1 | sed 's/^status: *//')
    deps=$(grep "^depends_on:" "$task_file" | head -1 | sed 's/^depends_on: *\[//' | sed 's/\]//')

    if [ "$task_status" = "closed" ] || [ "$task_status" = "completed" ]; then
      ((closed++))
    elif [ -n "$deps" ] && [ "$deps" != "depends_on:" ]; then
      ((blocked++))
    else
      ((open++))
    fi
  done

  # Display progress bar
  if [ $total -gt 0 ]; then
    percent=$((closed * 100 / total))
    filled=$((percent * 20 / 100))
    empty=$((20 - filled))

    echo -n "Progress: ["
    [ $filled -gt 0 ] && printf '%0.s█' $(seq 1 $filled)
    [ $empty -gt 0 ] && printf '%0.s░' $(seq 1 $empty)
    echo "] $percent%"
  else
    echo "Progress: No tasks created"
  fi

  echo ""
  echo "📊 Breakdown:"
  echo " Total tasks: $total"
  echo " ✅ Completed: $closed"
  echo " 🔄 Available: $open"
  echo " ⏸️ Blocked: $blocked"

  [ -n "$github" ] && echo ""
  [ -n "$github" ] && echo "🔗 GitHub: $github"
fi

exit 0
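The 20-cell progress bar arithmetic from the script above can be verified standalone; the counts are sample values:

```shell
# 7 of 10 tasks closed -> 70%, 14 filled cells, 6 empty cells.
closed=7; total=10
percent=$((closed * 100 / total))
filled=$((percent * 20 / 100))
empty=$((20 - filled))
printf 'Progress: ['
printf '%0.s█' $(seq 1 $filled)
printf '%0.s░' $(seq 1 $empty)
printf '] %s%%\n' "$percent"
```

Integer division floors both values, so partial cells are always rounded down rather than up.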
@ -0,0 +1,71 @@

#!/bin/bash
echo "Helping..."
echo ""
echo ""

echo "📚 Claude Code PM - Project Management System"
echo "============================================="
echo ""
echo "🎯 Quick Start Workflow"
echo "  1. /pm:prd-new <name>        - Create a new PRD"
echo "  2. /pm:prd-parse <name>      - Convert PRD to epic"
echo "  3. /pm:epic-decompose <name> - Break into tasks"
echo "  4. /pm:epic-sync <name>      - Push to GitHub"
echo "  5. /pm:epic-start <name>     - Start parallel execution"
echo ""
echo "📄 PRD Commands"
echo "  /pm:prd-new <name>   - Launch brainstorming for new product requirement"
echo "  /pm:prd-parse <name> - Convert PRD to implementation epic"
echo "  /pm:prd-list         - List all PRDs"
echo "  /pm:prd-edit <name>  - Edit existing PRD"
echo "  /pm:prd-status       - Show PRD implementation status"
echo ""
echo "📚 Epic Commands"
echo "  /pm:epic-decompose <name> - Break epic into task files"
echo "  /pm:epic-sync <name>      - Push epic and tasks to GitHub"
echo "  /pm:epic-oneshot <name>   - Decompose and sync in one command"
echo "  /pm:epic-list             - List all epics"
echo "  /pm:epic-show <name>      - Display epic and its tasks"
echo "  /pm:epic-status [name]    - Show epic progress"
echo "  /pm:epic-close <name>     - Mark epic as complete"
echo "  /pm:epic-edit <name>      - Edit epic details"
echo "  /pm:epic-refresh <name>   - Update epic progress from tasks"
echo "  /pm:epic-start <name>     - Launch parallel agent execution"
echo ""
echo "📝 Issue Commands"
echo "  /pm:issue-show <num>    - Display issue and sub-issues"
echo "  /pm:issue-status <num>  - Check issue status"
echo "  /pm:issue-start <num>   - Begin work with specialized agent"
echo "  /pm:issue-sync <num>    - Push updates to GitHub"
echo "  /pm:issue-close <num>   - Mark issue as complete"
echo "  /pm:issue-reopen <num>  - Reopen closed issue"
echo "  /pm:issue-edit <num>    - Edit issue details"
echo "  /pm:issue-analyze <num> - Analyze for parallel work streams"
echo ""
echo "🔄 Workflow Commands"
echo "  /pm:next        - Show next priority tasks"
echo "  /pm:status      - Overall project dashboard"
echo "  /pm:standup     - Daily standup report"
echo "  /pm:blocked     - Show blocked tasks"
echo "  /pm:in-progress - List work in progress"
echo ""
echo "🔗 Sync Commands"
echo "  /pm:sync           - Full bidirectional sync with GitHub"
echo "  /pm:import <issue> - Import existing GitHub issues"
echo ""
echo "🔧 Maintenance Commands"
echo "  /pm:validate       - Check system integrity"
echo "  /pm:clean          - Archive completed work"
echo "  /pm:search <query> - Search across all content"
echo ""
echo "⚙️ Setup Commands"
echo "  /pm:init - Install dependencies and configure GitHub"
echo "  /pm:help - Show this help message"
echo ""
echo "💡 Tips"
echo "  • Use /pm:next to find available work"
echo "  • Run /pm:status for quick overview"
echo "  • Epic workflow: prd-new → prd-parse → epic-decompose → epic-sync"
echo "  • View README.md for complete documentation"

exit 0
@ -0,0 +1,74 @@
#!/bin/bash
echo "Getting status..."
echo ""
echo ""

echo "🔄 In Progress Work"
echo "==================="
echo ""

# Check for active work in updates directories
found=0

if [ -d ".claude/epics" ]; then
  for updates_dir in .claude/epics/*/updates/*/; do
    [ -d "$updates_dir" ] || continue

    issue_num=$(basename "$updates_dir")
    # Quote the nested substitutions so paths with spaces don't word-split
    epic_name=$(basename "$(dirname "$(dirname "$updates_dir")")")

    if [ -f "$updates_dir/progress.md" ]; then
      completion=$(grep "^completion:" "$updates_dir/progress.md" | head -1 | sed 's/^completion: *//')
      [ -z "$completion" ] && completion="0%"

      # Get task name from the task file
      task_file=".claude/epics/$epic_name/$issue_num.md"
      if [ -f "$task_file" ]; then
        task_name=$(grep "^name:" "$task_file" | head -1 | sed 's/^name: *//')
      else
        task_name="Unknown task"
      fi

      echo "📝 Issue #$issue_num - $task_name"
      echo " Epic: $epic_name"
      echo " Progress: $completion complete"

      # Check for recent updates
      if [ -f "$updates_dir/progress.md" ]; then
        last_update=$(grep "^last_sync:" "$updates_dir/progress.md" | head -1 | sed 's/^last_sync: *//')
        [ -n "$last_update" ] && echo " Last update: $last_update"
      fi

      echo ""
      ((found++))
    fi
  done
fi

# Also check for in-progress epics
echo "📚 Active Epics:"
for epic_dir in .claude/epics/*/; do
  [ -d "$epic_dir" ] || continue
  [ -f "$epic_dir/epic.md" ] || continue

  status=$(grep "^status:" "$epic_dir/epic.md" | head -1 | sed 's/^status: *//')
  if [ "$status" = "in-progress" ] || [ "$status" = "active" ]; then
    epic_name=$(grep "^name:" "$epic_dir/epic.md" | head -1 | sed 's/^name: *//')
    progress=$(grep "^progress:" "$epic_dir/epic.md" | head -1 | sed 's/^progress: *//')
    [ -z "$epic_name" ] && epic_name=$(basename "$epic_dir")
    [ -z "$progress" ] && progress="0%"

    echo " • $epic_name - $progress complete"
  fi
done

echo ""
if [ $found -eq 0 ]; then
  echo "No active work items found."
  echo ""
  echo "💡 Start work with: /pm:next"
else
  echo "📊 Total active items: $found"
fi

exit 0
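These scripts all read task metadata the same way: `grep` the first matching frontmatter line, then strip the `key: ` prefix with `sed`. A minimal, self-contained sketch of that extraction pattern (the field names come from the greps above; the exact task-file schema is an assumption):

```shell
# Create a throwaway task file and pull a field out of it the same way
# the PM scripts do (first matching line, strip the "key: " prefix).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
---
name: Example task
status: open
depends_on: []
---
EOF
status=$(grep "^status:" "$tmp" | head -1 | sed 's/^status: *//')
echo "$status"
```

Taking only the first match (`head -1`) matters: a `status:` string appearing later in the body of the file would otherwise corrupt the value.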
@ -0,0 +1,159 @@
#!/bin/bash

echo "Initializing..."
echo ""
echo ""

echo " ██████╗ ██████╗██████╗ ███╗ ███╗"
echo "██╔════╝██╔════╝██╔══██╗████╗ ████║"
echo "██║ ██║ ██████╔╝██╔████╔██║"
echo "╚██████╗╚██████╗██║ ██║ ╚═╝ ██║"
echo " ╚═════╝ ╚═════╝╚═╝ ╚═╝ ╚═╝"

echo "┌─────────────────────────────────┐"
echo "│ Claude Code Project Management │"
echo "│ by https://x.com/aroussi │"
echo "└─────────────────────────────────┘"
echo "https://github.com/automazeio/ccpm"
echo ""
echo ""

echo "🚀 Initializing Claude Code PM System"
echo "======================================"
echo ""

# Check for required tools
echo "🔍 Checking dependencies..."

# Check gh CLI
if command -v gh &> /dev/null; then
  echo " ✅ GitHub CLI (gh) installed"
else
  echo " ❌ GitHub CLI (gh) not found"
  echo ""
  echo " Installing gh..."
  if command -v brew &> /dev/null; then
    brew install gh
  elif command -v apt-get &> /dev/null; then
    sudo apt-get update && sudo apt-get install gh
  else
    echo " Please install GitHub CLI manually: https://cli.github.com/"
    exit 1
  fi
fi

# Check gh auth status
echo ""
echo "🔐 Checking GitHub authentication..."
if gh auth status &> /dev/null; then
  echo " ✅ GitHub authenticated"
else
  echo " ⚠️ GitHub not authenticated"
  echo " Running: gh auth login"
  gh auth login
fi

# Check for gh-sub-issue extension
echo ""
echo "📦 Checking gh extensions..."
if gh extension list | grep -q "yahsan2/gh-sub-issue"; then
  echo " ✅ gh-sub-issue extension installed"
else
  echo " 📥 Installing gh-sub-issue extension..."
  gh extension install yahsan2/gh-sub-issue
fi

# Create directory structure
echo ""
echo "📁 Creating directory structure..."
mkdir -p .claude/prds
mkdir -p .claude/epics
mkdir -p .claude/rules
mkdir -p .claude/agents
mkdir -p .claude/scripts/pm
echo " ✅ Directories created"

# Copy scripts if in main repo
# Note: glob matching requires [[ ]]; the old `[ ! "$(pwd)" = *"/.claude"* ]`
# compared against the literal string and never matched.
if [ -d "scripts/pm" ] && [[ "$(pwd)" != *"/.claude"* ]]; then
  echo ""
  echo "📝 Copying PM scripts..."
  cp -r scripts/pm/* .claude/scripts/pm/
  chmod +x .claude/scripts/pm/*.sh
  echo " ✅ Scripts copied and made executable"
fi

# Check for git
echo ""
echo "🔗 Checking Git configuration..."
if git rev-parse --git-dir > /dev/null 2>&1; then
  echo " ✅ Git repository detected"

  # Check remote
  if git remote -v | grep -q origin; then
    remote_url=$(git remote get-url origin)
    echo " ✅ Remote configured: $remote_url"

    # Check if remote is the CCPM template repository
    if [[ "$remote_url" == *"automazeio/ccpm"* ]] || [[ "$remote_url" == *"automazeio/ccpm.git"* ]]; then
      echo ""
      echo " ⚠️ WARNING: Your remote origin points to the CCPM template repository!"
      echo " This means any issues you create will go to the template repo, not your project."
      echo ""
      echo " To fix this:"
      echo " 1. Fork the repository or create your own on GitHub"
      echo " 2. Update your remote:"
      echo " git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
      echo ""
    fi
  else
    echo " ⚠️ No remote configured"
    echo " Add with: git remote add origin <url>"
  fi
else
  echo " ⚠️ Not a git repository"
  echo " Initialize with: git init"
fi

# Create CLAUDE.md if it doesn't exist
if [ ! -f "CLAUDE.md" ]; then
  echo ""
  echo "📄 Creating CLAUDE.md..."
  cat > CLAUDE.md << 'EOF'
# CLAUDE.md

> Think carefully and implement the most concise solution that changes as little code as possible.

## Project-Specific Instructions

Add your project-specific instructions here.

## Testing

Always run tests before committing:
- `npm test` or equivalent for your stack

## Code Style

Follow existing patterns in the codebase.
EOF
  echo " ✅ CLAUDE.md created"
fi

# Summary
echo ""
echo "✅ Initialization Complete!"
echo "=========================="
echo ""
echo "📊 System Status:"
gh --version | head -1
echo " Extensions: $(gh extension list | wc -l) installed"
echo " Auth: $(gh auth status 2>&1 | grep -o 'Logged in to [^ ]*' || echo 'Not authenticated')"
echo ""
echo "🎯 Next Steps:"
echo " 1. Create your first PRD: /pm:prd-new <feature-name>"
echo " 2. View help: /pm:help"
echo " 3. Check status: /pm:status"
echo ""
echo "📚 Documentation: README.md"

exit 0
@ -0,0 +1,53 @@
#!/bin/bash
echo "Getting status..."
echo ""
echo ""

echo "📋 Next Available Tasks"
echo "======================="
echo ""

# Find tasks that are open and have no dependencies or whose dependencies are closed
found=0

for epic_dir in .claude/epics/*/; do
  [ -d "$epic_dir" ] || continue
  epic_name=$(basename "$epic_dir")

  for task_file in "$epic_dir"[0-9]*.md; do
    [ -f "$task_file" ] || continue

    # Check if task is open
    status=$(grep "^status:" "$task_file" | head -1 | sed 's/^status: *//')
    [ "$status" != "open" ] && [ -n "$status" ] && continue

    # Check dependencies
    deps=$(grep "^depends_on:" "$task_file" | head -1 | sed 's/^depends_on: *\[//' | sed 's/\]//')

    # If no dependencies or empty, task is available
    if [ -z "$deps" ] || [ "$deps" = "depends_on:" ]; then
      task_name=$(grep "^name:" "$task_file" | head -1 | sed 's/^name: *//')
      task_num=$(basename "$task_file" .md)
      parallel=$(grep "^parallel:" "$task_file" | head -1 | sed 's/^parallel: *//')

      echo "✅ Ready: #$task_num - $task_name"
      echo " Epic: $epic_name"
      [ "$parallel" = "true" ] && echo " 🔄 Can run in parallel"
      echo ""
      ((found++))
    fi
  done
done

if [ $found -eq 0 ]; then
  echo "No available tasks found."
  echo ""
  echo "💡 Suggestions:"
  echo " • Check blocked tasks: /pm:blocked"
  echo " • View all tasks: /pm:epic-list"
fi

echo ""
echo "📊 Summary: $found tasks ready to start"

exit 0
@ -0,0 +1,89 @@
#!/bin/bash
# Check if PRD directory exists
if [ ! -d ".claude/prds" ]; then
  echo "📁 No PRD directory found. Create your first PRD with: /pm:prd-new <feature-name>"
  exit 0
fi

# Check for PRD files
if ! ls .claude/prds/*.md >/dev/null 2>&1; then
  echo "📁 No PRDs found. Create your first PRD with: /pm:prd-new <feature-name>"
  exit 0
fi

# Initialize counters
backlog_count=0
in_progress_count=0
implemented_count=0
total_count=0

echo "Getting PRDs..."
echo ""
echo ""


echo "📋 PRD List"
echo "==========="
echo ""

# Display by status groups
echo "🔍 Backlog PRDs:"
for file in .claude/prds/*.md; do
  [ -f "$file" ] || continue
  status=$(grep "^status:" "$file" | head -1 | sed 's/^status: *//')
  if [ "$status" = "backlog" ] || [ "$status" = "draft" ] || [ -z "$status" ]; then
    name=$(grep "^name:" "$file" | head -1 | sed 's/^name: *//')
    desc=$(grep "^description:" "$file" | head -1 | sed 's/^description: *//')
    [ -z "$name" ] && name=$(basename "$file" .md)
    [ -z "$desc" ] && desc="No description"
    # echo " 📋 $name - $desc"
    echo " 📋 $file - $desc"
    ((backlog_count++))
  fi
  ((total_count++))
done
[ $backlog_count -eq 0 ] && echo " (none)"

echo ""
echo "🔄 In-Progress PRDs:"
for file in .claude/prds/*.md; do
  [ -f "$file" ] || continue
  status=$(grep "^status:" "$file" | head -1 | sed 's/^status: *//')
  if [ "$status" = "in-progress" ] || [ "$status" = "active" ]; then
    name=$(grep "^name:" "$file" | head -1 | sed 's/^name: *//')
    desc=$(grep "^description:" "$file" | head -1 | sed 's/^description: *//')
    [ -z "$name" ] && name=$(basename "$file" .md)
    [ -z "$desc" ] && desc="No description"
    # echo " 📋 $name - $desc"
    echo " 📋 $file - $desc"
    ((in_progress_count++))
  fi
done
[ $in_progress_count -eq 0 ] && echo " (none)"

echo ""
echo "✅ Implemented PRDs:"
for file in .claude/prds/*.md; do
  [ -f "$file" ] || continue
  status=$(grep "^status:" "$file" | head -1 | sed 's/^status: *//')
  if [ "$status" = "implemented" ] || [ "$status" = "completed" ] || [ "$status" = "done" ]; then
    name=$(grep "^name:" "$file" | head -1 | sed 's/^name: *//')
    desc=$(grep "^description:" "$file" | head -1 | sed 's/^description: *//')
    [ -z "$name" ] && name=$(basename "$file" .md)
    [ -z "$desc" ] && desc="No description"
    # echo " 📋 $name - $desc"
    echo " 📋 $file - $desc"
    ((implemented_count++))
  fi
done
[ $implemented_count -eq 0 ] && echo " (none)"

# Display summary
echo ""
echo "📊 PRD Summary"
echo " Total PRDs: $total_count"
echo " Backlog: $backlog_count"
echo " In-Progress: $in_progress_count"
echo " Implemented: $implemented_count"

exit 0
@ -0,0 +1,63 @@
#!/bin/bash

echo "📄 PRD Status Report"
echo "===================="
echo ""

if [ ! -d ".claude/prds" ]; then
  echo "No PRD directory found."
  exit 0
fi

total=$(ls .claude/prds/*.md 2>/dev/null | wc -l)
[ $total -eq 0 ] && echo "No PRDs found." && exit 0

# Count by status
backlog=0
in_progress=0
implemented=0

for file in .claude/prds/*.md; do
  [ -f "$file" ] || continue
  status=$(grep "^status:" "$file" | head -1 | sed 's/^status: *//')

  case "$status" in
    backlog|draft|"") ((backlog++)) ;;
    in-progress|active) ((in_progress++)) ;;
    implemented|completed|done) ((implemented++)) ;;
    *) ((backlog++)) ;;
  esac
done

echo "Getting status..."
echo ""
echo ""

# Display chart
echo "📊 Distribution:"
echo "================"

echo ""
# Draw a proportional bar, guarding the zero case: `seq 1 0` emits nothing,
# so printf would otherwise still print one stray block for a zero count.
bar() {
  local width=$(( $1 * 20 / total ))
  [ "$width" -gt 0 ] && printf '%0.s█' $(seq 1 "$width")
}
echo " Backlog: $(printf '%-3d' $backlog) [$(bar $backlog)]"
echo " In Progress: $(printf '%-3d' $in_progress) [$(bar $in_progress)]"
echo " Implemented: $(printf '%-3d' $implemented) [$(bar $implemented)]"
echo ""
echo " Total PRDs: $total"

# Recent activity
echo ""
echo "📅 Recent PRDs (last 5 modified):"
ls -t .claude/prds/*.md 2>/dev/null | head -5 | while read file; do
  name=$(grep "^name:" "$file" | head -1 | sed 's/^name: *//')
  [ -z "$name" ] && name=$(basename "$file" .md)
  echo " • $name"
done

# Suggestions
echo ""
echo "💡 Next Actions:"
[ $backlog -gt 0 ] && echo " • Parse backlog PRDs to epics: /pm:prd-parse <name>"
[ $in_progress -gt 0 ] && echo " • Check progress on active PRDs: /pm:epic-status <name>"
[ $total -eq 0 ] && echo " • Create your first PRD: /pm:prd-new <name>"

exit 0
@ -0,0 +1,71 @@
#!/bin/bash

query="$1"

if [ -z "$query" ]; then
  echo "❌ Please provide a search query"
  echo "Usage: /pm:search <query>"
  exit 1
fi

echo "Searching for '$query'..."
echo ""
echo ""

echo "🔍 Search results for: '$query'"
echo "================================"
echo ""

# Search in PRDs
if [ -d ".claude/prds" ]; then
  echo "📄 PRDs:"
  results=$(grep -l -i "$query" .claude/prds/*.md 2>/dev/null)
  if [ -n "$results" ]; then
    for file in $results; do
      name=$(basename "$file" .md)
      matches=$(grep -c -i "$query" "$file")
      echo " • $name ($matches matches)"
    done
  else
    echo " No matches"
  fi
  echo ""
fi

# Search in Epics
if [ -d ".claude/epics" ]; then
  echo "📚 Epics:"
  results=$(find .claude/epics -name "epic.md" -exec grep -l -i "$query" {} \; 2>/dev/null)
  if [ -n "$results" ]; then
    for file in $results; do
      epic_name=$(basename "$(dirname "$file")")
      matches=$(grep -c -i "$query" "$file")
      echo " • $epic_name ($matches matches)"
    done
  else
    echo " No matches"
  fi
  echo ""
fi

# Search in Tasks
if [ -d ".claude/epics" ]; then
  echo "📝 Tasks:"
  results=$(find .claude/epics -name "[0-9]*.md" -exec grep -l -i "$query" {} \; 2>/dev/null | head -10)
  if [ -n "$results" ]; then
    for file in $results; do
      epic_name=$(basename "$(dirname "$file")")
      task_num=$(basename "$file" .md)
      echo " • Task #$task_num in $epic_name"
    done
  else
    echo " No matches"
  fi
fi

# Summary
total=$(find .claude -name "*.md" -exec grep -l -i "$query" {} \; 2>/dev/null | wc -l)
echo ""
echo "📊 Total files with matches: $total"

exit 0
@ -0,0 +1,77 @@
#!/bin/bash

echo "📅 Daily Standup - $(date '+%Y-%m-%d')"
echo "================================"
echo ""

today=$(date '+%Y-%m-%d')

echo "Getting status..."
echo ""
echo ""

echo "📝 Today's Activity:"
echo "===================="
echo ""

# Find files modified today
recent_files=$(find .claude -name "*.md" -mtime -1 2>/dev/null)

if [ -n "$recent_files" ]; then
  # Count by type. Note: grep -c prints "0" itself when nothing matches,
  # so no "|| echo 0" fallback is needed; adding one would append a second
  # line to the value and break the numeric tests below.
  prd_count=$(echo "$recent_files" | grep -c "/prds/")
  epic_count=$(echo "$recent_files" | grep -c "/epic.md")
  task_count=$(echo "$recent_files" | grep -c "/[0-9]*.md")
  update_count=$(echo "$recent_files" | grep -c "/updates/")

  [ $prd_count -gt 0 ] && echo " • Modified $prd_count PRD(s)"
  [ $epic_count -gt 0 ] && echo " • Updated $epic_count epic(s)"
  [ $task_count -gt 0 ] && echo " • Worked on $task_count task(s)"
  [ $update_count -gt 0 ] && echo " • Posted $update_count progress update(s)"
else
  echo " No activity recorded today"
fi

echo ""
echo "🔄 Currently In Progress:"
# Show active work items
for updates_dir in .claude/epics/*/updates/*/; do
  [ -d "$updates_dir" ] || continue
  if [ -f "$updates_dir/progress.md" ]; then
    issue_num=$(basename "$updates_dir")
    epic_name=$(basename "$(dirname "$(dirname "$updates_dir")")")
    completion=$(grep "^completion:" "$updates_dir/progress.md" | head -1 | sed 's/^completion: *//')
    echo " • Issue #$issue_num ($epic_name) - ${completion:-0%} complete"
  fi
done

echo ""
echo "⏭️ Next Available Tasks:"
# Show top 3 available tasks
count=0
for epic_dir in .claude/epics/*/; do
  [ -d "$epic_dir" ] || continue
  for task_file in "$epic_dir"[0-9]*.md; do
    [ -f "$task_file" ] || continue
    status=$(grep "^status:" "$task_file" | head -1 | sed 's/^status: *//')
    [ "$status" != "open" ] && [ -n "$status" ] && continue

    deps=$(grep "^depends_on:" "$task_file" | head -1 | sed 's/^depends_on: *\[//' | sed 's/\]//')
    if [ -z "$deps" ] || [ "$deps" = "depends_on:" ]; then
      task_name=$(grep "^name:" "$task_file" | head -1 | sed 's/^name: *//')
      task_num=$(basename "$task_file" .md)
      echo " • #$task_num - $task_name"
      ((count++))
      [ $count -ge 3 ] && break 2
    fi
  done
done

echo ""
echo "📊 Quick Stats:"
total_tasks=$(find .claude/epics -name "[0-9]*.md" 2>/dev/null | wc -l)
open_tasks=$(find .claude/epics -name "[0-9]*.md" -exec grep -l "^status: *open" {} \; 2>/dev/null | wc -l)
closed_tasks=$(find .claude/epics -name "[0-9]*.md" -exec grep -l "^status: *closed" {} \; 2>/dev/null | wc -l)
echo " Tasks: $open_tasks open, $closed_tasks closed, $total_tasks total"

exit 0
@ -0,0 +1,42 @@
#!/bin/bash

echo "Getting status..."
echo ""
echo ""


echo "📊 Project Status"
echo "================"
echo ""

echo "📄 PRDs:"
if [ -d ".claude/prds" ]; then
  total=$(ls .claude/prds/*.md 2>/dev/null | wc -l)
  echo " Total: $total"
else
  echo " No PRDs found"
fi

echo ""
echo "📚 Epics:"
if [ -d ".claude/epics" ]; then
  total=$(ls -d .claude/epics/*/ 2>/dev/null | wc -l)
  echo " Total: $total"
else
  echo " No epics found"
fi

echo ""
echo "📝 Tasks:"
if [ -d ".claude/epics" ]; then
  total=$(find .claude/epics -name "[0-9]*.md" 2>/dev/null | wc -l)
  open=$(find .claude/epics -name "[0-9]*.md" -exec grep -l "^status: *open" {} \; 2>/dev/null | wc -l)
  closed=$(find .claude/epics -name "[0-9]*.md" -exec grep -l "^status: *closed" {} \; 2>/dev/null | wc -l)
  echo " Open: $open"
  echo " Closed: $closed"
  echo " Total: $total"
else
  echo " No tasks found"
fi

exit 0
@ -0,0 +1,88 @@
#!/bin/bash

echo "Validating PM System..."
echo ""
echo ""

echo "🔍 Validating PM System"
echo "======================="
echo ""

errors=0
warnings=0

# Check directory structure
echo "📁 Directory Structure:"
[ -d ".claude" ] && echo " ✅ .claude directory exists" || { echo " ❌ .claude directory missing"; ((errors++)); }
[ -d ".claude/prds" ] && echo " ✅ PRDs directory exists" || echo " ⚠️ PRDs directory missing"
[ -d ".claude/epics" ] && echo " ✅ Epics directory exists" || echo " ⚠️ Epics directory missing"
[ -d ".claude/rules" ] && echo " ✅ Rules directory exists" || echo " ⚠️ Rules directory missing"
echo ""

# Check for orphaned files
echo "🗂️ Data Integrity:"

# Check epics have epic.md files
for epic_dir in .claude/epics/*/; do
  [ -d "$epic_dir" ] || continue
  if [ ! -f "$epic_dir/epic.md" ]; then
    echo " ⚠️ Missing epic.md in $(basename "$epic_dir")"
    ((warnings++))
  fi
done

# Check for tasks without epics
orphaned=$(find .claude -name "[0-9]*.md" -not -path ".claude/epics/*/*" 2>/dev/null | wc -l)
[ $orphaned -gt 0 ] && echo " ⚠️ Found $orphaned orphaned task files" && ((warnings++))

# Check for broken references
echo ""
echo "🔗 Reference Check:"

for task_file in .claude/epics/*/[0-9]*.md; do
  [ -f "$task_file" ] || continue

  deps=$(grep "^depends_on:" "$task_file" | head -1 | sed 's/^depends_on: *\[//' | sed 's/\]//' | sed 's/,/ /g')
  if [ -n "$deps" ] && [ "$deps" != "depends_on:" ]; then
    epic_dir=$(dirname "$task_file")
    for dep in $deps; do
      if [ ! -f "$epic_dir/$dep.md" ]; then
        echo " ⚠️ Task $(basename "$task_file" .md) references missing task: $dep"
        ((warnings++))
      fi
    done
  fi
done

[ $warnings -eq 0 ] && [ $errors -eq 0 ] && echo " ✅ All references valid"

# Check frontmatter
echo ""
echo "📝 Frontmatter Validation:"
invalid=0

# Group the -path tests so "-name *.md" applies to both branches; without
# the parentheses, find parses this as (name AND epics-path) OR prds-path.
for file in $(find .claude -name "*.md" \( -path "*/epics/*" -o -path "*/prds/*" \) 2>/dev/null); do
  if ! grep -q "^---" "$file"; then
    echo " ⚠️ Missing frontmatter: $(basename "$file")"
    ((invalid++))
  fi
done

[ $invalid -eq 0 ] && echo " ✅ All files have frontmatter"

# Summary
echo ""
echo "📊 Validation Summary:"
echo " Errors: $errors"
echo " Warnings: $warnings"
echo " Invalid files: $invalid"

if [ $errors -eq 0 ] && [ $warnings -eq 0 ] && [ $invalid -eq 0 ]; then
  echo ""
  echo "✅ System is healthy!"
else
  echo ""
  echo "💡 Run /pm:clean to fix some issues automatically"
fi

exit 0
@ -0,0 +1,47 @@
#!/bin/bash

# Script to run tests with automatic log redirection
# Usage: ./claude/scripts/test-and-log.sh path/to/test.py [optional_log_name.log]

if [ $# -eq 0 ]; then
  echo "Usage: $0 <test_file_path> [log_filename]"
  echo "Example: $0 tests/e2e/my_test_name.py"
  echo "Example: $0 tests/e2e/my_test_name.py my_test_name_v2.log"
  exit 1
fi

TEST_PATH="$1"

# Create logs directory if it doesn't exist
mkdir -p tests/logs

# Determine log file name
if [ $# -ge 2 ]; then
  # Use provided log filename (second parameter)
  LOG_NAME="$2"
  # Ensure it ends with .log
  if [[ ! "$LOG_NAME" == *.log ]]; then
    LOG_NAME="${LOG_NAME}.log"
  fi
  LOG_FILE="tests/logs/${LOG_NAME}"
else
  # Extract the test filename without extension for the log name
  TEST_NAME=$(basename "$TEST_PATH" .py)
  LOG_FILE="tests/logs/${TEST_NAME}.log"
fi

# Run the test with output redirection
echo "Running test: $TEST_PATH"
echo "Logging to: $LOG_FILE"
python "$TEST_PATH" > "$LOG_FILE" 2>&1

# Check exit code
EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
  echo "✅ Test completed successfully. Log saved to $LOG_FILE"
else
  echo "❌ Test failed with exit code $EXIT_CODE. Check $LOG_FILE for details"
fi

exit $EXIT_CODE
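The core of `test-and-log.sh` is the `> "$LOG_FILE" 2>&1` redirection, which sends both stdout and stderr to the log while leaving the command's exit code intact for later reporting. A minimal sketch of the same pattern, using `sh -c` as a stand-in for a real test run:

```shell
# Run a command, capture stdout and stderr together in one log file,
# and keep its exit code for the success/failure report.
mkdir -p tests/logs
sh -c 'echo ok; echo err >&2' > tests/logs/demo.log 2>&1
code=$?
echo "exit code: $code"
```

Order matters here: `2>&1 >log` would redirect stderr to the old stdout (the terminal) before the log file is attached, which is why the script puts the file redirection first.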
@ -0,0 +1,64 @@
# Environment Configuration Template
# Copy this file to .env and fill in your actual values

# SiliconFlow DeepSeek V3.1 Configuration
SILICONFLOW_API_KEY=YOUR_SILICONFLOW_API_KEY_HERE
SILICONFLOW_MODEL=deepseek-ai/DeepSeek-V3.1
SILICONFLOW_BASE_URL=https://api.siliconflow.cn/v1

# CBMC Configuration
CBMC_PATH=cbmc
CBMC_VERSION_MIN=5.0
CBMC_DEFAULT_UNWINDING=10
CBMC_DEFAULT_TIMEOUT=300

# LLM Generation Configuration
LLM_TIMEOUT=120
LLM_MAX_RETRIES=3
LLM_RETRY_DELAY=1.0
LLM_RATE_LIMIT_DELAY=5.0
LLM_MAX_CONCURRENT_REQUESTS=3
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=4096
LLM_TOP_P=0.95
LLM_STREAMING_ENABLED=True

# Specification Storage Configuration
SPEC_STORAGE_PATH=specifications
SPEC_ENABLE_VERSIONING=True
SPEC_MAX_VERSIONS=10
SPEC_FORMAT=json
SPEC_CLEANUP_INTERVAL=86400

# Specification Validation Configuration
SPEC_VALIDATION_ENABLED=True
SPEC_QUALITY_SYNTAX_THRESHOLD=0.9
SPEC_QUALITY_LOGIC_THRESHOLD=0.8
SPEC_QUALITY_COMPLETENESS_THRESHOLD=0.7
SPEC_QUALITY_OVERALL_THRESHOLD=0.75
SPEC_AUTO_REFINE=True
SPEC_MAX_REFINEMENTS=3

# Web Server Configuration
FLASK_ENV=development
FLASK_DEBUG=True
FLASK_HOST=0.0.0.0
FLASK_PORT=8080

# Logging Configuration
LOG_LEVEL=INFO
LOG_DIR=logs
LOG_MAX_SIZE=10MB
LOG_BACKUP_COUNT=5

# Project Configuration
PROJECT_NAME=Formal Spec Generator
PROJECT_VERSION=0.1.0
MAX_CONCURRENT_VERIFICATIONS=3
CACHE_ENABLED=True
CACHE_TTL=3600

# Development Settings
DEV_MODE=True
TEST_MODE=False
SHOW_DEBUG_INFO=True
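Most of these `KEY=value` lines are also valid POSIX shell assignments, so one common way to load a copied `.env` into the current shell is `set -a` before sourcing it, which exports every variable assigned while it is active. This is a sketch under that assumption; how the application itself reads `.env` (e.g. via a dotenv library) is not shown here, and values containing spaces would need quoting before shell-sourcing:

```shell
# Write a tiny .env-style file and source it with auto-export enabled,
# so the variables become visible to child processes.
cat > /tmp/demo.env <<'EOF'
CBMC_DEFAULT_UNWINDING=10
LOG_LEVEL=INFO
EOF
set -a
. /tmp/demo.env
set +a
echo "$CBMC_DEFAULT_UNWINDING"
```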
@ -0,0 +1,357 @@
name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
  schedule:
    # Run daily at 2 AM
    - cron: '0 2 * * *'

env:
  PYTHON_VERSION: '3.11'
  NODE_VERSION: '18'

jobs:
  code-quality:
    name: Code Quality
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run flake8
        run: |
          flake8 src/ tests/ --max-line-length=120 --exclude=__pycache__

      - name: Run black formatting check
        run: |
          black --check --diff src/ tests/

      - name: Run isort import check
        run: |
          isort --check-only --diff src/ tests/

      - name: Run bandit security scan
        run: |
          bandit -r src/ -f json -o bandit-report.json || true

      - name: Upload security scan results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: security-scan-results
          path: bandit-report.json

  unit-tests:
    name: Unit Tests
    runs-on: ubuntu-latest
    needs: code-quality

    strategy:
      matrix:
        test-type: [unit, integration, performance]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Install CBMC
        run: |
          sudo apt-get update
          sudo apt-get install -y cbmc

      - name: Run tests
        run: |
          chmod +x scripts/run_tests.sh
          ./scripts/run_tests.sh --verbose --coverage --junit --html ${{ matrix.test-type }}

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: test-results-${{ matrix.test-type }}
          path: |
            test_reports/
            htmlcov/
            junit-*.xml
            coverage.xml

  regression-tests:
    name: Regression Tests
    runs-on: ubuntu-latest
    needs: unit-tests

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Setup FreeRTOS environment
        run: |
          chmod +x scripts/freertos-setup.sh
          ./scripts/freertos-setup.sh --dry-run || true

      - name: Run regression tests
        run: |
          chmod +x scripts/run_tests.sh
          ./scripts/run_tests.sh --verbose --junit regression

      - name: Upload regression results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: regression-results
          path: test_reports/

  performance-benchmarks:
    name: Performance Benchmarks
    runs-on: ubuntu-latest
    needs: unit-tests

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
          pip install matplotlib psutil

      - name: Run performance benchmarks
        run: |
          chmod +x tools/run_benchmarks.py
          python tools/run_benchmarks.py --output-dir benchmark_results --iterations 3

      - name: Upload benchmark results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: benchmark-results
          path: benchmark_results/

  documentation-build:
    name: Documentation Build
    runs-on: ubuntu-latest
    needs: code-quality

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt
          pip install mkdocs mkdocs-material pymdown-extensions

      - name: Generate API documentation
|
||||
run: |
|
||||
chmod +x scripts/generate_api_docs.py
|
||||
python scripts/generate_api_docs.py --output-dir docs/api
|
||||
|
||||
- name: Build documentation
|
||||
run: |
|
||||
if [ -f "mkdocs.yml" ]; then
|
||||
mkdocs build
|
||||
fi
|
||||
|
||||
- name: Upload documentation
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: documentation
|
||||
path: |
|
||||
docs/api/
|
||||
site/
|
||||
|
||||
security-scan:
|
||||
name: Security Scan
|
||||
runs-on: ubuntu-latest
|
||||
needs: code-quality
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run Trivy vulnerability scanner
|
||||
uses: aquasecurity/trivy-action@master
|
||||
with:
|
||||
scan-type: 'fs'
|
||||
scan-ref: '.'
|
||||
format: 'sarif'
|
||||
output: 'trivy-results.sarif'
|
||||
|
||||
- name: Upload Trivy scan results
|
||||
uses: github/codeql-action/upload-sarif@v2
|
||||
with:
|
||||
sarif_file: 'trivy-results.sarif'
|
||||
|
||||
build-and-test-docker:
|
||||
name: Build and Test Docker
|
||||
runs-on: ubuntu-latest
|
||||
needs: [unit-tests, regression-tests]
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
|
||||
- name: Build Docker image
|
||||
run: |
|
||||
if [ -f "Dockerfile" ]; then
|
||||
docker build -t codedetect:latest .
|
||||
fi
|
||||
|
||||
- name: Test Docker image
|
||||
run: |
|
||||
if [ -f "Dockerfile" ]; then
|
||||
docker run --rm codedetect:latest python -c "import sys; print('Docker image test successful')"
|
||||
fi
|
||||
|
||||
coverage-report:
|
||||
name: Coverage Report
|
||||
runs-on: ubuntu-latest
|
||||
needs: [unit-tests, regression-tests]
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Download all artifacts
|
||||
uses: actions/download-artifact@v3
|
||||
|
||||
- name: Generate coverage report
|
||||
run: |
|
||||
if [ -f "coverage.xml" ]; then
|
||||
pip install coverage
|
||||
coverage xml
|
||||
fi
|
||||
|
||||
- name: Upload coverage to Codecov
|
||||
uses: codecov/codecov-action@v3
|
||||
with:
|
||||
file: ./coverage.xml
|
||||
flags: unittests
|
||||
name: codecov-umbrella
|
||||
|
||||
deployment-staging:
|
||||
name: Deploy to Staging
|
||||
runs-on: ubuntu-latest
|
||||
needs: [unit-tests, regression-tests, documentation-build, security-scan]
|
||||
if: github.ref == 'refs/heads/develop'
|
||||
environment: staging
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Deploy to staging
|
||||
run: |
|
||||
echo "Deploying to staging environment..."
|
||||
# 这里添加实际的部署步骤
|
||||
|
||||
- name: Run smoke tests
|
||||
run: |
|
||||
echo "Running smoke tests..."
|
||||
# 这里添加冒烟测试步骤
|
||||
|
||||
deployment-production:
|
||||
name: Deploy to Production
|
||||
runs-on: ubuntu-latest
|
||||
needs: [unit-tests, regression-tests, documentation-build, security-scan]
|
||||
if: github.ref == 'refs/heads/main'
|
||||
environment: production
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Deploy to production
|
||||
run: |
|
||||
echo "Deploying to production environment..."
|
||||
# 这里添加实际的生产部署步骤
|
||||
|
||||
- name: Health check
|
||||
run: |
|
||||
echo "Running health checks..."
|
||||
# 这里添加健康检查步骤
|
||||
|
||||
notify-results:
|
||||
name: Notify Results
|
||||
runs-on: ubuntu-latest
|
||||
needs: [unit-tests, regression-tests, performance-benchmarks]
|
||||
if: always()
|
||||
|
||||
steps:
|
||||
- name: Download all artifacts
|
||||
uses: actions/download-artifact@v3
|
||||
|
||||
- name: Generate summary report
|
||||
run: |
|
||||
echo "## CI/CD Pipeline Summary" >> $GITHUB_STEP_SUMMARY
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
|
||||
if [ -f "test-results-unit/junit-unit_tests.xml" ]; then
|
||||
echo "### Unit Tests" >> $GITHUB_STEP_SUMMARY
|
||||
echo "✅ Completed successfully" >> $GITHUB_STEP_SUMMARY
|
||||
fi
|
||||
|
||||
if [ -f "regression-results/junit-regression_tests.xml" ]; then
|
||||
echo "### Regression Tests" >> $GITHUB_STEP_SUMMARY
|
||||
echo "✅ Completed successfully" >> $GITHUB_STEP_SUMMARY
|
||||
fi
|
||||
|
||||
if [ -f "benchmark-results/complete_benchmark_suite.json" ]; then
|
||||
echo "### Performance Benchmarks" >> $GITHUB_STEP_SUMMARY
|
||||
echo "✅ Completed successfully" >> $GITHUB_STEP_SUMMARY
|
||||
fi
|
||||
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
echo "Pipeline completed at: $(date)" >> $GITHUB_STEP_SUMMARY
|
||||
@@ -0,0 +1,361 @@
name: Comprehensive Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
  schedule:
    - cron: '0 2 * * *'  # Run daily at 2:00 AM

jobs:
  unit-tests:
    name: Unit Tests
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        python-version: ['3.9', '3.10', '3.11']  # quoted so YAML does not read 3.10 as 3.1
      fail-fast: false

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run unit tests
        run: |
          python -m pytest tests/unit/ -v --tb=short --junitxml=test-results/unit.xml
        env:
          PYTHONPATH: ${{ github.workspace }}

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: unit-test-results-${{ matrix.os }}-${{ matrix.python-version }}
          path: test-results/unit.xml

  integration-tests:
    name: Integration Tests
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        python-version: ['3.9', '3.10']
      fail-fast: false

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install system dependencies
        run: |
          if [ "$RUNNER_OS" == "Linux" ]; then
            sudo apt-get update
            sudo apt-get install -y cbmc build-essential
          elif [ "$RUNNER_OS" == "macOS" ]; then
            brew install cbmc
          fi

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Setup FreeRTOS
        run: |
          ./scripts/setup-freertos-example.sh
        continue-on-error: true

      - name: Run integration tests
        run: |
          python -m pytest tests/integration/ -v --tb=short --junitxml=test-results/integration.xml
        env:
          PYTHONPATH: ${{ github.workspace }}
          FREERTOS_PATH: /opt/freertos

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: integration-test-results-${{ matrix.os }}-${{ matrix.python-version }}
          path: test-results/integration.xml

  regression-tests:
    name: Regression Tests
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.9', '3.10']
      fail-fast: false

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run regression tests
        run: |
          python -m pytest tests/regression/ -v --tb=short --junitxml=test-results/regression.xml
        env:
          PYTHONPATH: ${{ github.workspace }}

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: regression-test-results-${{ matrix.python-version }}
          path: test-results/regression.xml

  performance-tests:
    name: Performance Tests
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule' || contains(github.event.head_commit.message, '[performance]')

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run performance tests
        run: |
          python -m pytest tests/performance/ -v --tb=short --junitxml=test-results/performance.xml -m "not slow"
        env:
          PYTHONPATH: ${{ github.workspace }}
        timeout-minutes: 30

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: performance-test-results
          path: test-results/performance.xml

  code-quality:
    name: Code Quality
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run flake8
        run: |
          flake8 src/ tests/ --count --select=E9,F63,F7,F82 --show-source --statistics
          flake8 src/ tests/ --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

      - name: Run black check
        run: |
          black --check --diff src/ tests/

      - name: Run isort check
        run: |
          isort --check-only --diff src/ tests/

      - name: Run mypy
        run: |
          mypy src/ --ignore-missing-imports

      - name: Run bandit security check
        run: |
          bandit -r src/ -f json -o bandit-report.json
        continue-on-error: true

      - name: Upload security report
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: security-report
          path: bandit-report.json

  coverage:
    name: Coverage Report
    runs-on: ubuntu-latest
    needs: [unit-tests, integration-tests, regression-tests]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Download all test results
        uses: actions/download-artifact@v3
        with:
          path: test-results/

      - name: Run tests with coverage
        run: |
          python -m pytest tests/ --cov=src/ --cov-report=xml --cov-report=html --cov-report=term-missing
        env:
          PYTHONPATH: ${{ github.workspace }}

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella
          fail_ci_if_error: false

      - name: Upload coverage report
        uses: actions/upload-artifact@v3
        with:
          name: coverage-report
          path: htmlcov/

  benchmark:
    name: Benchmark
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule' || contains(github.event.head_commit.message, '[benchmark]')

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pip packages
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run benchmarks
        run: |
          python tests/tools/benchmark_runner.py
        env:
          PYTHONPATH: ${{ github.workspace }}

      - name: Upload benchmark results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: benchmark_results/

  test-summary:
    name: Test Summary
    runs-on: ubuntu-latest
    needs: [unit-tests, integration-tests, regression-tests, performance-tests, code-quality, coverage]
    if: always()

    steps:
      - uses: actions/checkout@v4

      - name: Download all test results
        uses: actions/download-artifact@v3
        with:
          path: all-test-results/

      - name: Generate test summary
        run: |
          ./scripts/run-comprehensive-tests.sh -t unit,integration,regression
        continue-on-error: true

      - name: Upload test summary
        uses: actions/upload-artifact@v3
        with:
          name: test-summary
          path: test_results/
@@ -0,0 +1,536 @@
name: Quality Gates

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

env:
  PYTHON_VERSION: '3.11'

jobs:
  test-coverage:
    name: Test Coverage
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
          pip install coverage pytest-cov

      - name: Run tests with coverage
        run: |
          chmod +x scripts/run_tests.sh
          ./scripts/run_tests.sh --verbose --coverage unit integration regression

      - name: Generate coverage report
        run: |
          coverage xml
          coverage html

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella

      - name: Check coverage thresholds
        run: |
          TOTAL_COVERAGE=$(coverage report --show-missing | grep TOTAL | awk '{print $4}' | sed 's/%//')
          echo "Total coverage: ${TOTAL_COVERAGE}%"

          if [ "$TOTAL_COVERAGE" -lt 80 ]; then
            echo "❌ Coverage below 80% threshold"
            exit 1
          fi

          echo "✅ Coverage meets 80% threshold"

      - name: Upload coverage report
        uses: actions/upload-artifact@v3
        with:
          name: coverage-report
          path: |
            coverage.xml
            htmlcov/

  performance-benchmarks:
    name: Performance Benchmarks
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
          pip install matplotlib psutil

      - name: Run performance benchmarks
        run: |
          chmod +x tools/run_benchmarks.py
          python tools/run_benchmarks.py --output-dir benchmark_results --iterations 5

      - name: Compare with baseline
        run: |
          # Download baseline results (if available)
          curl -s https://raw.githubusercontent.com/codedetect/codedetect/main/benchmark_baseline.json -o baseline.json || true

          if [ -f "baseline.json" ] && [ -f "benchmark_results/complete_benchmark_suite.json" ]; then
            echo "📊 Comparing with baseline performance..."
            python3 << 'EOF'
          import json
          import sys

          # Load baseline and current results
          with open('baseline.json', 'r') as f:
              baseline = json.load(f)

          with open('benchmark_results/complete_benchmark_suite.json', 'r') as f:
              current = json.load(f)

          # Define performance thresholds
          THRESHOLDS = {
              'parsing_time': 1.2,        # 20% increase allowed
              'verification_time': 1.15,  # 15% increase allowed
              'mutation_time': 1.1,       # 10% increase allowed
              'memory_usage': 1.25        # 25% increase allowed
          }

          # Compare results
          regressions = []
          for result in current['results']:
              if result['category'] in ['parsing', 'verification', 'mutation']:
                  metric_name = f"{result['category']}_time"
                  if metric_name in THRESHOLDS:
                      # Find corresponding baseline result
                      baseline_result = None
                      for br in baseline['results']:
                          if br['name'] == result['name']:
                              baseline_result = br
                              break

                      if baseline_result:
                          baseline_value = baseline_result['value']
                          current_value = result['value']
                          threshold = THRESHOLDS[metric_name]

                          if current_value > baseline_value * threshold:
                              regressions.append({
                                  'metric': metric_name,
                                  'baseline': baseline_value,
                                  'current': current_value,
                                  'threshold': threshold,
                                  'degradation': (current_value - baseline_value) / baseline_value * 100
                              })

          if regressions:
              print("❌ Performance regressions detected:")
              for regression in regressions:
                  print(f"  {regression['metric']}: {regression['degradation']:.1f}% degradation")
                  print(f"    Baseline: {regression['baseline']:.4f}")
                  print(f"    Current: {regression['current']:.4f}")
                  print(f"    Threshold: {regression['threshold'] * 100:.0f}% increase allowed")
              sys.exit(1)
          else:
              print("✅ No performance regressions detected")
          EOF
          else
            echo "⚠️ No baseline available for comparison"
          fi

      - name: Upload benchmark results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: benchmark_results/

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install bandit safety

      - name: Run bandit security scan
        run: |
          # bandit exits non-zero when issues are found; ignore that so the
          # threshold check below decides pass/fail
          bandit -r src/ -f json -o bandit-report.json || true
          python3 << 'EOF'
          import json
          import sys

          with open('bandit-report.json', 'r') as f:
              results = json.load(f)

          # Define severity thresholds
          HIGH_SEVERITY_THRESHOLD = 0
          MEDIUM_SEVERITY_THRESHOLD = 2
          LOW_SEVERITY_THRESHOLD = 5

          high_severity = [r for r in results['results'] if r['issue_severity'] == 'HIGH']
          medium_severity = [r for r in results['results'] if r['issue_severity'] == 'MEDIUM']
          low_severity = [r for r in results['results'] if r['issue_severity'] == 'LOW']

          print(f"🔍 Security scan results:")
          print(f"  High severity: {len(high_severity)}")
          print(f"  Medium severity: {len(medium_severity)}")
          print(f"  Low severity: {len(low_severity)}")

          if len(high_severity) > HIGH_SEVERITY_THRESHOLD:
              print(f"❌ High severity issues exceed threshold ({HIGH_SEVERITY_THRESHOLD})")
              for issue in high_severity:
                  print(f"  - {issue['test_name']}: {issue['issue_text']}")
              sys.exit(1)

          if len(medium_severity) > MEDIUM_SEVERITY_THRESHOLD:
              print(f"⚠️ Medium severity issues exceed threshold ({MEDIUM_SEVERITY_THRESHOLD})")
              # Don't exit, just warn

          if len(low_severity) > LOW_SEVERITY_THRESHOLD:
              print(f"⚠️ Low severity issues exceed threshold ({LOW_SEVERITY_THRESHOLD})")
              # Don't exit, just warn

          print("✅ Security scan passed")
          EOF

      - name: Check dependencies for known vulnerabilities
        run: |
          safety check --json --output safety-report.json || true
          python3 << 'EOF'
          import json
          import sys

          try:
              with open('safety-report.json', 'r') as f:
                  results = json.load(f)
          except FileNotFoundError:
              print("⚠️ No safety report generated")
              sys.exit(0)

          if results:
              print(f"🔍 Found {len(results)} known vulnerabilities in dependencies:")
              for vuln in results:
                  print(f"  - {vuln['id']}: {vuln['advisory']}")
                  print(f"    Package: {vuln['package']}")
                  print(f"    Version: {vuln['installed_version']}")
                  print(f"    Fixed in: {vuln['fixed_version']}")

              # Allow only low severity vulnerabilities
              critical_high_vulns = [v for v in results if v['severity'] in ['CRITICAL', 'HIGH']]
              if critical_high_vulns:
                  print(f"❌ Critical/High severity vulnerabilities found: {len(critical_high_vulns)}")
                  sys.exit(1)
              else:
                  print("✅ No critical/high severity vulnerabilities found")
          else:
              print("✅ No known vulnerabilities found")
          EOF

      - name: Upload security reports
        uses: actions/upload-artifact@v3
        with:
          name: security-reports
          path: |
            bandit-report.json
            safety-report.json

  code-quality:
    name: Code Quality
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Run flake8
        run: |
          flake8 src/ tests/ --max-line-length=120 --exclude=__pycache__ --statistics

      - name: Check code formatting
        run: |
          black --check --diff src/ tests/ || {
            echo "❌ Code formatting issues found"
            echo "Run 'black src/ tests/' to fix formatting"
            exit 1
          }

      - name: Check import sorting
        run: |
          isort --check-only --diff src/ tests/ || {
            echo "❌ Import sorting issues found"
            echo "Run 'isort src/ tests/' to fix imports"
            exit 1
          }

      - name: Check for TODO comments
        run: |
          # Count TODO comments (excluding test files)
          TODO_COUNT=$(grep -r "TODO\|FIXME" src/ --include="*.py" | grep -v test | wc -l)
          echo "Found $TODO_COUNT TODO/FIXME comments"

          if [ "$TODO_COUNT" -gt 10 ]; then
            echo "⚠️ High number of TODO comments: $TODO_COUNT"
            # Don't fail, just warn
          fi

      - name: Check code complexity
        run: |
          pip install radon
          radon cc src/ -a -nb --show-complexity
          python3 << 'EOF'
          import subprocess
          import sys

          # Run radon with -s so each block line ends with its complexity, e.g. "(12)"
          result = subprocess.run(['radon', 'cc', 'src/', '-a', '-s', '-nb'],
                                  capture_output=True, text=True)

          # Parse results
          high_complexity = []
          for line in result.stdout.strip().split('\n'):
              line = line.strip()
              # Block lines look like: "M 10:4 MyClass.method - B (12)"
              if line.endswith(')') and '(' in line:
                  parts = line.split()
                  try:
                      complexity = int(parts[-1].strip('()'))
                  except ValueError:
                      continue
                  if complexity > 10:  # Cyclomatic complexity threshold
                      name = parts[2] if len(parts) > 2 else parts[0]
                      high_complexity.append((name, complexity))

          if high_complexity:
              print("⚠️ High complexity functions found:")
              for func_name, complexity in high_complexity:
                  print(f"  {func_name}: {complexity}")

              # Allow up to 5 high complexity functions
              if len(high_complexity) > 5:
                  print("❌ Too many high complexity functions")
                  sys.exit(1)

          print("✅ Code complexity check passed")
          EOF

integration-test-coverage:
|
||||
name: Integration Test Coverage
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- name: Install dependencies
|
||||
run: |
|
||||
python -m pip install --upgrade pip
|
||||
pip install -r requirements.txt
|
||||
pip install -r requirements-dev.txt
|
||||
pip install coverage
|
||||
|
||||
- name: Run integration tests with coverage
|
||||
run: |
|
||||
chmod +x scripts/run_tests.sh
|
||||
./scripts/run_tests.sh --verbose --coverage integration
|
||||
|
||||
- name: Check integration test coverage
|
||||
run: |
|
||||
python3 << 'EOF'
|
||||
import json
|
||||
import sys
|
||||
|
||||
# Parse coverage report
|
||||
with open('coverage.xml', 'r') as f:
|
||||
coverage_data = f.read()
|
||||
|
||||
# Extract line coverage (simplified parsing)
|
||||
import re
|
||||
line_rate_match = re.search(r'line-rate="([^"]+)"', coverage_data)
|
||||
if line_rate_match:
|
||||
line_rate = float(line_rate_match.group(1))
|
||||
coverage_percentage = line_rate * 100
|
||||
|
||||
print(f"Integration test coverage: {coverage_percentage:.1f}%")
|
||||
|
||||
if coverage_percentage < 70:
|
||||
print("❌ Integration test coverage below 70% threshold")
|
||||
sys.exit(1)
|
||||
|
||||
print("✅ Integration test coverage meets threshold")
|
||||
else:
|
||||
print("⚠️ Could not parse coverage report")
|
||||
EOF
|
||||
|
||||
documentation-coverage:
|
||||
name: Documentation Coverage
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- name: Install dependencies
|
||||
run: |
|
||||
python -m pip install --upgrade pip
|
||||
pip install -r requirements-dev.txt
|
||||
pip install pydoc-markdown
|
||||
|
||||
- name: Generate API documentation
|
||||
run: |
|
||||
chmod +x scripts/generate_api_docs.py
|
||||
python scripts/generate_api_docs.py --output-dir docs/api
|
||||
|
||||
- name: Check documentation coverage
|
||||
run: |
|
||||
python3 << 'EOF'
|
||||
import os
|
||||
import ast
|
||||
sys
|
||||
from pathlib import Path
|
||||
|
||||
def check_documentation_coverage():
|
||||
src_dir = Path('src')
|
||||
documented_functions = 0
|
||||
total_functions = 0
|
||||
|
||||
for py_file in src_dir.rglob('*.py'):
|
||||
if '__pycache__' in str(py_file):
|
||||
continue
|
||||
|
||||
try:
|
||||
with open(py_file, 'r', encoding='utf-8') as f:
|
||||
content = f.read()
|
||||
|
||||
tree = ast.parse(content)
|
||||
|
||||
for node in ast.walk(tree):
|
||||
if isinstance(node, ast.FunctionDef):
|
||||
total_functions += 1
|
||||
docstring = ast.get_docstring(node)
|
||||
if docstring and docstring.strip():
|
||||
documented_functions += 1
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error parsing {py_file}: {e}")
|
||||
continue
|
||||
|
||||
if total_functions == 0:
|
||||
print("⚠️ No functions found to check documentation")
|
||||
return
|
||||
|
||||
coverage = (documented_functions / total_functions) * 100
|
||||
print(f"Documentation coverage: {coverage:.1f}% ({documented_functions}/{total_functions})")
|
||||
|
||||
if coverage < 60:
|
||||
print("❌ Documentation coverage below 60% threshold")
|
||||
sys.exit(1)
|
||||
|
||||
print("✅ Documentation coverage meets threshold")
|
||||
|
||||
check_documentation_coverage()
|
||||
EOF
|
||||
|
||||
  quality-summary:
    name: Quality Summary
    runs-on: ubuntu-latest
    needs: [test-coverage, performance-benchmarks, security-scan, code-quality, integration-test-coverage, documentation-coverage]
    if: always()

    steps:
      - name: Generate quality summary
        run: |
          echo "# 📊 Quality Gates Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          # Test Coverage
          echo "## ✅ Test Coverage" >> $GITHUB_STEP_SUMMARY
          echo "- Unit tests: $(if [ "${{ needs.test-coverage.result }}" = "success" ]; then echo "Passed (>80%)"; else echo "Failed"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "- Integration tests: $(if [ "${{ needs.integration-test-coverage.result }}" = "success" ]; then echo "Passed (>70%)"; else echo "Failed"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          # Performance
          echo "## ✅ Performance Benchmarks" >> $GITHUB_STEP_SUMMARY
          echo "- Performance regression: $(if [ "${{ needs.performance-benchmarks.result }}" = "success" ]; then echo "No regressions"; else echo "Regressions detected"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          # Security
          echo "## ✅ Security Scan" >> $GITHUB_STEP_SUMMARY
          echo "- Security vulnerabilities: $(if [ "${{ needs.security-scan.result }}" = "success" ]; then echo "No critical issues"; else echo "Issues found"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          # Code Quality
          echo "## ✅ Code Quality" >> $GITHUB_STEP_SUMMARY
          echo "- Code formatting: $(if [ "${{ needs.code-quality.result }}" = "success" ]; then echo "Passed"; else echo "Failed"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "- Documentation coverage: $(if [ "${{ needs.documentation-coverage.result }}" = "success" ]; then echo "Passed (>60%)"; else echo "Failed"; fi)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          # Overall status. Expressions cannot index `needs` with a shell
          # variable (needs.$job is invalid), so expand all job results up
          # front and count them in plain shell.
          failed_jobs=0
          for result in ${{ join(needs.*.result, ' ') }}; do
            if [ "$result" != "success" ]; then
              failed_jobs=$((failed_jobs + 1))
            fi
          done

          if [ $failed_jobs -eq 0 ]; then
            echo "## 🎉 All Quality Gates Passed!" >> $GITHUB_STEP_SUMMARY
          else
            echo "## ❌ $failed_jobs Quality Gate(s) Failed" >> $GITHUB_STEP_SUMMARY
          fi

          echo "" >> $GITHUB_STEP_SUMMARY
          echo "*Generated on $(date)*" >> $GITHUB_STEP_SUMMARY
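GitHub Actions expressions cannot index `needs` with a shell loop variable, so the overall-status loop has to run over pre-expanded result strings. Stripped of the Actions context (the sample `results` string here is illustrative), the counting logic is just:

```shell
# Count job results that are not "success"; in the real workflow the list
# would come from an expression such as ${{ join(needs.*.result, ' ') }}.
results="success failure success skipped"
failed_jobs=0
for result in $results; do
  if [ "$result" != "success" ]; then
    failed_jobs=$((failed_jobs + 1))
  fi
done
echo "failed_jobs=$failed_jobs"  # failed_jobs=2
```

Note that `skipped` and `cancelled` count as failures under this rule, which is usually what a quality gate wants.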
@ -0,0 +1,356 @@
name: Release Management

on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to release'
        required: true
        type: string
      prerelease:
        description: 'Is this a prerelease?'
        required: false
        type: boolean
        default: false

jobs:
  prepare-release:
    name: Prepare Release
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.version.outputs.version }}
      is_prerelease: ${{ steps.version.outputs.is_prerelease }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Determine version
        id: version
        run: |
          if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
            VERSION="${{ github.event.inputs.version }}"
            IS_PRERELEASE="${{ github.event.inputs.prerelease }}"
          else
            VERSION="${GITHUB_REF#refs/tags/v}"
            if [[ "$VERSION" == *"-alpha"* ]] || [[ "$VERSION" == *"-beta"* ]] || [[ "$VERSION" == *"-rc"* ]]; then
              IS_PRERELEASE="true"
            else
              IS_PRERELEASE="false"
            fi
          fi

          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "is_prerelease=$IS_PRERELEASE" >> $GITHUB_OUTPUT

      - name: Validate version format
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          if [[ ! "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+([-a-zA-Z0-9.]+)?$ ]]; then
            echo "Invalid version format: $VERSION"
            exit 1
          fi
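The validate step's pattern can be checked locally with the same ERE; a small sketch in which `grep -E` stands in for bash's `=~` operator:

```shell
# Same pattern as the "Validate version format" step above.
is_valid_version() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+([-a-zA-Z0-9.]+)?$'
}

is_valid_version "1.2.3"      && echo "1.2.3 accepted"
is_valid_version "1.2.3-rc.1" && echo "1.2.3-rc.1 accepted"
is_valid_version "v1.2"       || echo "v1.2 rejected"
```

The leading `v` must already be stripped (the tag handler does this with `${GITHUB_REF#refs/tags/v}`); a bare `v1.2.3` or a two-component `1.2` is rejected.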
  build-artifacts:
    name: Build Release Artifacts
    runs-on: ubuntu-latest
    needs: prepare-release

    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build wheel twine
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Build Python package
        run: |
          python -m build

      - name: Run tests
        run: |
          chmod +x scripts/run_tests.sh
          ./scripts/run_tests.sh --verbose unit integration regression

      - name: Generate documentation
        run: |
          chmod +x scripts/generate_api_docs.py
          python scripts/generate_api_docs.py --output-dir docs/api

      - name: Create distribution archives
        run: |
          # Create the source distributions
          python setup.py sdist --formats=gztar,zip

          # Create the platform-specific binary wheels
          if [ "${{ matrix.os }}" = "ubuntu-latest" ]; then
            python setup.py bdist_wheel --plat-name manylinux1_x86_64
          elif [ "${{ matrix.os }}" = "windows-latest" ]; then
            python setup.py bdist_wheel --plat-name win_amd64
          elif [ "${{ matrix.os }}" = "macos-latest" ]; then
            python setup.py bdist_wheel --plat-name macosx_10_9_x86_64
          fi

      - name: Upload build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: dist-${{ matrix.os }}
          path: |
            dist/
            docs/api/
            test_reports/
  create-release:
    name: Create GitHub Release
    runs-on: ubuntu-latest
    needs: [prepare-release, build-artifacts]
    permissions:
      contents: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so `git describe` can find the previous tag

      - name: Download all artifacts
        uses: actions/download-artifact@v3

      - name: Generate release notes
        run: |
          VERSION="${{ needs.prepare-release.outputs.version }}"
          cat > release_notes.md << EOF
          # CodeDetect $VERSION

          This release includes bug fixes, performance improvements, and new features.

          ## What's Changed

          ### 🚀 Features
          - Enhanced verification accuracy
          - Improved performance and scalability
          - Better FreeRTOS integration
          - Updated documentation and examples

          ### 🐛 Bug Fixes
          - Fixed memory leaks in long-running verification
          - Improved error handling and reporting
          - Fixed concurrent access issues

          ### 📚 Documentation
          - Updated user manual
          - Enhanced API documentation
          - Added more examples and tutorials

          ## Installation

          ### From PyPI
          \`\`\`bash
          pip install codedetect==$VERSION
          \`\`\`

          ### From Source
          \`\`\`bash
          git clone https://github.com/codedetect/codedetect.git
          cd codedetect
          pip install .
          \`\`\`

          ## Requirements

          - Python 3.8+
          - CBMC 5.12+
          - FreeRTOS (optional, for embedded verification)

          ## Support

          - 📧 Email: support@codedetect.com
          - 🐛 Issues: [GitHub Issues](https://github.com/codedetect/codedetect/issues)
          - 📖 Docs: [Online Documentation](https://docs.codedetect.com)

          ---

          **Full Changelog**: https://github.com/codedetect/codedetect/compare/$(git describe --tags --abbrev=0)...$VERSION
          EOF

      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          tag_name: v${{ needs.prepare-release.outputs.version }}
          name: CodeDetect ${{ needs.prepare-release.outputs.version }}
          body_path: release_notes.md
          draft: false
          prerelease: ${{ needs.prepare-release.outputs.is_prerelease }}
          files: |
            dist-*/dist/*
            docs/api/**
            test_reports/**
  publish-to-pypi:
    name: Publish to PyPI
    runs-on: ubuntu-latest
    needs: [prepare-release, create-release]
    environment:
      name: PyPI
      url: https://pypi.org/p/codedetect

    steps:
      - name: Download distribution artifacts
        uses: actions/download-artifact@v3
        with:
          name: dist-ubuntu-latest
          path: dist/

      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}
          packages_dir: dist/

  publish-to-docker:
    name: Publish to Docker Hub
    runs-on: ubuntu-latest
    needs: [prepare-release, create-release]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: |
            codedetect/codedetect:latest
            codedetect/codedetect:${{ needs.prepare-release.outputs.version }}
          platforms: linux/amd64,linux/arm64
          build-args: |
            VERSION=${{ needs.prepare-release.outputs.version }}
  update-documentation:
    name: Update Documentation
    runs-on: ubuntu-latest
    needs: [prepare-release, create-release]

    steps:
      - name: Checkout documentation repo
        uses: actions/checkout@v4
        with:
          repository: codedetect/docs.codedetect.com
          token: ${{ secrets.DOCS_DEPLOY_TOKEN }}

      - name: Download documentation
        uses: actions/download-artifact@v3
        with:
          name: dist-ubuntu-latest
          path: docs/

      - name: Update documentation
        run: |
          # Copy new documentation
          cp -r docs/api/* . || true

          # Update version information
          sed -i "s/latest/${{ needs.prepare-release.outputs.version }}/g" mkdocs.yml || true

          # Commit changes
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          if git diff --staged --quiet; then
            echo "No changes to commit"
          else
            git commit -m "Update documentation for release ${{ needs.prepare-release.outputs.version }}"
            git push
          fi
  notify-users:
    name: Notify Users
    runs-on: ubuntu-latest
    # prepare-release must be listed in `needs` for its outputs to be
    # visible in the `if` expression below.
    needs: [prepare-release, create-release, publish-to-pypi, publish-to-docker]
    # Job outputs are strings, so compare against 'true' rather than
    # negating (a non-empty "false" string is still truthy).
    if: ${{ needs.prepare-release.outputs.is_prerelease != 'true' }}

    steps:
      - name: Send release notification
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          channel: '#releases'
          webhook_url: ${{ secrets.SLACK_WEBHOOK }}
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

  post-release-tasks:
    name: Post Release Tasks
    runs-on: ubuntu-latest
    needs: [prepare-release, create-release, publish-to-pypi, publish-to-docker]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Prepare next development version
        run: |
          VERSION="${{ needs.prepare-release.outputs.version }}"

          # Extract version components
          MAJOR=$(echo $VERSION | cut -d. -f1)
          MINOR=$(echo $VERSION | cut -d. -f2)
          PATCH=$(echo $VERSION | cut -d. -f3 | cut -d- -f1)

          # Increment patch version for the next development cycle
          NEXT_PATCH=$((PATCH + 1))
          NEXT_VERSION="${MAJOR}.${MINOR}.${NEXT_PATCH}-dev"

          # Update version in setup.py and other files
          sed -i "s/version=\"$VERSION\"/version=\"$NEXT_VERSION\"/g" setup.py || true
          sed -i "s/__version__ = \"$VERSION\"/__version__ = \"$NEXT_VERSION\"/g" src/__init__.py || true

          # Create development branch
          git checkout -b develop
          git add setup.py src/__init__.py
          git commit -m "Bump version to $NEXT_VERSION for development"
          git push origin develop

      - name: Create milestone for next release
        uses: actions/github-script@v6
        with:
          script: |
            const version = "${{ needs.prepare-release.outputs.version }}"
            const [major, minor] = version.split('.').slice(0, 2).map(Number)
            const nextMinor = minor + 1
            const nextVersion = `${major}.${nextMinor}.0`

            await github.rest.issues.createMilestone({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: nextVersion,
              description: `Features and improvements for ${nextVersion} release`
            })
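The `cut`-based patch bump in the post-release step (strip any prerelease suffix from the patch component, increment it, append `-dev`) is easy to get wrong for prerelease tags, so it is worth sanity-checking; a Python sketch of the same logic:

```python
def next_dev_version(version: str) -> str:
    """Mirror the workflow's bump: MAJOR.MINOR.(PATCH+1)-dev,
    dropping any prerelease suffix from the patch component."""
    major, minor, patch = version.split(".", 2)
    patch = patch.split("-", 1)[0]  # e.g. "0-rc.1" -> "0"
    return f"{major}.{minor}.{int(patch) + 1}-dev"

print(next_dev_version("1.4.2"))       # 1.4.3-dev
print(next_dev_version("2.0.0-rc.1"))  # 2.0.1-dev
```

Splitting at most twice keeps dotted prerelease suffixes like `-rc.1` attached to the patch component, matching the `cut -d. -f3 | cut -d- -f1` pipeline in the shell version.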
@ -0,0 +1,116 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDE
.idea/
.vscode/
*.swp
*.swo
*~

# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Logs
logs/
*.log

# Configuration
config/local.yaml
config/production.yaml

# Temporary files
tmp/
temp/
*.tmp
*.temp

# CBMC outputs
cbmc/*.out
cbmc/*.err
cbmc/*.cbmc-out
cbmc/*.trace

# Test coverage
htmlcov/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Documentation
docs/_build/
docs/build/

# Node.js (if frontend)
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Database
*.db
*.sqlite
*.sqlite3

# Compressed files
*.zip
*.tar.gz
*.rar

# Binaries
*.exe
*.dll
*.dylib
*.bin

# Project specific
*.harness
*.proof
*.witness
results/
output/

# Keep existing entries
CLAUDE.md
@ -0,0 +1,157 @@
# Commands

Complete reference of all commands available in the Claude Code PM system.

> **Note**: Project Management commands (`/pm:*`) are documented in the main [README.md](README.md#command-reference).

## Table of Contents

- [Context Commands](#context-commands)
- [Testing Commands](#testing-commands)
- [Utility Commands](#utility-commands)
- [Review Commands](#review-commands)

## Context Commands

Commands for managing project context in `.claude/context/`.

### `/context:create`
- **Purpose**: Create initial project context documentation
- **Usage**: `/context:create`
- **Description**: Analyzes the project structure and creates comprehensive baseline documentation in `.claude/context/`. Includes project overview, architecture, dependencies, and patterns.
- **When to use**: At project start or when context needs a full rebuild
- **Output**: Multiple context files covering different aspects of the project

### `/context:update`
- **Purpose**: Update existing context with recent changes
- **Usage**: `/context:update`
- **Description**: Refreshes context documentation based on recent code changes, new features, or architectural updates. Preserves existing context while adding new information.
- **When to use**: After significant changes or before major work sessions
- **Output**: Updated context files with change tracking

### `/context:prime`
- **Purpose**: Load context into the current conversation
- **Usage**: `/context:prime`
- **Description**: Reads all context files and loads them into the current conversation's memory. Essential for maintaining project awareness.
- **When to use**: At the start of any work session
- **Output**: Confirmation of loaded context

## Testing Commands

Commands for test configuration and execution.

### `/testing:prime`
- **Purpose**: Configure testing setup
- **Usage**: `/testing:prime`
- **Description**: Detects and configures the project's testing framework, creates testing configuration, and prepares the test-runner agent.
- **When to use**: Initial project setup or when the testing framework changes
- **Output**: `.claude/testing-config.md` with test commands and patterns

### `/testing:run`
- **Purpose**: Execute tests with intelligent analysis
- **Usage**: `/testing:run [test_target]`
- **Description**: Runs tests using the test-runner agent, which captures output to logs and returns only essential results to preserve context.
- **Options**:
  - No arguments: Run all tests
  - File path: Run a specific test file
  - Pattern: Run tests matching the pattern
- **Output**: Test summary with failures analyzed; no verbose output in the main thread

## Utility Commands

General utility and maintenance commands.

### `/prompt`
- **Purpose**: Handle complex prompts with multiple references
- **Usage**: Write your prompt in the file, then type `/prompt`
- **Description**: Ephemeral command for when complex prompts with numerous `@` references fail in direct input. The prompt is written to the command file first, then executed.
- **When to use**: When Claude's UI rejects complex prompts
- **Output**: Executes the written prompt

### `/re-init`
- **Purpose**: Update or create CLAUDE.md with PM rules
- **Usage**: `/re-init`
- **Description**: Updates the project's CLAUDE.md file with rules from `.claude/CLAUDE.md`, ensuring Claude instances have proper instructions.
- **When to use**: After cloning the PM system or updating rules
- **Output**: Updated CLAUDE.md in the project root

## Review Commands

Commands for handling external code review tools.

### `/code-rabbit`
- **Purpose**: Process CodeRabbit review comments intelligently
- **Usage**: `/code-rabbit`, then paste comments
- **Description**: Evaluates CodeRabbit suggestions with context awareness, accepting valid improvements while ignoring context-unaware suggestions. Spawns parallel agents for multi-file reviews.
- **Features**:
  - Understands that CodeRabbit lacks full context
  - Accepts: Real bugs, security issues, resource leaks
  - Ignores: Style preferences, irrelevant patterns
  - Parallel processing for multiple files
- **Output**: Summary of accepted/ignored suggestions with reasoning

## Command Patterns

All commands follow consistent patterns:

### Allowed Tools
Each command specifies its required tools in frontmatter:
- `Read, Write, LS` - File operations
- `Bash` - System commands
- `Task` - Sub-agent spawning
- `Grep` - Code searching

### Error Handling
Commands follow fail-fast principles:
- Check prerequisites first
- Clear error messages with solutions
- Never leave partial state

### Context Preservation
Commands that process large amounts of information:
- Use agents to shield the main thread from verbose output
- Return summaries, not raw data
- Preserve only essential information

## Creating Custom Commands

To add new commands:

1. **Create file**: `commands/category/command-name.md`
2. **Add frontmatter**:
   ```yaml
   ---
   allowed-tools: Read, Write, LS
   ---
   ```
3. **Structure content**:
   - Purpose and usage
   - Preflight checks
   - Step-by-step instructions
   - Error handling
   - Output format
4. **Follow patterns**:
   - Keep it simple (no over-validation)
   - Fail fast with clear messages
   - Use agents for heavy processing
   - Return concise output
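Putting the four steps above together, a hypothetical command file might look like the following (the command name and body are illustrative, not part of the PM system):

```markdown
---
allowed-tools: Read, LS
---

# /context:digest

Summarize the files in `.claude/context/` into a one-page digest.

## Preflight
- Fail fast with a clear message if `.claude/context/` does not exist.

## Steps
1. Read each context file.
2. Emit a bulleted digest, one line per file.

## Output
A single summary message; no files are written.
```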
## Integration with Agents

Commands often use agents for heavy lifting:

- **test-runner**: Executes tests, analyzes results
- **file-analyzer**: Summarizes verbose files
- **code-analyzer**: Hunts bugs across the codebase
- **parallel-worker**: Coordinates parallel execution

This keeps the main conversation context clean while doing complex work.

## Notes

- Commands are markdown files interpreted as instructions
- The `/` prefix triggers command execution
- Commands can spawn agents for context preservation
- All PM commands (`/pm:*`) are documented in the main README
- Commands follow rules defined in `/rules/`
@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Ran Aroussi

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -0,0 +1,72 @@
# Common Makefile for CBMC Verification
# This file provides common targets and variables for CBMC verification

# CBMC Configuration (CBMC's loop-bound flag is --unwind)
CBMC ?= cbmc
CBMC_FLAGS ?= --unwind 10 --bounds-check --pointer-check --overflow-check
CBMC_SHOW_PROPERTIES ?= --show-properties
CBMC_ERROR_TRACE ?= --trace

# Default target
.PHONY: help
help:
	@echo "Available targets:"
	@echo "  all              Run all proofs"
	@echo "  clean            Clean build artifacts"
	@echo "  <proof-name>     Run specific proof"
	@echo "  show-<proof>     Show properties for proof"
	@echo "  trace-<proof>    Show error trace for proof"

# Run all proofs
.PHONY: all
all:
	@for proof in $(PROOFS); do \
		echo "Running proof: $$proof"; \
		$(MAKE) $$proof || exit 1; \
	done

# Clean build artifacts
.PHONY: clean
clean:
	rm -f *.out *.cbmc-out *.trace *.witness
	rm -rf cbmc-out/

# Generic proof target
%.out: %.c
	$(CBMC) $(CBMC_FLAGS) $(CBMC_SHOW_PROPERTIES) $< > $@

# Show properties
show-%: %.c
	$(CBMC) $(CBMC_SHOW_PROPERTIES) $<

# Show error trace
trace-%: %.c
	$(CBMC) $(CBMC_ERROR_TRACE) $<

# Run with specific unwinding
unwind-%: %.c
	$(CBMC) --unwind $* $(CBMC_FLAGS) $<

# Run with specific checks
check-%: %.c
	$(CBMC) --$*-check $(CBMC_FLAGS) $<

# The remaining targets must be pattern rules; an explicit target with a
# literal "%.c" prerequisite never matches a real source file.

# Memory safety checks (e.g. `make memory-safe-foo` verifies foo.c)
memory-safe-%: %.c
	$(CBMC) --bounds-check --pointer-check --div-by-zero-check $<

# Arithmetic checks
arithmetic-safe-%: %.c
	$(CBMC) --overflow-check --undefined-shift-check --signed-overflow-check $<

# Full verification
full-verify-%: %.c
	$(CBMC) --all-checks $(CBMC_FLAGS) $<

# Debug mode
debug-%: %.c
	$(CBMC) --verbosity 10 $(CBMC_FLAGS) $<

# Profile mode
profile-%: %.c
	$(CBMC) --json-ui $(CBMC_FLAGS) $< > profile.json