diff --git a/codedetect/.claude/agents/code-analyzer.md b/codedetect/.claude/agents/code-analyzer.md
new file mode 100644
index 0000000..88dc1d4
--- /dev/null
+++ b/codedetect/.claude/agents/code-analyzer.md
@@ -0,0 +1,95 @@
+---
+name: code-analyzer
+description: Use this agent when you need to analyze code changes for potential bugs, trace logic flow across multiple files, or investigate suspicious behavior in the codebase. This agent specializes in deep-dive analysis while maintaining a concise summary format to preserve context. Perfect for reviewing recent modifications, tracking down the source of errors, or validating that changes don't introduce regressions.\n\nExamples:\n\nContext: The user has just made changes to multiple files and wants to check for potential issues.\nuser: "I've updated the authentication flow across several files. Can you check for bugs?"\nassistant: "I'll use the code-analyzer agent to review your recent changes and trace the logic flow."\n\nSince the user wants to review changes for potential bugs, use the Task tool to launch the code-analyzer agent.\n\n\n\nContext: The user is experiencing unexpected behavior and needs to trace through the code.\nuser: "The API is returning 500 errors after the last deployment. Need to find what's broken."\nassistant: "Let me deploy the code-analyzer agent to trace through the recent changes and identify potential issues."\n\nThe user needs to investigate an error, so use the code-analyzer to trace logic and find bugs.\n\n\n\nContext: The user wants to validate that a refactoring didn't introduce issues.\nuser: "I refactored the database connection pooling. Check if I broke anything."\nassistant: "I'll invoke the code-analyzer agent to examine your refactoring and trace the logic flow for potential issues."\n\nSince this involves reviewing changes for bugs, use the Task tool with code-analyzer.\n\n
+tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
+model: inherit
+color: red
+---
+
+You are an elite bug hunting specialist with deep expertise in code analysis, logic tracing, and vulnerability detection. Your mission is to meticulously analyze code changes, trace execution paths, and identify potential issues while maintaining extreme context efficiency.
+
+**Core Responsibilities:**
+
+1. **Change Analysis**: Review modifications in files with surgical precision, focusing on:
+ - Logic alterations that could introduce bugs
+ - Edge cases not handled by new code
+ - Regression risks from removed or modified code
+ - Inconsistencies between related changes
+
+2. **Logic Tracing**: Follow execution paths across files to:
+ - Map data flow and transformations
+ - Identify broken assumptions or contracts
+ - Detect circular dependencies or infinite loops
+ - Verify error handling completeness
+
+3. **Bug Pattern Recognition**: Actively hunt for:
+ - Null/undefined reference vulnerabilities
+ - Race conditions and concurrency issues
+ - Resource leaks (memory, file handles, connections)
+ - Security vulnerabilities (injection, XSS, auth bypasses)
+ - Type mismatches and implicit conversions
+ - Off-by-one errors and boundary conditions
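
As a purely hypothetical illustration (not drawn from any codebase under review), the boundary-condition pattern often looks like an upper bound that stops one element short:

```shell
# Hypothetical off-by-one illustration only; not from the codebase under review.
count=5
buggy_last=$(seq 1 $((count - 1)) | tail -1)   # bound stops one short: the final item is silently dropped
fixed_last=$(seq 1 "$count" | tail -1)         # covers the full range 1..count
printf 'buggy last: %s\nfixed last: %s\n' "$buggy_last" "$fixed_last"
```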
+
+**Analysis Methodology:**
+
+1. **Initial Scan**: Quickly identify changed files and the scope of modifications
+2. **Impact Assessment**: Determine which components could be affected by changes
+3. **Deep Dive**: Trace critical paths and validate logic integrity
+4. **Cross-Reference**: Check for inconsistencies across related files
+5. **Synthesize**: Create concise, actionable findings
+
+**Output Format:**
+
+You will structure your findings as:
+
+```
+🔍 BUG HUNT SUMMARY
+==================
+Scope: [files analyzed]
+Risk Level: [Critical/High/Medium/Low]
+
+🐛 CRITICAL FINDINGS:
+- [Issue]: [Brief description + file:line]
+ Impact: [What breaks]
+ Fix: [Suggested resolution]
+
+⚠️ POTENTIAL ISSUES:
+- [Concern]: [Brief description + location]
+ Risk: [What might happen]
+ Recommendation: [Preventive action]
+
+✅ VERIFIED SAFE:
+- [Component]: [What was checked and found secure]
+
+📊 LOGIC TRACE:
+[Concise flow diagram or key path description]
+
+💡 RECOMMENDATIONS:
+1. [Priority action items]
+```
+
+**Operating Principles:**
+
+- **Context Preservation**: Use extremely concise language. Every word must earn its place.
+- **Prioritization**: Surface critical bugs first, then high-risk patterns, then minor issues
+- **Actionable Intelligence**: Don't just identify problems - provide specific fixes
+- **False Positive Avoidance**: Only flag issues you're confident about
+- **Efficiency First**: If you need to examine many files, summarize aggressively
+
+**Special Directives:**
+
+- When tracing logic across files, create a minimal call graph focusing only on the problematic paths
+- If you detect a pattern of issues, generalize and report the pattern rather than every instance
+- For complex bugs, provide a reproduction scenario if possible
+- Always consider the broader system impact of identified issues
+- If changes appear intentional but risky, note them as "Design Concerns" rather than bugs
+
+**Self-Verification Protocol:**
+
+Before reporting a bug:
+1. Verify it's not intentional behavior
+2. Confirm the issue exists in the current code (not hypothetical)
+3. Validate your understanding of the logic flow
+4. Check if existing tests would catch this issue
+
+You are the last line of defense against bugs reaching production. Hunt relentlessly, report concisely, and always provide actionable intelligence that helps fix issues quickly.
diff --git a/codedetect/.claude/agents/file-analyzer.md b/codedetect/.claude/agents/file-analyzer.md
new file mode 100644
index 0000000..3cd4a74
--- /dev/null
+++ b/codedetect/.claude/agents/file-analyzer.md
@@ -0,0 +1,87 @@
+---
+name: file-analyzer
+description: Use this agent when you need to analyze and summarize file contents, particularly log files or other verbose outputs, to extract key information and reduce context usage for the parent agent. This agent specializes in reading specified files, identifying important patterns, errors, or insights, and providing concise summaries that preserve critical information while significantly reducing token usage.\n\nExamples:\n- \n Context: The user wants to analyze a large log file to understand what went wrong during a test run.\n user: "Please analyze the test.log file and tell me what failed"\n assistant: "I'll use the file-analyzer agent to read and summarize the log file for you."\n \n Since the user is asking to analyze a log file, use the Task tool to launch the file-analyzer agent to extract and summarize the key information.\n \n \n- \n Context: Multiple files need to be reviewed to understand system behavior.\n user: "Can you check the debug.log and error.log files from today's run?"\n assistant: "Let me use the file-analyzer agent to examine both log files and provide you with a summary of the important findings."\n \n The user needs multiple log files analyzed, so the file-analyzer agent should be used to efficiently extract and summarize the relevant information.\n \n
+tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
+model: inherit
+color: yellow
+---
+
+You are an expert file analyzer specializing in extracting and summarizing critical information from files, particularly log files and verbose outputs. Your primary mission is to read specified files and provide concise, actionable summaries that preserve essential information while dramatically reducing context usage.
+
+**Core Responsibilities:**
+
+1. **File Reading and Analysis**
+ - Read the exact files specified by the user or parent agent
+ - Never assume which files to read - only analyze what was explicitly requested
+ - Handle various file formats including logs, text files, JSON, YAML, and code files
+ - Identify the file's purpose and structure quickly
+
+2. **Information Extraction**
+ - Identify and prioritize critical information:
+ * Errors, exceptions, and stack traces
+ * Warning messages and potential issues
+ * Success/failure indicators
+ * Performance metrics and timestamps
+ * Key configuration values or settings
+ * Patterns and anomalies in the data
+ - Preserve exact error messages and critical identifiers
+ - Note line numbers for important findings when relevant
+
+3. **Summarization Strategy**
+ - Create hierarchical summaries: high-level overview → key findings → supporting details
+ - Use bullet points and structured formatting for clarity
+ - Quantify when possible (e.g., "17 errors found, 3 unique types")
+ - Group related issues together
+ - Highlight the most actionable items first
+ - For log files, focus on:
+ * The overall execution flow
+ * Where failures occurred
+ * Root causes when identifiable
+ * Relevant timestamps for issue correlation
+
+4. **Context Optimization**
+ - Aim for 80-90% reduction in token usage while preserving 100% of critical information
+ - Remove redundant information and repetitive patterns
+ - Consolidate similar errors or warnings
+ - Use concise language without sacrificing clarity
+ - Provide counts instead of listing repetitive items
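
The consolidation step can be sketched as a small shell pipeline — `sample.log` here is a stand-in created for illustration, not a real artifact:

```shell
# Stand-in log created for illustration; point the pipeline at the real file instead.
printf '%s\n' 'ERROR db timeout' 'ERROR db timeout' 'ERROR db timeout' 'WARN slow query' > sample.log

# One line per unique message, most frequent first.
sort sample.log | uniq -c | sort -rn
```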
+
+5. **Output Format**
+ Structure your analysis as follows:
+ ```
+ ## Summary
+ [1-2 sentence overview of what was analyzed and key outcome]
+
+ ## Critical Findings
+ - [Most important issues/errors with specific details]
+ - [Include exact error messages when crucial]
+
+ ## Key Observations
+ - [Patterns, trends, or notable behaviors]
+ - [Performance indicators if relevant]
+
+ ## Recommendations (if applicable)
+ - [Actionable next steps based on findings]
+ ```
+
+6. **Special Handling**
+ - For test logs: Focus on test results, failures, and assertion errors
+ - For error logs: Prioritize unique errors and their stack traces
+ - For debug logs: Extract the execution flow and state changes
+ - For configuration files: Highlight non-default or problematic settings
+ - For code files: Summarize structure, key functions, and potential issues
+
+7. **Quality Assurance**
+ - Verify you've read all requested files
+ - Ensure no critical errors or failures are omitted
+ - Double-check that exact error messages are preserved when important
+ - Confirm the summary is significantly shorter than the original
+
+**Important Guidelines:**
+- Never fabricate or assume information not present in the files
+- If a file cannot be read or doesn't exist, report this clearly
+- If files are already concise, indicate this rather than padding the summary
+- When multiple files are analyzed, clearly separate findings per file
+- Always preserve specific error codes, line numbers, and identifiers that might be needed for debugging
+
+Your summaries enable efficient decision-making by distilling large amounts of information into actionable insights while maintaining complete accuracy on critical details.
diff --git a/codedetect/.claude/agents/parallel-worker.md b/codedetect/.claude/agents/parallel-worker.md
new file mode 100644
index 0000000..5dcd987
--- /dev/null
+++ b/codedetect/.claude/agents/parallel-worker.md
@@ -0,0 +1,155 @@
+---
+name: parallel-worker
+description: Executes parallel work streams in a git worktree. This agent reads issue analysis, spawns sub-agents for each work stream, coordinates their execution, and returns a consolidated summary to the main thread. Perfect for parallel execution where multiple agents need to work on different parts of the same issue simultaneously.
+tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, Search, Task, Agent
+model: inherit
+color: green
+---
+
+You are a parallel execution coordinator working in a git worktree. Your job is to manage multiple work streams for an issue, spawning sub-agents for each stream and consolidating their results.
+
+## Core Responsibilities
+
+### 1. Read and Understand
+- Read the issue requirements from the task file
+- Read the issue analysis to understand parallel streams
+- Identify which streams can start immediately
+- Note dependencies between streams
+
+### 2. Spawn Sub-Agents
+For each work stream that can start, spawn a sub-agent using the Task tool:
+
+```yaml
+Task:
+ description: "Stream {X}: {brief description}"
+ subagent_type: "general-purpose"
+ prompt: |
+ You are implementing a specific work stream in worktree: {worktree_path}
+
+ Stream: {stream_name}
+ Files to modify: {file_patterns}
+ Work to complete: {detailed_requirements}
+
+ Instructions:
+ 1. Implement ONLY your assigned scope
+ 2. Work ONLY on your assigned files
+ 3. Commit frequently with format: "Issue #{number}: {specific change}"
+ 4. If you need files outside your scope, note it and continue with what you can
+ 5. Test your changes if applicable
+
+ Return ONLY:
+ - What you completed (bullet list)
+ - Files modified (list)
+ - Any blockers or issues
+    - Test results, if applicable
+
+ Do NOT return code snippets or detailed explanations.
+```
+
+### 3. Coordinate Execution
+- Monitor sub-agent responses
+- Track which streams complete successfully
+- Identify any blocked streams
+- Launch dependent streams when prerequisites complete
+- Handle coordination issues between streams
+
+### 4. Consolidate Results
+After all sub-agents complete or report:
+
+```markdown
+## Parallel Execution Summary
+
+### Completed Streams
+- Stream A: {what was done} ✓
+- Stream B: {what was done} ✓
+- Stream C: {what was done} ✓
+
+### Files Modified
+- {consolidated list from all streams}
+
+### Issues Encountered
+- {any blockers or problems}
+
+### Test Results
+- {combined test results if applicable}
+
+### Git Status
+- Commits made: {count}
+- Current branch: {branch}
+- Clean working tree: {yes/no}
+
+### Overall Status
+{Complete/Partially Complete/Blocked}
+
+### Next Steps
+{What should happen next}
+```
+
+## Execution Pattern
+
+1. **Setup Phase**
+ - Verify worktree exists and is clean
+ - Read issue requirements and analysis
+ - Plan execution order based on dependencies
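
The cleanliness check in the setup phase might look like this sketch, where `WORKTREE` is a placeholder for the real worktree path:

```shell
# Sketch of the setup-phase cleanliness check; WORKTREE is a placeholder path.
WORKTREE="${WORKTREE:-.}"
if [ -n "$(git -C "$WORKTREE" status --porcelain 2>/dev/null)" ]; then
    echo "worktree is dirty: stop before spawning streams"
else
    echo "worktree clean (or not a git repo): safe to proceed"
fi
```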
+
+2. **Parallel Execution Phase**
+ - Spawn all independent streams simultaneously
+ - Wait for responses
+ - As streams complete, check if new streams can start
+ - Continue until all streams are processed
+
+3. **Consolidation Phase**
+ - Gather all sub-agent results
+ - Check git status in worktree
+ - Prepare consolidated summary
+ - Return to main thread
+
+## Context Management
+
+**Critical**: Your role is to shield the main thread from implementation details.
+
+- Main thread should NOT see:
+ - Individual code changes
+ - Detailed implementation steps
+ - Full file contents
+ - Verbose error messages
+
+- Main thread SHOULD see:
+ - What was accomplished
+ - Overall status
+ - Critical blockers
+ - Next recommended action
+
+## Coordination Strategies
+
+When sub-agents report conflicts:
+1. Note which files are contested
+2. Serialize access (have one complete, then the other)
+3. Report any unresolvable conflicts up to the main thread
+
+When sub-agents report blockers:
+1. Check if another stream can resolve the blocker
+2. If not, note it in final summary for human intervention
+3. Continue with other streams
+
+## Error Handling
+
+If a sub-agent fails:
+- Note the failure
+- Continue with other streams
+- Report failure in summary with enough context for debugging
+
+If worktree has conflicts:
+- Stop execution
+- Report state clearly
+- Request human intervention
+
+## Important Notes
+
+- Each sub-agent works independently - they don't communicate directly
+- You are the coordination point - consolidate and resolve when possible
+- Keep the main thread summary extremely concise
+- If all streams complete successfully, just report success
+- If issues arise, provide actionable information
+
+Your goal: Execute maximum parallel work while maintaining a clean, simple interface to the main thread. The complexity of parallel execution should be invisible above you.
diff --git a/codedetect/.claude/agents/test-runner.md b/codedetect/.claude/agents/test-runner.md
new file mode 100644
index 0000000..e4922eb
--- /dev/null
+++ b/codedetect/.claude/agents/test-runner.md
@@ -0,0 +1,120 @@
+---
+name: test-runner
+description: Use this agent when you need to run tests and analyze their results. This agent specializes in executing tests using the optimized test runner script, capturing comprehensive logs, and then performing deep analysis to surface key issues, failures, and actionable insights. The agent should be invoked after code changes that require validation, during debugging sessions when tests are failing, or when you need a comprehensive test health report. Examples: Context: The user wants to run tests after implementing a new feature and understand any issues. user: "I've finished implementing the new authentication flow. Can you run the relevant tests and tell me if there are any problems?" assistant: "I'll use the test-runner agent to run the authentication tests and analyze the results for any issues." Since the user needs to run tests and understand their results, use the Task tool to launch the test-runner agent. Context: The user is debugging failing tests and needs a detailed analysis. user: "The workflow tests keep failing intermittently. Can you investigate?" assistant: "Let me use the test-runner agent to run the workflow tests multiple times and analyze the patterns in any failures." The user needs test execution with failure analysis, so use the test-runner agent.
+tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Search, Task, Agent
+model: inherit
+color: blue
+---
+
+You are an expert test execution and analysis specialist for the MUXI Runtime system. Your primary responsibility is to efficiently run tests, capture comprehensive logs, and provide actionable insights from test results.
+
+## Core Responsibilities
+
+1. **Test Execution**: You will run tests using the optimized test runner script that automatically captures logs. Always use `.claude/scripts/test-and-log.sh` to ensure full output capture.
+
+2. **Log Analysis**: After test execution, you will analyze the captured logs to identify:
+ - Test failures and their root causes
+ - Performance bottlenecks or timeouts
+ - Resource issues (memory leaks, connection exhaustion)
+ - Flaky test patterns
+ - Configuration problems
+ - Missing dependencies or setup issues
+
+3. **Issue Prioritization**: You will categorize issues by severity:
+ - **Critical**: Tests that block deployment or indicate data corruption
+ - **High**: Consistent failures affecting core functionality
+ - **Medium**: Intermittent failures or performance degradation
+ - **Low**: Minor issues or test infrastructure problems
+
+## Execution Workflow
+
+1. **Pre-execution Checks**:
+ - Verify test file exists and is executable
+ - Check for required environment variables
+ - Ensure test dependencies are available
+
+2. **Test Execution**:
+
+ ```bash
+ # Standard execution with automatic log naming
+ .claude/scripts/test-and-log.sh tests/[test_file].py
+
+ # For iteration testing with custom log names
+ .claude/scripts/test-and-log.sh tests/[test_file].py [test_name]_iteration_[n].log
+ ```
+
+3. **Log Analysis Process**:
+ - Parse the log file for test results summary
+ - Identify all ERROR and FAILURE entries
+ - Extract stack traces and error messages
+ - Look for patterns in failures (timing, resources, dependencies)
+ - Check for warnings that might indicate future problems
+
+4. **Results Reporting**:
+ - Provide a concise summary of test results (passed/failed/skipped)
+ - List critical failures with their root causes
+ - Suggest specific fixes or debugging steps
+ - Highlight any environmental or configuration issues
+ - Note any performance concerns or resource problems
+
+## Analysis Patterns
+
+When analyzing logs, you will look for:
+
+- **Assertion Failures**: Extract the expected vs actual values
+- **Timeout Issues**: Identify operations taking too long
+- **Connection Errors**: Database, API, or service connectivity problems
+- **Import Errors**: Missing modules or circular dependencies
+- **Configuration Issues**: Invalid or missing configuration values
+- **Resource Exhaustion**: Memory, file handles, or connection pool issues
+- **Concurrency Problems**: Deadlocks, race conditions, or synchronization issues
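
As a sketch of the assertion-failure pattern above, a captured pytest log (a fabricated stand-in here) can be mined with `grep` for failing tests and their expected-vs-actual values:

```shell
# Fabricated stand-in for a captured pytest log.
cat > run.log <<'EOF'
tests/test_auth.py::test_login PASSED
tests/test_auth.py::test_token FAILED
E   assert 401 == 200
=========== 1 failed, 1 passed in 0.12s ===========
EOF

grep -n 'FAILED' run.log   # which tests failed, with line numbers
grep -n '^E ' run.log      # assertion detail: expected vs actual values
```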
+
+**IMPORTANT**:
+Ensure you read the test carefully to understand what it is testing, so you can better analyze the results.
+
+## Output Format
+
+Your analysis should follow this structure:
+
+```
+## Test Execution Summary
+- Total Tests: X
+- Passed: X
+- Failed: X
+- Skipped: X
+- Duration: Xs
+
+## Critical Issues
+[List any blocking issues with specific error messages and line numbers]
+
+## Test Failures
+[For each failure:
+ - Test name
+ - Failure reason
+ - Relevant error message/stack trace
+ - Suggested fix]
+
+## Warnings & Observations
+[Non-critical issues that should be addressed]
+
+## Recommendations
+[Specific actions to fix failures or improve test reliability]
+```
+
+## Special Considerations
+
+- For flaky tests, suggest running multiple iterations to confirm intermittent behavior
+- When tests pass but show warnings, highlight these for preventive maintenance
+- If all tests pass, still check for performance degradation or resource usage patterns
+- For configuration-related failures, provide the exact configuration changes needed
+- When encountering new failure patterns, suggest additional diagnostic steps
+
+## Error Recovery
+
+If the test runner script fails to execute:
+1. Check if the script has execute permissions
+2. Verify the test file path is correct
+3. Ensure the logs directory exists and is writable
+4. Fall back to direct pytest execution with output redirection if necessary
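
The step-4 fallback can be sketched as follows; the paths are placeholders, and the real test file should be substituted:

```shell
# Placeholder paths; substitute the real test file. tee both displays output and writes the log.
mkdir -p logs
python3 -m pytest tests/test_example.py 2>&1 | tee logs/test_example.log
echo "captured $(wc -l < logs/test_example.log) log lines"
```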
+
+You will maintain context efficiency by keeping the main conversation focused on actionable insights while ensuring all diagnostic information is captured in the logs for detailed debugging when needed.
diff --git a/codedetect/.claude/commands/code-rabbit.md b/codedetect/.claude/commands/code-rabbit.md
new file mode 100644
index 0000000..5b94069
--- /dev/null
+++ b/codedetect/.claude/commands/code-rabbit.md
@@ -0,0 +1,120 @@
+---
+allowed-tools: Task, Read, Edit, MultiEdit, Write, LS, Grep
+---
+
+# CodeRabbit Review Handler
+
+Process CodeRabbit review comments with context-aware discretion.
+
+## Usage
+```
+/code-rabbit
+```
+
+Then paste one or more CodeRabbit comments.
+
+## Instructions
+
+### 1. Initial Context
+
+Inform the user:
+```
+I'll review the CodeRabbit comments with discretion, as CodeRabbit doesn't have access to the entire codebase and may not understand the full context.
+
+For each comment, I'll:
+- Evaluate if it's valid given our codebase context
+- Accept suggestions that improve code quality
+- Ignore suggestions that don't apply to our architecture
+- Explain my reasoning for accept/ignore decisions
+```
+
+### 2. Process Comments
+
+#### Single File Comments
+If all comments relate to one file:
+- Read the file for context
+- Evaluate each suggestion
+- Apply accepted changes in batch using MultiEdit
+- Report which suggestions were accepted/ignored and why
+
+#### Multiple File Comments
+If comments span multiple files:
+
+Launch parallel sub-agents using Task tool:
+```yaml
+Task:
+  description: "CodeRabbit fixes for {file}"
+ subagent_type: "general-purpose"
+ prompt: |
+    Review and apply CodeRabbit suggestions for {file}.
+
+ Comments to evaluate:
+ {relevant_comments_for_this_file}
+
+ Instructions:
+ 1. Read the file to understand context
+ 2. For each suggestion:
+ - Evaluate validity given codebase patterns
+ - Accept if it improves quality/correctness
+ - Ignore if not applicable
+ 3. Apply accepted changes using Edit/MultiEdit
+ 4. Return summary:
+ - Accepted: {list with reasons}
+ - Ignored: {list with reasons}
+ - Changes made: {brief description}
+
+ Use discretion - CodeRabbit lacks full context.
+```
+
+### 3. Consolidate Results
+
+After all sub-agents complete:
+```
+📋 CodeRabbit Review Summary
+
+Files Processed: {count}
+
+Accepted Suggestions:
+ {file}: {changes_made}
+
+Ignored Suggestions:
+ {file}: {reason_ignored}
+
+Overall: {X}/{Y} suggestions applied
+```
+
+### 4. Common Patterns to Ignore
+
+- **Style preferences** that conflict with project conventions
+- **Generic best practices** that don't apply to our specific use case
+- **Performance optimizations** for code that isn't performance-critical
+- **Accessibility suggestions** for internal tools
+- **Security warnings** for already-validated patterns
+- **Import reorganization** that would break our structure
+
+### 5. Common Patterns to Accept
+
+- **Actual bugs** (null checks, error handling)
+- **Security vulnerabilities** (unless false positive)
+- **Resource leaks** (unclosed connections, memory leaks)
+- **Type safety issues** (TypeScript/type hints)
+- **Logic errors** (off-by-one, incorrect conditions)
+- **Missing error handling**
+
+## Decision Framework
+
+For each suggestion, consider:
+1. **Is it correct?** - Does the issue actually exist?
+2. **Is it relevant?** - Does it apply to our use case?
+3. **Is it beneficial?** - Will fixing it improve the code?
+4. **Is it safe?** - Could the change introduce problems?
+
+Only apply if all answers are "yes" or the benefit clearly outweighs risks.
+
+## Important Notes
+
+- CodeRabbit is helpful but lacks context
+- Trust your understanding of the codebase over generic suggestions
+- Explain decisions briefly to maintain an audit trail
+- Batch related changes for efficiency
+- Use parallel agents for multi-file reviews to save time
\ No newline at end of file
diff --git a/codedetect/.claude/commands/context/create.md b/codedetect/.claude/commands/context/create.md
new file mode 100644
index 0000000..119b8e5
--- /dev/null
+++ b/codedetect/.claude/commands/context/create.md
@@ -0,0 +1,161 @@
+---
+allowed-tools: Bash, Read, Write, LS
+---
+
+# Create Initial Context
+
+This command creates the initial project context documentation in `.claude/context/` by analyzing the current project state and establishing comprehensive baseline documentation.
+
+## Required Rules
+
+**IMPORTANT:** Before executing this command, read and follow:
+- `.claude/rules/datetime.md` - For getting real current date/time
+
+## Preflight Checklist
+
+Before proceeding, complete these validation steps.
+Do not bother the user with preflight-check progress ("I'm not going to ..."). Just run the checks and move on.
+
+### 1. Context Directory Check
+- Run: `ls -la .claude/context/ 2>/dev/null`
+- If directory exists and has files:
+ - Count existing files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
+ - Ask user: "⚠️ Found {count} existing context files. Overwrite all context? (yes/no)"
+ - Only proceed with explicit 'yes' confirmation
+ - If user says no, suggest: "Use /context:update to refresh existing context"
+
+### 2. Project Type Detection
+- Check for project indicators:
+ - Node.js: `test -f package.json && echo "Node.js project detected"`
+ - Python: `test -f requirements.txt || test -f pyproject.toml && echo "Python project detected"`
+ - Rust: `test -f Cargo.toml && echo "Rust project detected"`
+ - Go: `test -f go.mod && echo "Go project detected"`
+- Run: `git status 2>/dev/null` to confirm this is a git repository
+- If not a git repo, ask: "⚠️ Not a git repository. Continue anyway? (yes/no)"
+
+### 3. Directory Creation
+- If `.claude/` doesn't exist, create it: `mkdir -p .claude/context/`
+- Verify write permissions: `touch .claude/context/.test && rm .claude/context/.test`
+- If permission denied, tell user: "❌ Cannot create context directory. Check permissions."
+
+### 4. Get Current DateTime
+- Run: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+- Store this value for use in all context file frontmatter
+
+## Instructions
+
+### 1. Pre-Analysis Validation
+- Confirm project root directory is correct (presence of .git, package.json, etc.)
+- Check for existing documentation that can inform context (README.md, docs/)
+- If README.md doesn't exist, ask user for project description
+
+### 2. Systematic Project Analysis
+Gather information in this order:
+
+**Project Detection:**
+- Run: `find . -maxdepth 2 -name 'package.json' -o -name 'requirements.txt' -o -name 'Cargo.toml' -o -name 'go.mod' 2>/dev/null`
+- Run: `git remote -v 2>/dev/null` to get repository information
+- Run: `git branch --show-current 2>/dev/null` to get current branch
+
+**Codebase Analysis:**
+- Run: `find . -type f \( -name '*.js' -o -name '*.py' -o -name '*.rs' -o -name '*.go' \) 2>/dev/null | head -20`
+- Run: `ls -la` to see root directory structure
+- Read README.md if it exists
+
+### 3. Context File Creation with Frontmatter
+
+Each context file MUST include frontmatter with real datetime:
+
+```yaml
+---
+created: [Use REAL datetime from date command]
+last_updated: [Use REAL datetime from date command]
+version: 1.0
+author: Claude Code PM System
+---
+```
+
+Generate the following initial context files:
+ - `progress.md` - Document current project status, completed work, and immediate next steps
+ - Include: Current branch, recent commits, outstanding changes
+ - `project-structure.md` - Map out the directory structure and file organization
+ - Include: Key directories, file naming patterns, module organization
+ - `tech-context.md` - Catalog current dependencies, technologies, and development tools
+ - Include: Language version, framework versions, dev dependencies
+ - `system-patterns.md` - Identify existing architectural patterns and design decisions
+ - Include: Design patterns observed, architectural style, data flow
+ - `product-context.md` - Define product requirements, target users, and core functionality
+ - Include: User personas, core features, use cases
+ - `project-brief.md` - Establish project scope, goals, and key objectives
+ - Include: What it does, why it exists, success criteria
+ - `project-overview.md` - Provide a high-level summary of features and capabilities
+ - Include: Feature list, current state, integration points
+ - `project-vision.md` - Articulate long-term vision and strategic direction
+ - Include: Future goals, potential expansions, strategic priorities
+ - `project-style-guide.md` - Document coding standards, conventions, and style preferences
+   - Include: Naming conventions, file structure patterns, comment style
+
+### 4. Quality Validation
+
+After creating each file:
+- Verify file was created successfully
+- Check file is not empty (minimum 10 lines of content)
+- Ensure frontmatter is present and valid
+- Validate markdown formatting is correct
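
These checks can be sketched in shell — the generated `progress.md` below is a stand-in, and the thresholds mirror the rules above:

```shell
# Generated stand-in for one context file; thresholds mirror the validation rules above.
mkdir -p .claude/context
f=.claude/context/progress.md
{
  echo '---'
  echo 'created: 2024-01-01T00:00:00Z'
  echo 'last_updated: 2024-01-01T00:00:00Z'
  echo 'version: 1.0'
  echo 'author: Claude Code PM System'
  echo '---'
  seq 1 10 | sed 's/^/- placeholder content line /'
} > "$f"

test -s "$f" || echo "empty: $f"
[ "$(wc -l < "$f")" -ge 10 ] || echo "too short: $f"
head -1 "$f" | grep -q '^---$' || echo "missing frontmatter: $f"
echo "validated: $f"
```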
+
+### 5. Error Handling
+
+**Common Issues:**
+- **No write permissions:** "❌ Cannot write to .claude/context/. Check permissions."
+- **Disk space:** "❌ Insufficient disk space for context files."
+- **File creation failed:** "❌ Failed to create {file}. Error: {error}"
+
+If any file fails to create:
+- Report which files were successfully created
+- Provide option to continue with partial context
+- Never leave corrupted or incomplete files
+
+### 6. Post-Creation Summary
+
+Provide comprehensive summary:
+```
+📋 Context Creation Complete
+
+📁 Created context in: .claude/context/
+✅ Files created: {count}/9
+
+📊 Context Summary:
+ - Project Type: {detected_type}
+ - Language: {primary_language}
+ - Git Status: {clean/changes}
+ - Dependencies: {count} packages
+
+📝 File Details:
+ ✅ progress.md ({lines} lines) - Current status and recent work
+ ✅ project-structure.md ({lines} lines) - Directory organization
+ [... list all files with line counts and brief description ...]
+
+⏰ Created: {timestamp}
+🔄 Next: Use /context:prime to load context in new sessions
+💡 Tip: Run /context:update regularly to keep context current
+```
+
+## Context Gathering Commands
+
+Use these commands to gather project information:
+- Target directory: `.claude/context/` (create if needed)
+- Current git status: `git status --short`
+- Recent commits: `git log --oneline -10`
+- Project README: Read `README.md` if exists
+- Package files: Check for `package.json`, `requirements.txt`, `Cargo.toml`, `go.mod`, etc.
+- Documentation scan: `find . -type f -name '*.md' -path '*/docs/*' 2>/dev/null | head -10`
+- Test detection: `find . -type d \( -name 'test' -o -name 'tests' -o -name '__tests__' -o -name 'spec' \) 2>/dev/null | head -5`
+
+## Important Notes
+
+- **Always use real datetime** from system clock, never placeholders
+- **Ask for confirmation** before overwriting existing context
+- **Validate each file** is created successfully
+- **Provide detailed summary** of what was created
+- **Handle errors gracefully** with specific guidance
+
+$ARGUMENTS
diff --git a/codedetect/.claude/commands/context/prime.md b/codedetect/.claude/commands/context/prime.md
new file mode 100644
index 0000000..f10611a
--- /dev/null
+++ b/codedetect/.claude/commands/context/prime.md
@@ -0,0 +1,146 @@
+---
+allowed-tools: Bash, Read, LS
+---
+
+# Prime Context
+
+This command loads essential context for a new agent session by reading the project context documentation and understanding the codebase structure.
+
+## Preflight Checklist
+
+Before proceeding, complete these validation steps.
+Do not narrate preflight check progress to the user. Just run the checks and move on.
+
+### 1. Context Availability Check
+- Run: `ls -la .claude/context/ 2>/dev/null`
+- If directory doesn't exist or is empty:
+ - Tell user: "❌ No context found. Please run /context:create first to establish project context."
+ - Exit gracefully
+- Count available context files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
+- Report: "📁 Found {count} context files to load"
+
+### 2. File Integrity Check
+- For each context file found:
+ - Verify file is readable: `test -r ".claude/context/{file}" && echo "readable"`
+ - Check file has content: `test -s ".claude/context/{file}" && echo "has content"`
+ - Check for valid frontmatter (should start with `---`)
+- Report any issues:
+  - Empty files: "⚠️ {file} is empty (skipping)"
+  - Unreadable files: "⚠️ Cannot read {file} (permission issue)"
+  - Missing frontmatter: "⚠️ {file} missing frontmatter (may be corrupted)"
+
+### 3. Project State Check
+- Run: `git status --short 2>/dev/null` to see current state
+- Run: `git branch --show-current 2>/dev/null` to get current branch
+- Note if not in git repository (context may be less complete)
+
+## Instructions
+
+### 1. Context Loading Sequence
+
+Load context files in priority order for optimal understanding:
+
+**Priority 1 - Essential Context (load first):**
+1. `project-overview.md` - High-level understanding of the project
+2. `project-brief.md` - Core purpose and goals
+3. `tech-context.md` - Technical stack and dependencies
+
+**Priority 2 - Current State (load second):**
+4. `progress.md` - Current status and recent work
+5. `project-structure.md` - Directory and file organization
+
+**Priority 3 - Deep Context (load third):**
+6. `system-patterns.md` - Architecture and design patterns
+7. `product-context.md` - User needs and requirements
+8. `project-style-guide.md` - Coding conventions
+9. `project-vision.md` - Long-term direction
+
+### 2. Validation During Loading
+
+For each file loaded:
+- Check frontmatter exists and parse:
+ - `created` date should be valid
+ - `last_updated` should be ≥ created date
+ - `version` should be present
+- If frontmatter is invalid, note but continue loading content
+- Track which files loaded successfully vs failed
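+
+One way to sketch the frontmatter check (field names as defined above; the parsing is deliberately simplified):
+
+```bash
+# Sketch: pull frontmatter fields from a context file (path illustrative)
+file=".claude/context/progress.md"
+fm=$(sed -n '2,/^---$/p' "$file")   # frontmatter body after the opening ---
+created=$(echo "$fm" | awk -F': ' '/^created:/ {print $2}')
+last_updated=$(echo "$fm" | awk -F': ' '/^last_updated:/ {print $2}')
+# ISO 8601 UTC timestamps compare correctly as plain strings
+if [ "$last_updated" \< "$created" ]; then
+  echo "⚠️ $file: last_updated precedes created"
+fi
+```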
+
+### 3. Supplementary Information
+
+After loading context files:
+- Run: `git ls-files --others --exclude-standard | head -20` to see untracked files
+- Read `README.md` if it exists for additional project information
+- Check for `.env.example` or similar for environment setup needs
+
+### 4. Error Recovery
+
+**If critical files are missing:**
+- `project-overview.md` missing: Try to understand from README.md
+- `tech-context.md` missing: Analyze package.json/requirements.txt directly
+- `progress.md` missing: Check recent git commits for status
+
+**If context is incomplete:**
+- Inform user which files are missing
+- Suggest running `/context:update` to refresh context
+- Continue with partial context but note limitations
+
+### 5. Loading Summary
+
+Provide comprehensive summary after priming:
+
+```
+🧠 Context Primed Successfully
+
+📖 Loaded Context Files:
+ ✅ Essential: {count}/3 files
+ ✅ Current State: {count}/2 files
+ ✅ Deep Context: {count}/4 files
+
+🔍 Project Understanding:
+ - Name: {project_name}
+ - Type: {project_type}
+ - Language: {primary_language}
+ - Status: {current_status from progress.md}
+ - Branch: {git_branch}
+
+📊 Key Metrics:
+ - Last Updated: {most_recent_update}
+ - Context Version: {version}
+ - Files Loaded: {success_count}/{total_count}
+
+⚠️ Warnings:
+ {list any missing files or issues}
+
+🎯 Ready State:
+ ✅ Project context loaded
+ ✅ Current status understood
+ ✅ Ready for development work
+
+💡 Project Summary:
+ {2-3 sentence summary of what the project is and current state}
+```
+
+### 6. Partial Context Handling
+
+If some files fail to load:
+- Continue with available context
+- Clearly note what's missing
+- Suggest remediation:
+ - "Missing technical context - run /context:create to rebuild"
+ - "Progress file corrupted - run /context:update to refresh"
+
+### 7. Performance Optimization
+
+For large contexts:
+- Load files in parallel when possible
+- Show progress indicator: "Loading context files... {current}/{total}"
+- Skip extremely large files (>10000 lines) with warning
+- Cache parsed frontmatter for faster subsequent loads
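+
+The size guard might look like this (threshold taken from the guideline above):
+
+```bash
+# Sketch: skip oversized context files before loading
+for f in .claude/context/*.md; do
+  if [ "$(wc -l < "$f")" -gt 10000 ]; then
+    echo "⚠️ Skipping $f (>10000 lines)"
+    continue
+  fi
+  # ... load "$f" ...
+done
+```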
+
+## Important Notes
+
+- **Always validate** files before attempting to read
+- **Load in priority order** to get essential context first
+- **Handle missing files gracefully** - don't fail completely
+- **Provide clear summary** of what was loaded and project state
+- **Note any issues** that might affect development work
diff --git a/codedetect/.claude/commands/context/update.md b/codedetect/.claude/commands/context/update.md
new file mode 100644
index 0000000..f2a4cf6
--- /dev/null
+++ b/codedetect/.claude/commands/context/update.md
@@ -0,0 +1,220 @@
+---
+allowed-tools: Bash, Read, Write, LS
+---
+
+# Update Context
+
+This command updates the project context documentation in `.claude/context/` to reflect the current state of the project. Run this at the end of each development session to keep context accurate.
+
+## Required Rules
+
+**IMPORTANT:** Before executing this command, read and follow:
+- `.claude/rules/datetime.md` - For getting real current date/time
+
+## Preflight Checklist
+
+Before proceeding, complete these validation steps.
+Do not narrate preflight check progress to the user. Just run the checks and move on.
+
+### 1. Context Validation
+- Run: `ls -la .claude/context/ 2>/dev/null`
+- If directory doesn't exist or is empty:
+ - Tell user: "❌ No context to update. Please run /context:create first."
+ - Exit gracefully
+- Count existing files: `ls -1 .claude/context/*.md 2>/dev/null | wc -l`
+- Report: "📁 Found {count} context files to check for updates"
+
+### 2. Change Detection
+
+Gather information about what has changed:
+
+**Git Changes:**
+- Run: `git status --short` to see uncommitted changes
+- Run: `git log --oneline -10` to see recent commits
+- Run: `git diff --stat HEAD~5..HEAD 2>/dev/null` to see files changed recently
+
+**File Modifications:**
+- Check context file ages: `find .claude/context -name "*.md" -type f -exec ls -lt {} + | head -5`
+- Note which context files are oldest and may need updates
+
+**Dependency Changes:**
+- Node.js: `git diff HEAD~5..HEAD package.json 2>/dev/null`
+- Python: `git diff HEAD~5..HEAD requirements.txt 2>/dev/null`
+- Check if new dependencies were added or versions changed
+
+### 3. Get Current DateTime
+- Run: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+- Store for updating `last_updated` field in modified files
+
+## Instructions
+
+### 1. Systematic Change Analysis
+
+For each context file, determine if updates are needed:
+
+**Check each file systematically:**
+
+#### `progress.md` - **Always Update**
+ - Check: Recent commits, current branch, uncommitted changes
+ - Update: Latest completed work, current blockers, next steps
+ - Run: `git log --oneline -5` to get recent commit messages
+ - Include completion percentages if applicable
+
+#### `project-structure.md` - **Update if Changed**
+ - Check: `git diff --name-status HEAD~10..HEAD | grep -E '^A'` for new files
+ - Update: New directories, moved files, structural reorganization
+ - Only update if significant structural changes occurred
+
+#### `tech-context.md` - **Update if Dependencies Changed**
+ - Check: Package files for new dependencies or version changes
+ - Update: New libraries, upgraded versions, new dev tools
+ - Include security updates or breaking changes
+
+#### `system-patterns.md` - **Update if Architecture Changed**
+ - Check: New design patterns, architectural decisions
+ - Update: New patterns adopted, refactoring done
+ - Only update for significant architectural changes
+
+#### `product-context.md` - **Update if Requirements Changed**
+ - Check: New features implemented, user feedback incorporated
+ - Update: New user stories, changed requirements
+ - Include any pivot in product direction
+
+#### `project-brief.md` - **Rarely Update**
+ - Check: Only if fundamental project goals changed
+ - Update: Major scope changes, new objectives
+ - Usually remains stable
+
+#### `project-overview.md` - **Update for Major Milestones**
+ - Check: Major features completed, significant progress
+ - Update: Feature status, capability changes
+ - Update when reaching project milestones
+
+#### `project-vision.md` - **Rarely Update**
+ - Check: Strategic direction changes
+ - Update: Only for major vision shifts
+ - Usually remains stable
+
+#### `project-style-guide.md` - **Update if Conventions Changed**
+ - Check: New linting rules, style decisions
+ - Update: Convention changes, new patterns adopted
+ - Include examples of new patterns
+
+### 2. Smart Update Strategy
+
+**For each file that needs updating:**
+
+1. **Read existing file** to understand current content
+2. **Identify specific sections** that need updates
+3. **Preserve frontmatter** but update `last_updated` field:
+ ```yaml
+ ---
+ created: [preserve original]
+ last_updated: [Use REAL datetime from date command]
+ version: [increment if major update, e.g., 1.0 → 1.1]
+ author: Claude Code PM System
+ ---
+ ```
+4. **Make targeted updates** - don't rewrite entire file
+5. **Add update notes** at the bottom if significant:
+ ```markdown
+ ## Update History
+ - {date}: {summary of what changed}
+ ```
+
+### 3. Update Validation
+
+After updating each file:
+- Verify file still has valid frontmatter
+- Check file size is reasonable (not corrupted)
+- Ensure markdown formatting is preserved
+- Confirm updates accurately reflect changes
+
+### 4. Skip Optimization
+
+**Skip files that don't need updates:**
+- If no relevant changes detected, skip the file
+- Report skipped files in summary
+- Don't update timestamp if content unchanged
+- This preserves accurate "last modified" information
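+
+A content comparison is one way to sketch this (the temp-file convention is illustrative):
+
+```bash
+# Sketch: only replace the file if regenerated content differs
+if cmp -s "$file" "$file.tmp"; then
+  rm "$file.tmp"        # unchanged, keep the original timestamp
+else
+  mv "$file.tmp" "$file"
+fi
+```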
+
+### 5. Error Handling
+
+**Common Issues:**
+- **File locked:** "❌ Cannot update {file} - may be open in editor"
+- **Permission denied:** "❌ Cannot write to {file} - check permissions"
+- **Corrupted file:** "⚠️ {file} appears corrupted - skipping update"
+- **Disk space:** "❌ Insufficient disk space for updates"
+
+If update fails:
+- Report which files were successfully updated
+- Note which files failed and why
+- Preserve original files (don't leave corrupted state)
+
+### 6. Update Summary
+
+Provide detailed summary of updates:
+
+```
+🔄 Context Update Complete
+
+📊 Update Statistics:
+ - Files Scanned: {total_count}
+ - Files Updated: {updated_count}
+ - Files Skipped: {skipped_count} (no changes needed)
+ - Errors: {error_count}
+
+📝 Updated Files:
+ ✅ progress.md - Updated recent commits, current status
+ ✅ tech-context.md - Added 3 new dependencies
+ ✅ project-structure.md - Noted new /utils directory
+
+⏭️ Skipped Files (no changes):
+ - project-brief.md (last updated: 5 days ago)
+ - project-vision.md (last updated: 2 weeks ago)
+ - system-patterns.md (last updated: 3 days ago)
+
+⚠️ Issues:
+ {any warnings or errors}
+
+⏰ Last Update: {timestamp}
+🔄 Next: Run this command regularly to keep context current
+💡 Tip: Major changes? Consider running /context:create for full refresh
+```
+
+### 7. Incremental Update Tracking
+
+**Track what was updated:**
+- Note which sections of each file were modified
+- Keep changes focused and surgical
+- Don't regenerate unchanged content
+- Preserve formatting and structure
+
+### 8. Performance Optimization
+
+For large projects:
+- Process files in parallel when possible
+- Show progress: "Updating context files... {current}/{total}"
+- Skip very large files with warning
+- Use git diff to quickly identify changed areas
+
+## Context Gathering Commands
+
+Use these commands to detect changes:
+- Context directory: `.claude/context/`
+- Current git status: `git status --short`
+- Recent commits: `git log --oneline -10`
+- Changed files: `git diff --name-only HEAD~5..HEAD 2>/dev/null`
+- Branch info: `git branch --show-current`
+- Uncommitted changes: `git diff --stat`
+- New untracked files: `git ls-files --others --exclude-standard | head -10`
+- Dependency changes: Check package.json, requirements.txt, etc.
+
+## Important Notes
+
+- **Only update files with actual changes** - preserve accurate timestamps
+- **Always use real datetime** from system clock for `last_updated`
+- **Make surgical updates** - don't regenerate entire files
+- **Validate each update** - ensure files remain valid
+- **Provide detailed summary** - show what changed and what didn't
+- **Handle errors gracefully** - don't corrupt existing context
+
+$ARGUMENTS
diff --git a/codedetect/.claude/commands/pm/blocked.md b/codedetect/.claude/commands/pm/blocked.md
new file mode 100644
index 0000000..7eaccd1
--- /dev/null
+++ b/codedetect/.claude/commands/pm/blocked.md
@@ -0,0 +1,11 @@
+---
+allowed-tools: Bash
+---
+
+Run `bash .claude/scripts/pm/blocked.sh` using a sub-agent and show me the complete output.
+
+- DO NOT truncate.
+- DO NOT collapse.
+- DO NOT abbreviate.
+- Show ALL lines in full.
+- DO NOT print any other comments.
diff --git a/codedetect/.claude/commands/pm/clean.md b/codedetect/.claude/commands/pm/clean.md
new file mode 100644
index 0000000..58a88e3
--- /dev/null
+++ b/codedetect/.claude/commands/pm/clean.md
@@ -0,0 +1,102 @@
+---
+allowed-tools: Bash, Read, Write, LS
+---
+
+# Clean
+
+Clean up completed work and archive old epics.
+
+## Usage
+```
+/pm:clean [--dry-run]
+```
+
+Options:
+- `--dry-run` - Show what would be cleaned without doing it
+
+## Instructions
+
+### 1. Identify Completed Epics
+
+Find epics with:
+- `status: completed` in frontmatter
+- All tasks closed
+- Last update > 30 days ago
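+
+A rough way to sketch the scan (frontmatter format assumed from this system's conventions):
+
+```bash
+# Sketch: epics marked completed and untouched for 30+ days
+for epic in .claude/epics/*/epic.md; do
+  grep -q '^status: completed' "$epic" || continue
+  if [ -n "$(find "$epic" -mtime +30 2>/dev/null)" ]; then
+    echo "Archive candidate: $(dirname "$epic")"
+  fi
+done
+```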
+
+### 2. Identify Stale Work
+
+Find:
+- Progress files for closed issues
+- Update directories for completed work
+- Orphaned task files (epic deleted)
+- Empty directories
+
+### 3. Show Cleanup Plan
+
+```
+🧹 Cleanup Plan
+
+Completed Epics to Archive:
+ {epic_name} - Completed {days} days ago
+ {epic_name} - Completed {days} days ago
+
+Stale Progress to Remove:
+ {count} progress files for closed issues
+
+Empty Directories:
+ {list_of_empty_dirs}
+
+Space to Recover: ~{size}KB
+
+{If --dry-run}: This is a dry run. No changes made.
+{Otherwise}: Proceed with cleanup? (yes/no)
+```
+
+### 4. Execute Cleanup
+
+If user confirms:
+
+**Archive Epics:**
+```bash
+mkdir -p .claude/epics/.archived
+mv .claude/epics/{completed_epic} .claude/epics/.archived/
+```
+
+**Remove Stale Files:**
+- Delete progress files for closed issues > 30 days
+- Remove empty update directories
+- Clean up orphaned files
+
+**Create Archive Log:**
+Create `.claude/epics/.archived/archive-log.md`:
+```markdown
+# Archive Log
+
+## {current_date}
+- Archived: {epic_name} (completed {date})
+- Removed: {count} stale progress files
+- Cleaned: {count} empty directories
+```
+
+### 5. Output
+
+```
+✅ Cleanup Complete
+
+Archived:
+ {count} completed epics
+
+Removed:
+ {count} stale files
+ {count} empty directories
+
+Space recovered: {size}KB
+
+System is clean and organized.
+```
+
+## Important Notes
+
+Always offer --dry-run to preview changes.
+Never delete PRDs or incomplete work.
+Keep archive log for history.
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-close.md b/codedetect/.claude/commands/pm/epic-close.md
new file mode 100644
index 0000000..db2b181
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-close.md
@@ -0,0 +1,69 @@
+---
+allowed-tools: Bash, Read, Write, LS
+---
+
+# Epic Close
+
+Mark an epic as complete when all tasks are done.
+
+## Usage
+```
+/pm:epic-close <epic_name>
+```
+
+## Instructions
+
+### 1. Verify All Tasks Complete
+
+Check all task files in `.claude/epics/$ARGUMENTS/`:
+- Verify all have `status: closed` in frontmatter
+- If any open tasks found: "❌ Cannot close epic. Open tasks remain: {list}"
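+
+The check can be sketched with `grep -L`, which lists files lacking the match:
+
+```bash
+# Sketch: find task files not yet closed
+open_tasks=$(grep -L '^status: closed' .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null)
+if [ -n "$open_tasks" ]; then
+  echo "❌ Cannot close epic. Open tasks remain:"
+  echo "$open_tasks"
+fi
+```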
+
+### 2. Update Epic Status
+
+Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+
+Update epic.md frontmatter:
+```yaml
+status: completed
+progress: 100%
+updated: {current_datetime}
+completed: {current_datetime}
+```
+
+### 3. Update PRD Status
+
+If epic references a PRD, update its status to "complete".
+
+### 4. Close Epic on GitHub
+
+If epic has GitHub issue:
+```bash
+gh issue close {epic_issue_number} --comment "✅ Epic completed - all tasks done"
+```
+
+### 5. Archive Option
+
+Ask user: "Archive completed epic? (yes/no)"
+
+If yes:
+- Move epic directory to `.claude/epics/.archived/{epic_name}/`
+- Create archive summary with completion date
+
+### 6. Output
+
+```
+✅ Epic closed: $ARGUMENTS
+ Tasks completed: {count}
+ Duration: {days_from_created_to_completed}
+
+{If archived}: Archived to .claude/epics/.archived/
+
+Next epic: Run /pm:next to see priority work
+```
+
+## Important Notes
+
+Only close epics with all tasks complete.
+Preserve all data when archiving.
+Update related PRD status.
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-decompose.md b/codedetect/.claude/commands/pm/epic-decompose.md
new file mode 100644
index 0000000..2af2572
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-decompose.md
@@ -0,0 +1,230 @@
+---
+allowed-tools: Bash, Read, Write, LS, Task
+---
+
+# Epic Decompose
+
+Break epic into concrete, actionable tasks.
+
+## Usage
+```
+/pm:epic-decompose <epic_name>
+```
+
+## Required Rules
+
+**IMPORTANT:** Before executing this command, read and follow:
+- `.claude/rules/datetime.md` - For getting real current date/time
+
+## Preflight Checklist
+
+Before proceeding, complete these validation steps.
+Do not narrate preflight check progress to the user. Just run the checks and move on.
+
+1. **Verify epic exists:**
+ - Check if `.claude/epics/$ARGUMENTS/epic.md` exists
+ - If not found, tell user: "❌ Epic not found: $ARGUMENTS. First create it with: /pm:prd-parse $ARGUMENTS"
+ - Stop execution if epic doesn't exist
+
+2. **Check for existing tasks:**
+ - Check if any numbered task files (001.md, 002.md, etc.) already exist in `.claude/epics/$ARGUMENTS/`
+ - If tasks exist, list them and ask: "⚠️ Found {count} existing tasks. Delete and recreate all tasks? (yes/no)"
+ - Only proceed with explicit 'yes' confirmation
+ - If user says no, suggest: "View existing tasks with: /pm:epic-show $ARGUMENTS"
+
+3. **Validate epic frontmatter:**
+ - Verify epic has valid frontmatter with: name, status, created, prd
+ - If invalid, tell user: "❌ Invalid epic frontmatter. Please check: .claude/epics/$ARGUMENTS/epic.md"
+
+4. **Check epic status:**
+ - If epic status is already "completed", warn user: "⚠️ Epic is marked as completed. Are you sure you want to decompose it again?"
+
+## Instructions
+
+You are decomposing an epic into specific, actionable tasks for: **$ARGUMENTS**
+
+### 1. Read the Epic
+- Load the epic from `.claude/epics/$ARGUMENTS/epic.md`
+- Understand the technical approach and requirements
+- Review the task breakdown preview
+
+### 2. Analyze for Parallel Creation
+
+Determine if tasks can be created in parallel:
+- If tasks are mostly independent: Create in parallel using Task agents
+- If tasks have complex dependencies: Create sequentially
+- For best results: Group independent tasks for parallel creation
+
+### 3. Parallel Task Creation (When Possible)
+
+If tasks can be created in parallel, spawn sub-agents:
+
+```yaml
+Task:
+ description: "Create task files batch {X}"
+ subagent_type: "general-purpose"
+ prompt: |
+ Create task files for epic: $ARGUMENTS
+
+ Tasks to create:
+ - {list of 3-4 tasks for this batch}
+
+ For each task:
+ 1. Create file: .claude/epics/$ARGUMENTS/{number}.md
+ 2. Use exact format with frontmatter and all sections
+ 3. Follow task breakdown from epic
+ 4. Set parallel/depends_on fields appropriately
+ 5. Number sequentially (001.md, 002.md, etc.)
+
+ Return: List of files created
+```
+
+### 4. Task File Format with Frontmatter
+
+For each task, create a file with this exact structure:
+
+```markdown
+---
+name: [Task Title]
+status: open
+created: [Current ISO date/time]
+updated: [Current ISO date/time]
+github: [Will be updated when synced to GitHub]
+depends_on: [] # List of task numbers this depends on, e.g., [001, 002]
+parallel: true # Can this run in parallel with other tasks?
+conflicts_with: [] # Tasks that modify same files, e.g., [003, 004]
+---
+
+# Task: [Task Title]
+
+## Description
+Clear, concise description of what needs to be done
+
+## Acceptance Criteria
+- [ ] Specific criterion 1
+- [ ] Specific criterion 2
+- [ ] Specific criterion 3
+
+## Technical Details
+- Implementation approach
+- Key considerations
+- Code locations/files affected
+
+## Dependencies
+- [ ] Task/Issue dependencies
+- [ ] External dependencies
+
+## Effort Estimate
+- Size: XS/S/M/L/XL
+- Hours: estimated hours
+- Parallel: true/false (can run in parallel with other tasks)
+
+## Definition of Done
+- [ ] Code implemented
+- [ ] Tests written and passing
+- [ ] Documentation updated
+- [ ] Code reviewed
+- [ ] Deployed to staging
+```
+
+### 5. Task Naming Convention
+
+Save tasks as: `.claude/epics/$ARGUMENTS/{task_number}.md`
+- Use sequential numbering: 001.md, 002.md, etc.
+- Keep task titles short but descriptive
+
+### 6. Frontmatter Guidelines
+
+- **name**: Use a descriptive task title (without "Task:" prefix)
+- **status**: Always start with "open" for new tasks
+- **created**: Get REAL current datetime by running: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+- **updated**: Use the same real datetime as created for new tasks
+- **github**: Leave placeholder text - will be updated during sync
+- **depends_on**: List task numbers that must complete before this can start (e.g., [001, 002])
+- **parallel**: Set to true if this can run alongside other tasks without conflicts
+- **conflicts_with**: List task numbers that modify the same files (helps coordination)
+
+### 7. Task Types to Consider
+
+- **Setup tasks**: Environment, dependencies, scaffolding
+- **Data tasks**: Models, schemas, migrations
+- **API tasks**: Endpoints, services, integration
+- **UI tasks**: Components, pages, styling
+- **Testing tasks**: Unit tests, integration tests
+- **Documentation tasks**: README, API docs
+- **Deployment tasks**: CI/CD, infrastructure
+
+### 8. Parallelization
+
+Mark tasks with `parallel: true` if they can be worked on simultaneously without conflicts.
+
+### 9. Execution Strategy
+
+Choose based on task count and complexity:
+
+**Small Epic (< 5 tasks)**: Create sequentially for simplicity
+
+**Medium Epic (5-10 tasks)**:
+- Batch into 2-3 groups
+- Spawn agents for each batch
+- Consolidate results
+
+**Large Epic (> 10 tasks)**:
+- Analyze dependencies first
+- Group independent tasks
+- Launch parallel agents (max 5 concurrent)
+- Create dependent tasks after prerequisites
+
+Example for parallel execution:
+```markdown
+Spawning 3 agents for parallel task creation:
+- Agent 1: Creating tasks 001-003 (Database layer)
+- Agent 2: Creating tasks 004-006 (API layer)
+- Agent 3: Creating tasks 007-009 (UI layer)
+```
+
+### 10. Task Dependency Validation
+
+When creating tasks with dependencies:
+- Ensure referenced dependencies exist (e.g., if Task 003 depends on Task 002, verify 002 was created)
+- Check for circular dependencies (Task A → Task B → Task A)
+- If dependency issues found, warn but continue: "⚠️ Task dependency warning: {details}"
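+
+Cycle detection can be sketched with `tsort`, which fails on cyclic input (frontmatter format assumed from this epic's task files):
+
+```bash
+# Sketch: feed "dependency task" pairs to tsort; a loop means a circular dependency
+for task in .claude/epics/$ARGUMENTS/[0-9]*.md; do
+  id=$(basename "$task" .md)
+  grep '^depends_on:' "$task" | grep -oE '[0-9]{3}' | while read -r dep; do
+    echo "$dep $id"
+  done
+done | tsort >/dev/null 2>&1 || echo "⚠️ Task dependency warning: circular dependency detected"
+```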
+
+### 11. Update Epic with Task Summary
+
+After creating all tasks, update the epic file by adding this section:
+```markdown
+## Tasks Created
+- [ ] 001.md - {Task Title} (parallel: true/false)
+- [ ] 002.md - {Task Title} (parallel: true/false)
+- etc.
+
+Total tasks: {count}
+Parallel tasks: {parallel_count}
+Sequential tasks: {sequential_count}
+Estimated total effort: {sum of hours}
+```
+
+Also update the epic's frontmatter progress if needed (still 0% until tasks actually start).
+
+### 12. Quality Validation
+
+Before finalizing tasks, verify:
+- [ ] All tasks have clear acceptance criteria
+- [ ] Task sizes are reasonable (1-3 days each)
+- [ ] Dependencies are logical and achievable
+- [ ] Parallel tasks don't conflict with each other
+- [ ] Combined tasks cover all epic requirements
+
+### 13. Post-Decomposition
+
+After successfully creating tasks:
+1. Confirm: "✅ Created {count} tasks for epic: $ARGUMENTS"
+2. Show summary:
+ - Total tasks created
+ - Parallel vs sequential breakdown
+ - Total estimated effort
+3. Suggest next step: "Ready to sync to GitHub? Run: /pm:epic-sync $ARGUMENTS"
+
+## Error Recovery
+
+If any step fails:
+- If task creation partially completes, list which tasks were created
+- Provide option to clean up partial tasks
+- Never leave the epic in an inconsistent state
+
+Aim for tasks that can be completed in 1-3 days each. Break down larger tasks into smaller, manageable pieces for the "$ARGUMENTS" epic.
diff --git a/codedetect/.claude/commands/pm/epic-edit.md b/codedetect/.claude/commands/pm/epic-edit.md
new file mode 100644
index 0000000..850dd7d
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-edit.md
@@ -0,0 +1,66 @@
+---
+allowed-tools: Read, Write, LS
+---
+
+# Epic Edit
+
+Edit epic details after creation.
+
+## Usage
+```
+/pm:epic-edit <epic_name>
+```
+
+## Instructions
+
+### 1. Read Current Epic
+
+Read `.claude/epics/$ARGUMENTS/epic.md`:
+- Parse frontmatter
+- Read content sections
+
+### 2. Interactive Edit
+
+Ask user what to edit:
+- Name/Title
+- Description/Overview
+- Architecture decisions
+- Technical approach
+- Dependencies
+- Success criteria
+
+### 3. Update Epic File
+
+Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+
+Update epic.md:
+- Preserve all frontmatter except `updated`
+- Apply user's edits to content
+- Update `updated` field with current datetime
+
+### 4. Option to Update GitHub
+
+If epic has GitHub URL in frontmatter:
+Ask: "Update GitHub issue? (yes/no)"
+
+If yes:
+```bash
+gh issue edit {issue_number} --body-file .claude/epics/$ARGUMENTS/epic.md
+```
+
+### 5. Output
+
+```
+✅ Updated epic: $ARGUMENTS
+ Changes made to: {sections_edited}
+
+{If GitHub updated}: GitHub issue updated ✅
+
+View epic: /pm:epic-show $ARGUMENTS
+```
+
+## Important Notes
+
+Preserve frontmatter history (created, github URL, etc.).
+Don't change task files when editing epic.
+Follow `/rules/frontmatter-operations.md`.
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-list.md b/codedetect/.claude/commands/pm/epic-list.md
new file mode 100644
index 0000000..502423d
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-list.md
@@ -0,0 +1,13 @@
+---
+allowed-tools: Bash
+---
+
+Run `bash .claude/scripts/pm/epic-list.sh` using a sub-agent and show me the complete output.
+
+- You MUST display the complete output.
+- DO NOT truncate.
+- DO NOT collapse.
+- DO NOT abbreviate.
+- Show ALL lines in full.
+- DO NOT print any other comments.
+
diff --git a/codedetect/.claude/commands/pm/epic-merge.md b/codedetect/.claude/commands/pm/epic-merge.md
new file mode 100644
index 0000000..17ec85c
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-merge.md
@@ -0,0 +1,211 @@
+---
+allowed-tools: Bash, Read, Write
+---
+
+# Epic Merge
+
+Merge completed epic from worktree back to main branch.
+
+## Usage
+```
+/pm:epic-merge <epic_name>
+```
+
+## Quick Check
+
+1. **Verify worktree exists:**
+ ```bash
+ git worktree list | grep "epic-$ARGUMENTS" || echo "❌ No worktree for epic: $ARGUMENTS"
+ ```
+
+2. **Check for active agents:**
+ Read `.claude/epics/$ARGUMENTS/execution-status.md`
+ If active agents exist: "⚠️ Active agents detected. Stop them first with: /pm:epic-stop $ARGUMENTS"
+
+## Instructions
+
+### 1. Pre-Merge Validation
+
+Navigate to worktree and check status:
+```bash
+cd ../epic-$ARGUMENTS
+
+# Check for uncommitted changes
+if [[ $(git status --porcelain) ]]; then
+ echo "⚠️ Uncommitted changes in worktree:"
+ git status --short
+ echo "Commit or stash changes before merging"
+ exit 1
+fi
+
+# Check branch status
+git fetch origin
+git status -sb
+```
+
+### 2. Run Tests (Optional but Recommended)
+
+```bash
+# Look for test commands
+if [ -f package.json ]; then
+ npm test || echo "⚠️ Tests failed. Continue anyway? (yes/no)"
+elif [ -f Makefile ]; then
+ make test || echo "⚠️ Tests failed. Continue anyway? (yes/no)"
+fi
+```
+
+### 3. Update Epic Documentation
+
+Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+
+Update `.claude/epics/$ARGUMENTS/epic.md`:
+- Set status to "completed"
+- Update completion date
+- Add final summary
+
+### 4. Attempt Merge
+
+```bash
+# Return to main repository
+cd {main-repo-path}
+
+# Ensure main is up to date
+git checkout main
+git pull origin main
+
+# Attempt merge
+echo "Merging epic/$ARGUMENTS to main..."
+git merge epic/$ARGUMENTS --no-ff -m "Merge epic: $ARGUMENTS
+
+Completed features:
+$(cd .claude/epics/$ARGUMENTS && ls *.md | grep -E '^[0-9]+' | while read f; do
+ echo "- $(grep '^name:' $f | cut -d: -f2)"
+done)
+
+Closes epic #$(grep 'github:' .claude/epics/$ARGUMENTS/epic.md | grep -oE '#[0-9]+')"
+```
+
+### 5. Handle Merge Conflicts
+
+If merge fails with conflicts:
+```bash
+# Check conflict status
+git status
+
+echo "
+❌ Merge conflicts detected!
+
+Conflicts in:
+$(git diff --name-only --diff-filter=U)
+
+Options:
+1. Resolve manually:
+ - Edit conflicted files
+ - git add {files}
+ - git commit
+
+2. Abort merge:
+ git merge --abort
+
+3. Get help:
+ /pm:epic-resolve $ARGUMENTS
+
+Worktree preserved at: ../epic-$ARGUMENTS
+"
+exit 1
+```
+
+### 6. Post-Merge Cleanup
+
+If merge succeeds:
+```bash
+# Push to remote
+git push origin main
+
+# Clean up worktree
+git worktree remove ../epic-$ARGUMENTS
+echo "✅ Worktree removed: ../epic-$ARGUMENTS"
+
+# Delete branch
+git branch -d epic/$ARGUMENTS
+git push origin --delete epic/$ARGUMENTS 2>/dev/null || true
+
+# Archive epic locally
+mkdir -p .claude/epics/archived/
+mv .claude/epics/$ARGUMENTS .claude/epics/archived/
+echo "✅ Epic archived: .claude/epics/archived/$ARGUMENTS"
+```
+
+### 7. Update GitHub Issues
+
+Close related issues:
+```bash
+# Get issue numbers from epic
+epic_issue=$(grep 'github:' .claude/epics/archived/$ARGUMENTS/epic.md | grep -oE '[0-9]+$')
+
+# Close epic issue
+gh issue close $epic_issue -c "Epic completed and merged to main"
+
+# Close task issues
+for task_file in .claude/epics/archived/$ARGUMENTS/[0-9]*.md; do
+ issue_num=$(grep 'github:' $task_file | grep -oE '[0-9]+$')
+ if [ ! -z "$issue_num" ]; then
+ gh issue close $issue_num -c "Completed in epic merge"
+ fi
+done
+```
+
+### 8. Final Output
+
+```
+✅ Epic Merged Successfully: $ARGUMENTS
+
+Summary:
+ Branch: epic/$ARGUMENTS → main
+ Commits merged: {count}
+ Files changed: {count}
+ Issues closed: {count}
+
+Cleanup completed:
+ ✓ Worktree removed
+ ✓ Branch deleted
+ ✓ Epic archived
+ ✓ GitHub issues closed
+
+Next steps:
+ - Deploy changes if needed
+ - Start new epic: /pm:prd-new {feature}
+ - View completed work: git log --oneline -20
+```
+
+## Conflict Resolution Help
+
+If conflicts need resolution:
+```
+The epic branch has conflicts with main.
+
+This typically happens when:
+- Main has changed since epic started
+- Multiple epics modified same files
+- Dependencies were updated
+
+To resolve:
+1. Open conflicted files
+2. Look for <<<<<<< markers
+3. Choose correct version or combine
+4. Remove conflict markers
+5. git add {resolved files}
+6. git commit
+7. git push
+
+Or abort and try later:
+ git merge --abort
+```
+
+## Important Notes
+
+- Always check for uncommitted changes first
+- Run tests before merging when possible
+- Use --no-ff to preserve epic history
+- Archive epic data instead of deleting
+- Close GitHub issues to maintain sync
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-oneshot.md b/codedetect/.claude/commands/pm/epic-oneshot.md
new file mode 100644
index 0000000..80f2e06
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-oneshot.md
@@ -0,0 +1,89 @@
+---
+allowed-tools: Read, LS
+---
+
+# Epic Oneshot
+
+Decompose epic into tasks and sync to GitHub in one operation.
+
+## Usage
+```
+/pm:epic-oneshot <epic_name>
+```
+
+## Instructions
+
+### 1. Validate Prerequisites
+
+Check that epic exists and hasn't been processed:
+```bash
+# Epic must exist
+test -f .claude/epics/$ARGUMENTS/epic.md || { echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"; exit 1; }
+
+# Check for existing tasks
+if ls .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | grep -q .; then
+ echo "⚠️ Tasks already exist. This will create duplicates."
+ echo "Delete existing tasks or use /pm:epic-sync instead."
+ exit 1
+fi
+
+# Check if already synced
+if grep -q "github:" .claude/epics/$ARGUMENTS/epic.md; then
+ echo "⚠️ Epic already synced to GitHub."
+ echo "Use /pm:epic-sync to update."
+ exit 1
+fi
+```
+
+### 2. Execute Decompose
+
+Simply run the decompose command:
+```
+Running: /pm:epic-decompose $ARGUMENTS
+```
+
+This will:
+- Read the epic
+- Create task files (using parallel agents if appropriate)
+- Update epic with task summary
+
+### 3. Execute Sync
+
+Immediately follow with sync:
+```
+Running: /pm:epic-sync $ARGUMENTS
+```
+
+This will:
+- Create epic issue on GitHub
+- Create sub-issues (using parallel agents if appropriate)
+- Rename task files to issue IDs
+- Create worktree
+
+### 4. Output
+
+```
+🚀 Epic Oneshot Complete: $ARGUMENTS
+
+Step 1: Decomposition ✓
+ - Tasks created: {count}
+
+Step 2: GitHub Sync ✓
+ - Epic: #{number}
+ - Sub-issues created: {count}
+ - Worktree: ../epic-$ARGUMENTS
+
+Ready for development!
+ Start work: /pm:epic-start $ARGUMENTS
+ Or single task: /pm:issue-start {task_number}
+```
+
+## Important Notes
+
+This is simply a convenience wrapper that runs:
+1. `/pm:epic-decompose`
+2. `/pm:epic-sync`
+
+Both commands handle their own error checking, parallel execution, and validation. This command just orchestrates them in sequence.
+
+Use this when you're confident the epic is ready and want to go from epic to GitHub issues in one step.
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-refresh.md b/codedetect/.claude/commands/pm/epic-refresh.md
new file mode 100644
index 0000000..8f1e916
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-refresh.md
@@ -0,0 +1,102 @@
+---
+allowed-tools: Read, Write, LS
+---
+
+# Epic Refresh
+
+Update epic progress based on task states.
+
+## Usage
+```
+/pm:epic-refresh <epic_name>
+```
+
+## Instructions
+
+### 1. Count Task Status
+
+Scan all task files in `.claude/epics/$ARGUMENTS/`:
+- Count total tasks
+- Count tasks with `status: closed`
+- Count tasks with `status: open`
+- Count tasks with work in progress
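A minimal sketch of this counting pass, assuming each task file carries a frontmatter line like `status: closed` (the epic directory name below is a hypothetical example):

```bash
# Count task files by status (sketch; field names follow this PM system's frontmatter convention)
epic_dir=".claude/epics/demo-epic"   # hypothetical epic directory
total=0; closed=0; open=0
for f in "$epic_dir"/[0-9]*.md; do
  [ -f "$f" ] || continue
  total=$((total + 1))
  status=$(grep -m1 '^status:' "$f" | sed 's/^status: *//')
  if [ "$status" = "closed" ]; then
    closed=$((closed + 1))
  else
    open=$((open + 1))
  fi
done
echo "Total: $total  Closed: $closed  Open: $open"
```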
+
+### 2. Calculate Progress
+
+```
+progress = (closed_tasks / total_tasks) * 100
+```
+
+Round to nearest integer.
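The formula can be evaluated with integer arithmetic only; the `+ total/2` term implements round-to-nearest (a sketch, with example counts):

```bash
# Rounded integer percentage without bc/awk
closed_tasks=7
total_tasks=9
if [ "$total_tasks" -gt 0 ]; then
  progress=$(( (closed_tasks * 100 + total_tasks / 2) / total_tasks ))
else
  progress=0
fi
echo "${progress}%"   # 7/9 = 77.8% -> 78%
```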
+
+### 3. Update GitHub Task List
+
+If epic has GitHub issue, sync task checkboxes:
+
+```bash
+# Get epic issue number from epic.md frontmatter
+epic_issue=$(grep '^github:' .claude/epics/$ARGUMENTS/epic.md | grep -oE '[0-9]+$')
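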
+
+if [ ! -z "$epic_issue" ]; then
+ # Get current epic body
+ gh issue view $epic_issue --json body -q .body > /tmp/epic-body.md
+
+ # For each task, check its status and update checkbox
+ for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
+ task_issue=$(grep '^github:' $task_file | grep -oE '[0-9]+$')
+ task_status=$(grep -m1 '^status:' $task_file | cut -d: -f2 | tr -d ' ')
+
+ if [ "$task_status" = "closed" ]; then
+ # Mark as checked
+ sed -i "s/- \[ \] #$task_issue/- [x] #$task_issue/" /tmp/epic-body.md
+ else
+ # Ensure unchecked (in case manually checked)
+ sed -i "s/- \[x\] #$task_issue/- [ ] #$task_issue/" /tmp/epic-body.md
+ fi
+ done
+
+ # Update epic issue
+ gh issue edit $epic_issue --body-file /tmp/epic-body.md
+fi
+```
+
+### 4. Determine Epic Status
+
+- If progress = 0% and no work started: `backlog`
+- If progress > 0% and < 100%: `in-progress`
+- If progress = 100%: `completed`
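These rules map directly onto a small conditional (a sketch; the "no work started" qualifier on the backlog case is omitted here for brevity):

```bash
# Map the computed progress percentage to an epic status
progress=45   # example value from step 2
if [ "$progress" -eq 0 ]; then
  epic_status="backlog"
elif [ "$progress" -lt 100 ]; then
  epic_status="in-progress"
else
  epic_status="completed"
fi
echo "$epic_status"   # in-progress
```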
+
+### 5. Update Epic
+
+Get current datetime: `date -u +"%Y-%m-%dT%H:%M:%SZ"`
+
+Update epic.md frontmatter:
+```yaml
+status: {calculated_status}
+progress: {calculated_progress}%
+updated: {current_datetime}
+```
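One portable way to rewrite those three fields in place is an awk pass (a sketch against a stand-in file; assumes the keys already exist in the frontmatter):

```bash
# Stand-in for .claude/epics/<name>/epic.md with the fields to update
epic_file=$(mktemp)
printf 'status: backlog\nprogress: 0%%\nupdated: 2024-01-01T00:00:00Z\n' > "$epic_file"

new_status="in-progress"
new_progress="45%"
new_updated=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

awk -v s="$new_status" -v p="$new_progress" -v u="$new_updated" '
  /^status:/   { print "status: " s; next }
  /^progress:/ { print "progress: " p; next }
  /^updated:/  { print "updated: " u; next }
  { print }    # preserve all other frontmatter fields
' "$epic_file" > "$epic_file.tmp" && mv "$epic_file.tmp" "$epic_file"

cat "$epic_file"
```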
+
+### 6. Output
+
+```
+🔄 Epic refreshed: $ARGUMENTS
+
+Tasks:
+ Closed: {closed_count}
+ Open: {open_count}
+ Total: {total_count}
+
+Progress: {old_progress}% → {new_progress}%
+Status: {old_status} → {new_status}
+GitHub: Task list updated ✓
+
+{If complete}: Run /pm:epic-close $ARGUMENTS to close epic
+{If in progress}: Run /pm:next to see priority tasks
+```
+
+## Important Notes
+
+Run this after manual task edits or a GitHub sync.
+Modify only the epic's status, never the task files.
+Preserve all other frontmatter fields.
\ No newline at end of file
diff --git a/codedetect/.claude/commands/pm/epic-show.md b/codedetect/.claude/commands/pm/epic-show.md
new file mode 100644
index 0000000..b2761f1
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-show.md
@@ -0,0 +1,11 @@
+---
+allowed-tools: Bash
+---
+
+Run `bash .claude/scripts/pm/epic-show.sh $ARGUMENTS` using a sub-agent and show me the complete output.
+
+- DO NOT truncate.
+- DO NOT collapse.
+- DO NOT abbreviate.
+- Show ALL lines in full.
+- DO NOT print any other comments.
diff --git a/codedetect/.claude/commands/pm/epic-start-worktree.md b/codedetect/.claude/commands/pm/epic-start-worktree.md
new file mode 100644
index 0000000..29d6cb5
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-start-worktree.md
@@ -0,0 +1,221 @@
+---
+allowed-tools: Bash, Read, Write, LS, Task
+---
+
+# Epic Start
+
+Launch parallel agents to work on epic tasks in a shared worktree.
+
+## Usage
+```
+/pm:epic-start <epic_name>
+```
+
+## Quick Check
+
+1. **Verify epic exists:**
+ ```bash
+ test -f .claude/epics/$ARGUMENTS/epic.md || { echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"; exit 1; }
+ ```
+
+2. **Check GitHub sync:**
+ Look for `github:` field in epic frontmatter.
+ If missing: "❌ Epic not synced. Run: /pm:epic-sync $ARGUMENTS first"
+
+3. **Check for worktree:**
+ ```bash
+ git worktree list | grep "epic-$ARGUMENTS"
+ ```
+
+## Instructions
+
+### 1. Create or Enter Worktree
+
+Follow `/rules/worktree-operations.md`:
+
+```bash
+# If worktree doesn't exist, create it
+if ! git worktree list | grep -q "epic-$ARGUMENTS"; then
+ git checkout main
+ git pull origin main
+ git worktree add ../epic-$ARGUMENTS -b epic/$ARGUMENTS
+ echo "✅ Created worktree: ../epic-$ARGUMENTS"
+else
+ echo "✅ Using existing worktree: ../epic-$ARGUMENTS"
+fi
+```
+
+### 2. Identify Ready Issues
+
+Read all task files in `.claude/epics/$ARGUMENTS/`:
+- Parse frontmatter for `status`, `depends_on`, `parallel` fields
+- Check GitHub issue status if needed
+- Build dependency graph
+
+Categorize issues:
+- **Ready**: No unmet dependencies, not started
+- **Blocked**: Has unmet dependencies
+- **In Progress**: Already being worked on
+- **Complete**: Finished
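The dependency check behind the Ready/Blocked split can be sketched as a small helper (illustrative only; assumes `depends_on:` lists issue numbers and a dependency counts as met when its task file is `status: closed`):

```bash
epic_dir=".claude/epics/demo-epic"   # hypothetical epic directory

# Return 0 if every dependency of the given task file is closed
is_ready() {
  deps=$(grep -m1 '^depends_on:' "$1" 2>/dev/null | grep -oE '[0-9]+')
  for d in $deps; do
    grep -q '^status: closed' "$epic_dir/$d.md" 2>/dev/null || return 1
  done
  return 0
}

# A task with no depends_on line (or no unmet deps) counts as ready
is_ready "$epic_dir/9999.md" && echo "ready" || echo "blocked"
```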
+
+### 3. Analyze Ready Issues
+
+For each ready issue without analysis:
+```bash
+# Check for analysis
+if ! test -f .claude/epics/$ARGUMENTS/{issue}-analysis.md; then
+ echo "Analyzing issue #{issue}..."
+ # Run analysis (inline or via Task tool)
+fi
+```
+
+### 4. Launch Parallel Agents
+
+For each ready issue with analysis:
+
+```markdown
+## Starting Issue #{issue}: {title}
+
+Reading analysis...
+Found {count} parallel streams:
+ - Stream A: {description} (Agent-{id})
+ - Stream B: {description} (Agent-{id})
+
+Launching agents in worktree: ../epic-$ARGUMENTS/
+```
+
+Use Task tool to launch each stream:
+```yaml
+Task:
+ description: "Issue #{issue} Stream {X}"
+ subagent_type: "{agent_type}"
+ prompt: |
+ Working in worktree: ../epic-$ARGUMENTS/
+ Issue: #{issue} - {title}
+ Stream: {stream_name}
+
+ Your scope:
+ - Files: {file_patterns}
+ - Work: {stream_description}
+
+ Read full requirements from:
+ - .claude/epics/$ARGUMENTS/{task_file}
+ - .claude/epics/$ARGUMENTS/{issue}-analysis.md
+
+ Follow coordination rules in /rules/agent-coordination.md
+
+ Commit frequently with message format:
+ "Issue #{issue}: {specific change}"
+
+ Update progress in:
+ .claude/epics/$ARGUMENTS/updates/{issue}/stream-{X}.md
+```
+
+### 5. Track Active Agents
+
+Create/update `.claude/epics/$ARGUMENTS/execution-status.md`:
+
+```markdown
+---
+started: {datetime}
+worktree: ../epic-$ARGUMENTS
+branch: epic/$ARGUMENTS
+---
+
+# Execution Status
+
+## Active Agents
+- Agent-1: Issue #1234 Stream A (Database) - Started {time}
+- Agent-2: Issue #1234 Stream B (API) - Started {time}
+- Agent-3: Issue #1235 Stream A (UI) - Started {time}
+
+## Queued Issues
+- Issue #1236 - Waiting for #1234
+- Issue #1237 - Waiting for #1235
+
+## Completed
+- {None yet}
+```
+
+### 6. Monitor and Coordinate
+
+Set up monitoring:
+```bash
+echo "
+Agents launched successfully!
+
+Monitor progress:
+ /pm:epic-status $ARGUMENTS
+
+View worktree changes:
+ cd ../epic-$ARGUMENTS && git status
+
+Stop all agents:
+ /pm:epic-stop $ARGUMENTS
+
+Merge when complete:
+ /pm:epic-merge $ARGUMENTS
+"
+```
+
+### 7. Handle Dependencies
+
+As agents complete streams:
+- Check if any blocked issues are now ready
+- Launch new agents for newly-ready work
+- Update execution-status.md
+
+## Output Format
+
+```
+🚀 Epic Execution Started: $ARGUMENTS
+
+Worktree: ../epic-$ARGUMENTS
+Branch: epic/$ARGUMENTS
+
+Launching {total} agents across {issue_count} issues:
+
+Issue #1234: Database Schema
+ ├─ Stream A: Schema creation (Agent-1) ✓ Started
+ └─ Stream B: Migrations (Agent-2) ✓ Started
+
+Issue #1235: API Endpoints
+ ├─ Stream A: User endpoints (Agent-3) ✓ Started
+ ├─ Stream B: Post endpoints (Agent-4) ✓ Started
+ └─ Stream C: Tests (Agent-5) ⏸ Waiting for A & B
+
+Blocked Issues (2):
+ - #1236: UI Components (depends on #1234)
+ - #1237: Integration (depends on #1235, #1236)
+
+Monitor with: /pm:epic-status $ARGUMENTS
+```
+
+## Error Handling
+
+If agent launch fails:
+```
+❌ Failed to start Agent-{id}
+ Issue: #{issue}
+ Stream: {stream}
+ Error: {reason}
+
+Continue with other agents? (yes/no)
+```
+
+If worktree creation fails:
+```
+❌ Cannot create worktree
+ {git error message}
+
+Try: git worktree prune
+Or: Check existing worktrees with: git worktree list
+```
+
+## Important Notes
+
+- Follow `/rules/worktree-operations.md` for git operations
+- Follow `/rules/agent-coordination.md` for parallel work
+- Agents work in the SAME worktree (not separate ones)
+- Maximum parallel agents should be reasonable (e.g., 5-10)
+- Monitor system resources if launching many agents
diff --git a/codedetect/.claude/commands/pm/epic-start.md b/codedetect/.claude/commands/pm/epic-start.md
new file mode 100644
index 0000000..51628a4
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-start.md
@@ -0,0 +1,247 @@
+---
+allowed-tools: Bash, Read, Write, LS, Task
+---
+
+# Epic Start
+
+Launch parallel agents to work on epic tasks in a shared branch.
+
+## Usage
+```
+/pm:epic-start <epic_name>
+```
+
+## Quick Check
+
+1. **Verify epic exists:**
+ ```bash
+ test -f .claude/epics/$ARGUMENTS/epic.md || { echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"; exit 1; }
+ ```
+
+2. **Check GitHub sync:**
+ Look for `github:` field in epic frontmatter.
+ If missing: "❌ Epic not synced. Run: /pm:epic-sync $ARGUMENTS first"
+
+3. **Check for branch:**
+ ```bash
+ git branch -a | grep "epic/$ARGUMENTS"
+ ```
+
+4. **Check for uncommitted changes:**
+ ```bash
+ git status --porcelain
+ ```
+ If output is not empty: "❌ You have uncommitted changes. Please commit or stash them before starting an epic"
+
+## Instructions
+
+### 1. Create or Enter Branch
+
+Follow `/rules/branch-operations.md`:
+
+```bash
+# Check for uncommitted changes
+if [ -n "$(git status --porcelain)" ]; then
+ echo "❌ You have uncommitted changes. Please commit or stash them before starting an epic."
+ exit 1
+fi
+
+# If branch doesn't exist, create it
+if ! git branch -a | grep -q "epic/$ARGUMENTS"; then
+ git checkout main
+ git pull origin main
+ git checkout -b epic/$ARGUMENTS
+ git push -u origin epic/$ARGUMENTS
+ echo "✅ Created branch: epic/$ARGUMENTS"
+else
+ git checkout epic/$ARGUMENTS
+ git pull origin epic/$ARGUMENTS
+ echo "✅ Using existing branch: epic/$ARGUMENTS"
+fi
+```
+
+### 2. Identify Ready Issues
+
+Read all task files in `.claude/epics/$ARGUMENTS/`:
+- Parse frontmatter for `status`, `depends_on`, `parallel` fields
+- Check GitHub issue status if needed
+- Build dependency graph
+
+Categorize issues:
+- **Ready**: No unmet dependencies, not started
+- **Blocked**: Has unmet dependencies
+- **In Progress**: Already being worked on
+- **Complete**: Finished
+
+### 3. Analyze Ready Issues
+
+For each ready issue without analysis:
+```bash
+# Check for analysis
+if ! test -f .claude/epics/$ARGUMENTS/{issue}-analysis.md; then
+ echo "Analyzing issue #{issue}..."
+ # Run analysis (inline or via Task tool)
+fi
+```
+
+### 4. Launch Parallel Agents
+
+For each ready issue with analysis:
+
+```markdown
+## Starting Issue #{issue}: {title}
+
+Reading analysis...
+Found {count} parallel streams:
+ - Stream A: {description} (Agent-{id})
+ - Stream B: {description} (Agent-{id})
+
+Launching agents in branch: epic/$ARGUMENTS
+```
+
+Use Task tool to launch each stream:
+```yaml
+Task:
+ description: "Issue #{issue} Stream {X}"
+ subagent_type: "{agent_type}"
+ prompt: |
+ Working in branch: epic/$ARGUMENTS
+ Issue: #{issue} - {title}
+ Stream: {stream_name}
+
+ Your scope:
+ - Files: {file_patterns}
+ - Work: {stream_description}
+
+ Read full requirements from:
+ - .claude/epics/$ARGUMENTS/{task_file}
+ - .claude/epics/$ARGUMENTS/{issue}-analysis.md
+
+ Follow coordination rules in /rules/agent-coordination.md
+
+ Commit frequently with message format:
+ "Issue #{issue}: {specific change}"
+
+ Update progress in:
+ .claude/epics/$ARGUMENTS/updates/{issue}/stream-{X}.md
+```
+
+### 5. Track Active Agents
+
+Create/update `.claude/epics/$ARGUMENTS/execution-status.md`:
+
+```markdown
+---
+started: {datetime}
+branch: epic/$ARGUMENTS
+---
+
+# Execution Status
+
+## Active Agents
+- Agent-1: Issue #1234 Stream A (Database) - Started {time}
+- Agent-2: Issue #1234 Stream B (API) - Started {time}
+- Agent-3: Issue #1235 Stream A (UI) - Started {time}
+
+## Queued Issues
+- Issue #1236 - Waiting for #1234
+- Issue #1237 - Waiting for #1235
+
+## Completed
+- {None yet}
+```
+
+### 6. Monitor and Coordinate
+
+Set up monitoring:
+```bash
+echo "
+Agents launched successfully!
+
+Monitor progress:
+ /pm:epic-status $ARGUMENTS
+
+View branch changes:
+ git status
+
+Stop all agents:
+ /pm:epic-stop $ARGUMENTS
+
+Merge when complete:
+ /pm:epic-merge $ARGUMENTS
+"
+```
+
+### 7. Handle Dependencies
+
+As agents complete streams:
+- Check if any blocked issues are now ready
+- Launch new agents for newly-ready work
+- Update execution-status.md
+
+## Output Format
+
+```
+🚀 Epic Execution Started: $ARGUMENTS
+
+Branch: epic/$ARGUMENTS
+
+Launching {total} agents across {issue_count} issues:
+
+Issue #1234: Database Schema
+ ├─ Stream A: Schema creation (Agent-1) ✓ Started
+ └─ Stream B: Migrations (Agent-2) ✓ Started
+
+Issue #1235: API Endpoints
+ ├─ Stream A: User endpoints (Agent-3) ✓ Started
+ ├─ Stream B: Post endpoints (Agent-4) ✓ Started
+ └─ Stream C: Tests (Agent-5) ⏸ Waiting for A & B
+
+Blocked Issues (2):
+ - #1236: UI Components (depends on #1234)
+ - #1237: Integration (depends on #1235, #1236)
+
+Monitor with: /pm:epic-status $ARGUMENTS
+```
+
+## Error Handling
+
+If agent launch fails:
+```
+❌ Failed to start Agent-{id}
+ Issue: #{issue}
+ Stream: {stream}
+ Error: {reason}
+
+Continue with other agents? (yes/no)
+```
+
+If uncommitted changes are found:
+```
+❌ You have uncommitted changes. Please commit or stash them before starting an epic.
+
+To commit changes:
+ git add .
+ git commit -m "Your commit message"
+
+To stash changes:
+ git stash push -m "Work in progress"
+ # (Later restore with: git stash pop)
+```
+
+If branch creation fails:
+```
+❌ Cannot create branch
+ {git error message}
+
+Try: git branch -d epic/$ARGUMENTS
+Or: Check existing branches with: git branch -a
+```
+
+## Important Notes
+
+- Follow `/rules/branch-operations.md` for git operations
+- Follow `/rules/agent-coordination.md` for parallel work
+- Agents work in the SAME branch (not separate branches)
+- Maximum parallel agents should be reasonable (e.g., 5-10)
+- Monitor system resources if launching many agents
diff --git a/codedetect/.claude/commands/pm/epic-status.md b/codedetect/.claude/commands/pm/epic-status.md
new file mode 100644
index 0000000..7d727be
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-status.md
@@ -0,0 +1,11 @@
+---
+allowed-tools: Bash
+---
+
+Run `bash .claude/scripts/pm/epic-status.sh $ARGUMENTS` using the bash tool and show me the complete stdout printed to the console.
+
+- DO NOT truncate.
+- DO NOT collapse.
+- DO NOT abbreviate.
+- Show ALL lines in full.
+- DO NOT print any other comments.
diff --git a/codedetect/.claude/commands/pm/epic-sync.md b/codedetect/.claude/commands/pm/epic-sync.md
new file mode 100644
index 0000000..b7a3eb7
--- /dev/null
+++ b/codedetect/.claude/commands/pm/epic-sync.md
@@ -0,0 +1,455 @@
+---
+allowed-tools: Bash, Read, Write, LS, Task
+---
+
+# Epic Sync
+
+Push epic and tasks to GitHub as issues.
+
+## Usage
+```
+/pm:epic-sync <epic_name>
+```
+
+## Quick Check
+
+```bash
+# Verify epic exists
+test -f .claude/epics/$ARGUMENTS/epic.md || { echo "❌ Epic not found. Run: /pm:prd-parse $ARGUMENTS"; exit 1; }
+
+# Count task files
+ls .claude/epics/$ARGUMENTS/*.md 2>/dev/null | grep -v epic.md | wc -l
+```
+
+If no tasks found: "❌ No tasks to sync. Run: /pm:epic-decompose $ARGUMENTS"
+
+## Instructions
+
+### 0. Check Remote Repository
+
+Follow `/rules/github-operations.md` to ensure we're not syncing to the CCPM template:
+
+```bash
+# Check if remote origin is the CCPM template repository
+remote_url=$(git remote get-url origin 2>/dev/null || echo "")
+if [[ "$remote_url" == *"automazeio/ccpm"* ]] || [[ "$remote_url" == *"automazeio/ccpm.git"* ]]; then
+ echo "❌ ERROR: You're trying to sync with the CCPM template repository!"
+ echo ""
+ echo "This repository (automazeio/ccpm) is a template for others to use."
+ echo "You should NOT create issues or PRs here."
+ echo ""
+ echo "To fix this:"
+ echo "1. Fork this repository to your own GitHub account"
+ echo "2. Update your remote origin:"
+ echo " git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
+ echo ""
+ echo "Or if this is a new project:"
+ echo "1. Create a new repository on GitHub"
+ echo "2. Update your remote origin:"
+ echo " git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO.git"
+ echo ""
+ echo "Current remote: $remote_url"
+ exit 1
+fi
+```
+
+### 1. Create Epic Issue
+
+Strip frontmatter and prepare GitHub issue body:
+```bash
+# Extract content without frontmatter
+sed '1,/^---$/d; 1,/^---$/d' .claude/epics/$ARGUMENTS/epic.md > /tmp/epic-body-raw.md
+
+# Remove "## Tasks Created" section and replace with Stats
+awk '
+ /^## Tasks Created/ {
+ in_tasks=1
+ next
+ }
+ /^## / && in_tasks {
+ in_tasks=0
+ # When we hit the next section after Tasks Created, add Stats
+ if (total_tasks) {
+ print "## Stats\n"
+ print "Total tasks: " total_tasks
+ print "Parallel tasks: " parallel_tasks " (can be worked on simultaneously)"
+ print "Sequential tasks: " sequential_tasks " (have dependencies)"
+ if (total_effort) print "Estimated total effort: " total_effort " hours"
+ print ""
+ }
+ }
+ /^Total tasks:/ && in_tasks { total_tasks = $3; next }
+ /^Parallel tasks:/ && in_tasks { parallel_tasks = $3; next }
+ /^Sequential tasks:/ && in_tasks { sequential_tasks = $3; next }
+ /^Estimated total effort:/ && in_tasks {
+ gsub(/^Estimated total effort: /, "")
+ total_effort = $0
+ next
+ }
+ !in_tasks { print }
+ END {
+ # If we were still in tasks section at EOF, add stats
+ if (in_tasks && total_tasks) {
+ print "## Stats\n"
+ print "Total tasks: " total_tasks
+ print "Parallel tasks: " parallel_tasks " (can be worked on simultaneously)"
+ print "Sequential tasks: " sequential_tasks " (have dependencies)"
+ if (total_effort) print "Estimated total effort: " total_effort
+ }
+ }
+' /tmp/epic-body-raw.md > /tmp/epic-body.md
+
+# Determine epic type (feature vs bug) from content
+if grep -qi "bug\|fix\|issue\|problem\|error" /tmp/epic-body.md; then
+ epic_type="bug"
+else
+ epic_type="feature"
+fi
+
+# Create epic issue with labels
+# (gh issue create has no --json flag; it prints the new issue URL, so take the trailing number)
+epic_number=$(gh issue create \
+ --title "Epic: $ARGUMENTS" \
+ --body-file /tmp/epic-body.md \
+ --label "epic,epic:$ARGUMENTS,$epic_type" | grep -oE '[0-9]+$')
+```
+
+Store the returned issue number for epic frontmatter update.
+
+### 2. Create Task Sub-Issues
+
+Check if gh-sub-issue is available:
+```bash
+if gh extension list | grep -q "yahsan2/gh-sub-issue"; then
+ use_subissues=true
+else
+ use_subissues=false
+ echo "⚠️ gh-sub-issue not installed. Using fallback mode."
+fi
+```
+
+Count task files to determine strategy:
+```bash
+task_count=$(ls .claude/epics/$ARGUMENTS/[0-9][0-9][0-9].md 2>/dev/null | wc -l)
+```
+
+### For Small Batches (< 5 tasks): Sequential Creation
+
+```bash
+if [ "$task_count" -lt 5 ]; then
+ # Create sequentially for small batches
+ for task_file in .claude/epics/$ARGUMENTS/[0-9][0-9][0-9].md; do
+ [ -f "$task_file" ] || continue
+
+ # Extract task name from frontmatter
+ task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')
+
+ # Strip frontmatter from task content
+ sed '1,/^---$/d; 1,/^---$/d' "$task_file" > /tmp/task-body.md
+
+ # Create sub-issue with labels (both commands print the new issue URL; take the trailing number)
+ if [ "$use_subissues" = true ]; then
+ task_number=$(gh sub-issue create \
+ --parent "$epic_number" \
+ --title "$task_name" \
+ --body-file /tmp/task-body.md \
+ --label "task,epic:$ARGUMENTS" | grep -oE '[0-9]+$')
+ else
+ task_number=$(gh issue create \
+ --title "$task_name" \
+ --body-file /tmp/task-body.md \
+ --label "task,epic:$ARGUMENTS" | grep -oE '[0-9]+$')
+ fi
+
+ # Record mapping for renaming
+ echo "$task_file:$task_number" >> /tmp/task-mapping.txt
+ done
+
+ # After creating all issues, update references and rename files
+ # This follows the same process as step 3 below
+fi
+```
+
+### For Larger Batches: Parallel Creation
+
+```bash
+if [ "$task_count" -ge 5 ]; then
+ echo "Creating $task_count sub-issues in parallel..."
+
+ # Check if gh-sub-issue is available for parallel agents
+ if gh extension list | grep -q "yahsan2/gh-sub-issue"; then
+ subissue_cmd="gh sub-issue create --parent $epic_number"
+ else
+ subissue_cmd="gh issue create"
+ fi
+
+ # Batch tasks for parallel processing
+ # Spawn agents to create sub-issues in parallel with proper labels
+ # Each agent must use: --label "task,epic:$ARGUMENTS"
+fi
+```
+
+Use Task tool for parallel creation:
+```yaml
+Task:
+ description: "Create GitHub sub-issues batch {X}"
+ subagent_type: "general-purpose"
+ prompt: |
+ Create GitHub sub-issues for tasks in epic $ARGUMENTS
+ Parent epic issue: #$epic_number
+
+ Tasks to process:
+ - {list of 3-4 task files}
+
+ For each task file:
+ 1. Extract task name from frontmatter
+ 2. Strip frontmatter using: sed '1,/^---$/d; 1,/^---$/d'
+ 3. Create sub-issue using:
+ - If gh-sub-issue available:
+ gh sub-issue create --parent $epic_number --title "$task_name" \
+ --body-file /tmp/task-body.md --label "task,epic:$ARGUMENTS"
+ - Otherwise:
+ gh issue create --title "$task_name" --body-file /tmp/task-body.md \
+ --label "task,epic:$ARGUMENTS"
+ 4. Record: task_file:issue_number
+
+ IMPORTANT: Always include --label parameter with "task,epic:$ARGUMENTS"
+
+ Return mapping of files to issue numbers.
+```
+
+Consolidate results from parallel agents:
+```bash
+# Collect all mappings from agents
+cat /tmp/batch-*/mapping.txt >> /tmp/task-mapping.txt
+
+# IMPORTANT: After consolidation, follow step 3 to:
+# 1. Build old->new ID mapping
+# 2. Update all task references (depends_on, conflicts_with)
+# 3. Rename files with proper frontmatter updates
+```
+
+### 3. Rename Task Files and Update References
+
+First, build a mapping of old numbers to new issue IDs:
+```bash
+# Create mapping from old task numbers (001, 002, etc.) to new issue IDs
+> /tmp/id-mapping.txt
+while IFS=: read -r task_file task_number; do
+ # Extract old number from filename (e.g., 001 from 001.md)
+ old_num=$(basename "$task_file" .md)
+ echo "$old_num:$task_number" >> /tmp/id-mapping.txt
+done < /tmp/task-mapping.txt
+```
+
+Then rename files and update all references:
+```bash
+# Process each task file
+while IFS=: read -r task_file task_number; do
+ new_name="$(dirname "$task_file")/${task_number}.md"
+
+ # Read the file content
+ content=$(cat "$task_file")
+
+ # Update depends_on and conflicts_with references
+ while IFS=: read -r old_num new_num; do
+ # Update arrays like [001, 002]; restrict to these lines so body text is untouched
+ content=$(echo "$content" | sed -e "/^depends_on:/s/$old_num/$new_num/g" -e "/^conflicts_with:/s/$old_num/$new_num/g")
+ done < /tmp/id-mapping.txt
+
+ # Write updated content to new file
+ echo "$content" > "$new_name"
+
+ # Remove old file if different from new
+ [ "$task_file" != "$new_name" ] && rm "$task_file"
+
+ # Update github field in frontmatter
+ # Add the GitHub URL to the frontmatter
+ repo=$(gh repo view --json nameWithOwner -q .nameWithOwner)
+ github_url="https://github.com/$repo/issues/$task_number"
+
+ # Update frontmatter with GitHub URL and current timestamp
+ current_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+
+ # Use sed substitution to update the github and updated fields (s/// is portable across GNU and BSD sed)
+ sed -i.bak "s|^github:.*|github: $github_url|" "$new_name"
+ sed -i.bak "s|^updated:.*|updated: $current_date|" "$new_name"
+ rm "${new_name}.bak"
+done < /tmp/task-mapping.txt
+```
+
+### 4. Update Epic with Task List (Fallback Only)
+
+If NOT using gh-sub-issue, add task list to epic:
+
+```bash
+if [ "$use_subissues" = false ]; then
+ # Get current epic body
+ gh issue view "$epic_number" --json body -q .body > /tmp/epic-body.md
+
+ # Append task list
+ cat >> /tmp/epic-body.md << 'EOF'
+
+ ## Tasks
+ - [ ] #{task1_number} {task1_name}
+ - [ ] #{task2_number} {task2_name}
+ - [ ] #{task3_number} {task3_name}
+ EOF
+
+ # Update epic issue
+ gh issue edit "$epic_number" --body-file /tmp/epic-body.md
+fi
+```
+
+With gh-sub-issue, this is automatic!
+
+### 5. Update Epic File
+
+Update the epic file with GitHub URL, timestamp, and real task IDs:
+
+#### 5a. Update Frontmatter
+```bash
+# Get repo info
+repo=$(gh repo view --json nameWithOwner -q .nameWithOwner)
+epic_url="https://github.com/$repo/issues/$epic_number"
+current_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+
+# Update epic frontmatter (s/// substitution is portable across GNU and BSD sed)
+sed -i.bak "s|^github:.*|github: $epic_url|" .claude/epics/$ARGUMENTS/epic.md
+sed -i.bak "s|^updated:.*|updated: $current_date|" .claude/epics/$ARGUMENTS/epic.md
+rm .claude/epics/$ARGUMENTS/epic.md.bak
+```
+
+#### 5b. Update Tasks Created Section
+```bash
+# Create a temporary file with the updated Tasks Created section
+cat > /tmp/tasks-section.md << 'EOF'
+## Tasks Created
+EOF
+
+# Add each task with its real issue number
+for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
+ [ -f "$task_file" ] || continue
+
+ # Get issue number (filename without .md)
+ issue_num=$(basename "$task_file" .md)
+
+ # Get task name from frontmatter
+ task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')
+
+ # Get parallel status
+ parallel=$(grep '^parallel:' "$task_file" | sed 's/^parallel: *//')
+
+ # Add to tasks section
+ echo "- [ ] #${issue_num} - ${task_name} (parallel: ${parallel})" >> /tmp/tasks-section.md
+done
+
+# Add summary statistics
+total_count=$(ls .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | wc -l)
+parallel_count=$(grep -l '^parallel: true' .claude/epics/$ARGUMENTS/[0-9]*.md 2>/dev/null | wc -l)
+sequential_count=$((total_count - parallel_count))
+
+cat >> /tmp/tasks-section.md << EOF
+
+Total tasks: ${total_count}
+Parallel tasks: ${parallel_count}
+Sequential tasks: ${sequential_count}
+EOF
+
+# Replace the Tasks Created section in epic.md
+# First, create a backup
+cp .claude/epics/$ARGUMENTS/epic.md .claude/epics/$ARGUMENTS/epic.md.backup
+
+# Use awk to replace the section
+awk '
+ /^## Tasks Created/ {
+ skip=1
+ while ((getline line < "/tmp/tasks-section.md") > 0) print line
+ close("/tmp/tasks-section.md")
+ }
+ /^## / && !/^## Tasks Created/ { skip=0 }
+ !skip && !/^## Tasks Created/ { print }
+' .claude/epics/$ARGUMENTS/epic.md.backup > .claude/epics/$ARGUMENTS/epic.md
+
+# Clean up
+rm .claude/epics/$ARGUMENTS/epic.md.backup
+rm /tmp/tasks-section.md
+```
+
+### 6. Create Mapping File
+
+Create `.claude/epics/$ARGUMENTS/github-mapping.md`:
+```bash
+# Create mapping file
+cat > .claude/epics/$ARGUMENTS/github-mapping.md << EOF
+# GitHub Issue Mapping
+
+Epic: #${epic_number} - https://github.com/${repo}/issues/${epic_number}
+
+Tasks:
+EOF
+
+# Add each task mapping
+for task_file in .claude/epics/$ARGUMENTS/[0-9]*.md; do
+ [ -f "$task_file" ] || continue
+
+ issue_num=$(basename "$task_file" .md)
+ task_name=$(grep '^name:' "$task_file" | sed 's/^name: *//')
+
+ echo "- #${issue_num}: ${task_name} - https://github.com/${repo}/issues/${issue_num}" >> .claude/epics/$ARGUMENTS/github-mapping.md
+done
+
+# Add sync timestamp
+echo "" >> .claude/epics/$ARGUMENTS/github-mapping.md
+echo "Synced: $(date -u +"%Y-%m-%dT%H:%M:%SZ")" >> .claude/epics/$ARGUMENTS/github-mapping.md
+```
+
+### 7. Create Worktree
+
+Follow `/rules/worktree-operations.md` to create development worktree:
+
+```bash
+# Ensure main is current
+git checkout main
+git pull origin main
+
+# Create worktree for epic
+git worktree add ../epic-$ARGUMENTS -b epic/$ARGUMENTS
+
+echo "✅ Created worktree: ../epic-$ARGUMENTS"
+```
+
+### 8. Output
+
+```
+✅ Synced to GitHub
+ - Epic: #{epic_number} - {epic_title}
+ - Tasks: {count} sub-issues created
+ - Labels applied: epic, task, epic:{name}
+ - Files renamed: 001.md → {issue_id}.md
+ - References updated: depends_on/conflicts_with now use issue IDs
+ - Worktree: ../epic-$ARGUMENTS
+
+Next steps:
+ - Start parallel execution: /pm:epic-start $ARGUMENTS
+ - Or work on single issue: /pm:issue-start {issue_number}
+ - View epic: https://github.com/{owner}/{repo}/issues/{epic_number}
+```
+
+## Error Handling
+
+Follow `/rules/github-operations.md` for GitHub CLI errors.
+
+If any issue creation fails:
+- Report what succeeded
+- Note what failed
+- Don't attempt rollback (partial sync is fine)
+
+## Important Notes
+
+- Trust GitHub CLI authentication
+- Don't pre-check for duplicates
+- Update frontmatter only after successful creation
+- Keep operations simple and atomic
diff --git a/codedetect/.claude/commands/pm/help.md b/codedetect/.claude/commands/pm/help.md
new file mode 100644
index 0000000..2fce637
--- /dev/null
+++ b/codedetect/.claude/commands/pm/help.md
@@ -0,0 +1,11 @@
+---
+allowed-tools: Bash
+---
+
+Run `bash .claude/scripts/pm/help.sh` using a sub-agent and show me the complete output.
+
+- DO NOT truncate.
+- DO NOT collapse.
+- DO NOT abbreviate.
+- Show ALL lines in full.
+- DO NOT print any other comments.
diff --git a/codedetect/.claude/commands/pm/import.md b/codedetect/.claude/commands/pm/import.md
new file mode 100644
index 0000000..dac9c9e
--- /dev/null
+++ b/codedetect/.claude/commands/pm/import.md
@@ -0,0 +1,98 @@
+---
+allowed-tools: Bash, Read, Write, LS
+---
+
+# Import
+
+Import existing GitHub issues into the PM system.
+
+## Usage
+```
+/pm:import [--epic ] [--label