## TL;DR
Test Companion is a 7-phase testing workflow in which the AI that built a feature demonstrates it while you validate. There is no fetching from APIs: the implementation context is passed directly from the build phase to the test phase. Tests evolve based on feedback, and issues are tracked across rounds.
## Core Philosophy

> “The AI that builds already knows what to test.”

Manual test documentation is no longer needed. The implementation context is captured automatically after a feature is built, which makes it:
- Accurate: From actual implementation, not prompt text
- Specific: Test steps from behaviors implemented, not guesses
- Complete: Edge cases you considered are included
## The 7-Phase Workflow
### Phase 0: Context Capture
After building a feature, capture the implementation context automatically. This enables context-aware testing.
- Files changed with change summaries
- Components affected and testable actions
- Behaviors implemented with expected outcomes
- Edge cases considered during development
- Technical decisions and reasoning
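The captured context can be pictured as a small structured record. The schema below is a minimal sketch; the field and class names are illustrative assumptions, not Test Companion's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Behavior:
    action: str    # what the user does
    expected: str  # the expected outcome

# Hypothetical shape of the auto-captured implementation context.
@dataclass
class ImplementationContext:
    files_changed: dict[str, str] = field(default_factory=dict)  # path -> change summary
    components: list[str] = field(default_factory=list)          # affected components
    behaviors: list[Behavior] = field(default_factory=list)      # implemented behaviors
    edge_cases: list[str] = field(default_factory=list)          # edge cases considered
    decisions: list[str] = field(default_factory=list)           # technical decisions + reasoning

ctx = ImplementationContext(
    files_changed={"src/login.ts": "added rate limiting"},
    behaviors=[Behavior("submit invalid password 5x", "account locked for 15 min")],
    edge_cases=["empty password field"],
)
```

Because the record is built from the implementation itself, every later phase (briefing, step generation, reporting) can be derived from it without manual input.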
### Phase 1: Feature Briefing
Show a briefing generated from the implementation context—not from manual documentation.
- What was built (from user-visible changes)
- Open issues from previous rounds
- Validation points (behaviors + edge cases)
- Focus areas for testing
- Estimated testing time
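A briefing of this kind can be assembled mechanically from the captured context. The sketch below assumes a plain-dict context and a rough two-minutes-per-behavior time heuristic; both are illustrative assumptions.

```python
# Hypothetical briefing builder; the context keys and the time
# heuristic are assumptions for illustration.
def build_briefing(ctx):
    lines = [f"What was built: {change}" for change in ctx["changes"]]
    lines += [f"Open issue: {issue}" for issue in ctx.get("open_issues", [])]
    lines += [f"Validate: {point}" for point in ctx["behaviors"] + ctx["edge_cases"]]
    lines.append(f"Estimated time: ~{2 * len(ctx['behaviors'])} min")  # rough heuristic
    return lines

briefing = build_briefing({
    "changes": ["rate limiting on login"],
    "behaviors": ["lockout after 5 failures"],
    "edge_cases": ["empty password field"],
})
```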
### Phase 2: Session Start
Start the test session directly from implementation context. Steps are auto-generated from behaviors.
- Auto-generated test steps from context
- Configurable options (capture logs, pause on error)
- Visual progress banner appears
- Step tracking begins
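Auto-generation of steps from behaviors can be sketched as a simple mapping: each implemented behavior becomes a verification step and each edge case becomes an extra check. The data shapes here are assumptions.

```python
# Sketch of step auto-generation; behavior/edge-case shapes are
# illustrative, not the tool's real format.
def generate_steps(behaviors, edge_cases):
    steps = [f"Verify: {b['action']} -> {b['expected']}" for b in behaviors]
    steps += [f"Edge case: {e}" for e in edge_cases]
    return steps

steps = generate_steps(
    [{"action": "submit form", "expected": "success toast shown"}],
    ["empty input"],
)
```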
### Phase 3: Live Demonstration
The AI demonstrates what was built while narrating each step. You watch and validate.
- Step-by-step narration before actions
- Round and step tracking ([Round 1] Step 3/7)
- Pause for UI changes to be visible
- Describe expectations clearly
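The demonstration loop boils down to narrate, pause, act. A minimal sketch, assuming a hypothetical `perform` driver for the actual UI actions:

```python
import time

# Narrate-then-act loop with round/step tracking. `perform` stands in
# for whatever drives the UI and is left commented out here.
def demonstrate(steps, round_no=1, pause=0.5):
    total = len(steps)
    for i, step in enumerate(steps, start=1):
        print(f"[Round {round_no}] Step {i}/{total}: {step}")  # narrate before acting
        time.sleep(pause)  # pause so UI changes stay visible
        # perform(step)    # then act

demonstrate(["Open settings", "Toggle dark mode"], pause=0)
```

Narrating before each action is what lets the observer catch a mismatch between stated expectation and actual behavior as it happens.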
### Phase 4: Feedback Capture
Capture feedback and track issues formally. Feedback evolves the test plan.
- Issue tracking with severity levels
- Categories: bug, ux, missing_feature, improvement
- Add new test steps based on feedback
- Mark regression points for future runs
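Formal issue tracking plus plan evolution can be sketched as a record and a single entry point. All names here are illustrative assumptions.

```python
from dataclasses import dataclass

SEVERITIES = ("low", "medium", "high")
CATEGORIES = ("bug", "ux", "missing_feature", "improvement")

# Hypothetical issue record; field names are assumptions.
@dataclass
class Issue:
    description: str
    category: str
    severity: str
    status: str = "open"      # open -> fixed -> verified
    regression: bool = False  # re-check in future rounds

def file_issue(issues, test_steps, description, category, severity, new_step=None):
    assert category in CATEGORIES and severity in SEVERITIES
    issues.append(Issue(description, category, severity))
    if new_step:                  # feedback evolves the test plan
        test_steps.append(new_step)

issues, plan = [], ["Verify login"]
file_issue(issues, plan, "error toast missing", "bug", "high",
           new_step="Verify error toast on failed login")
```

The key property is that filing feedback mutates the test plan itself, so the next round automatically covers what the last round missed.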
### Phase 5: Fix & Revalidate
Issues and fixes are tracked across rounds. Tests improve over time.
- Record fixes with affected files
- Update issue status to 'fixed'
- Start new testing round
- Continuous mode for fix-test loops
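The cross-round bookkeeping amounts to marking fixes and carrying unverified issues into the next round. A minimal sketch with assumed key names:

```python
# Hypothetical fix-and-revalidate bookkeeping; dict keys are assumptions.
def record_fix(issue, files):
    issue["status"] = "fixed"
    issue["fix_files"] = files  # files affected by the fix

def start_round(state):
    state["round"] += 1
    # carry forward anything not yet verified (open or freshly fixed)
    return [i for i in state["issues"] if i["status"] != "verified"]

state = {"round": 1, "issues": [{"id": 1, "status": "open"}]}
record_fix(state["issues"][0], ["src/login.ts"])
to_verify = start_round(state)
```

In continuous mode this loop repeats until `start_round` returns an empty list.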
### Phase 6: Summary & Report
Generate comprehensive report with round history and cumulative state.
- Round-by-round results
- Open issues summary
- Fixes applied
- Downloadable markdown report
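The report assembly mirrors the bullets above: round history, open issues, fixes. The exact markdown layout below is an assumption, not the tool's actual output format.

```python
# Sketch of the markdown report builder; structure is illustrative.
def build_report(rounds, issues, fixes):
    lines = ["# Test Companion Report", ""]
    for r in rounds:
        lines.append(f"## Round {r['n']}: {r['passed']}/{r['total']} steps passed")
    open_issues = [i for i in issues if i["status"] == "open"]
    lines.append(f"\nOpen issues: {len(open_issues)}")
    lines += [f"- [{i['severity']}] {i['desc']}" for i in open_issues]
    lines.append(f"\nFixes applied: {len(fixes)}")
    return "\n".join(lines)

report = build_report(
    rounds=[{"n": 1, "passed": 5, "total": 7}],
    issues=[{"status": "open", "severity": "high", "desc": "toast missing"},
            {"status": "fixed", "severity": "low", "desc": "typo"}],
    fixes=[{"files": ["src/toast.ts"]}],
)
```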
## Test Modes

### User-Initiated (Default)

Interactive testing with the user present:
- Shows context briefing, waits for “Start Testing”
- Pauses for manual steps and feedback
- Shows report modal at end
### Automated (CI/CD)

Headless testing for pipelines:
- Runs through all steps without waiting
- No briefing wait—auto-proceeds
- No report popup—just completes
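The difference between the two modes reduces to which interactive pauses are enabled. A sketch with assumed flag names:

```python
# Hypothetical mode configuration; flag names are illustrative.
MODES = {
    "user":      {"wait_for_start": True,  "pause_on_feedback": True,  "show_report": True},
    "automated": {"wait_for_start": False, "pause_on_feedback": False, "show_report": False},
}

def run_session(mode):
    cfg = MODES[mode]
    if cfg["wait_for_start"]:
        pass  # block until the user clicks "Start Testing"
    # ... run through the steps; in automated mode nothing blocks ...
    return "report shown" if cfg["show_report"] else "completed silently"
```

Keeping the two modes as one code path with different flags means CI runs exercise exactly the same steps as interactive sessions.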
## Issue Categories
| Category | Indicators | Priority |
|---|---|---|
| bug | “doesn't work”, “broken”, “error” | HIGH |
| ux | “confusing”, “slow”, “awkward” | MEDIUM |
| missing_feature | “should have”, “expected” | MEDIUM |
| improvement | “would be nice”, “suggestion” | LOW |
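The table above can be read as a keyword heuristic. The sketch below matches the Indicators column directly; the default bucket when nothing matches is an assumption.

```python
# Keyword-based categorization; phrase lists come from the table above.
RULES = [
    ("bug",             "HIGH",   ["doesn't work", "broken", "error"]),
    ("ux",              "MEDIUM", ["confusing", "slow", "awkward"]),
    ("missing_feature", "MEDIUM", ["should have", "expected"]),
    ("improvement",     "LOW",    ["would be nice", "suggestion"]),
]

def classify(feedback):
    text = feedback.lower()
    for category, priority, phrases in RULES:
        if any(p in text for p in phrases):
            return category, priority
    return "improvement", "LOW"  # default bucket (an assumption)

print(classify("The save button is broken"))  # ('bug', 'HIGH')
```

Rule order matters: the first matching category wins, so "bug" indicators take precedence over lower-priority ones.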
## Results & Benefits

### Visual Progress Banner

**Critical: the banner must be visible.** The Test Companion banner is the single source of truth during testing. The user must always be able to see:
- Visibility: What step Claude is currently executing
- Progress: Step X of Y progress display
- Control: Pause, Skip, and Feedback buttons
- Awareness: User knows testing is in progress
## Related Features
Test Companion works best with PACT Framework and Session Memory.
Need help? support@gritflowai.io