# Operation Modes
gptcgt separates AI behavior into six distinct operation modes. Each mode controls how many models are dispatched, how their output is verified, and how many credits it costs. You can switch modes mid-session to match the task at hand.
## Quick Comparison
| Mode | Credits | Models | Best For |
|---|---|---|---|
| Scout | 1 | 1 (lightweight) | Exploring, reading, Q&A |
| Standard | 5 | 1 (capable) | Daily coding tasks |
| Ensemble | 25 | 3 parallel | Important changes, hard bugs |
| Architect | 100 | Multi-phase | Large features, refactors |
| Battle | 25 | 2 head-to-head | Algorithm comparisons |
| Single Provider | 5 | 1 (locked vendor) | Vendor-specific needs |
## Scout Mode
Cost: 1 Credit. The cheapest option. Scout uses a fast, lightweight model to navigate your codebase without making edits. It reads directory structures, builds AST maps via tree-sitter, and answers questions like "Where is the authentication logic?" or "What does this function do?"
Use when: You're exploring unfamiliar code, asking questions, or need a quick explanation.
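Scout's real code maps come from tree-sitter; as a rough stand-in, the sketch below builds the same kind of function/class map with Python's built-in `ast` module. The `SOURCE` snippet and the `build_code_map` helper are illustrative, not part of gptcgt:

```python
import ast

# A toy source file Scout might be asked about.
SOURCE = '''
def authenticate(user, password):
    """Check credentials against the store."""
    return user.check(password)

class Session:
    def refresh(self):
        pass
'''

def build_code_map(source: str) -> list[str]:
    """Walk the AST and record each function/class with its line number."""
    tree = ast.parse(source)
    entries = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            entries.append(f"function {node.name} (line {node.lineno})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name} (line {node.lineno})")
    return entries

for entry in build_code_map(SOURCE):
    print(entry)
# function authenticate (line 2)
# class Session (line 6)
# function refresh (line 7)
```

A map like this is what lets Scout answer "Where is the authentication logic?" without ever editing a file.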
## Standard Mode
Cost: 5 Credits. Your daily driver. A single capable model (e.g., Claude 3.5 Sonnet, GPT-4o, Gemini 2.5 Pro) applies changes directly to your files. The router automatically picks the best model based on task complexity and ELO ratings.
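The routing logic itself isn't documented here; one plausible sketch of "pick by complexity and ELO" looks like the following, where every model name, rating, and complexity ceiling is made up for illustration:

```python
# Illustrative model table: (name, Elo rating, highest task complexity it handles well).
# All names and numbers are assumptions, not gptcgt's real registry.
MODELS = [
    ("fast-small", 1450, 2),
    ("balanced",   1520, 5),
    ("frontier",   1600, 10),
]

def route(task_complexity: int) -> str:
    """Pick the cheapest adequate model: the lowest complexity ceiling that
    still covers the task, breaking ties by higher Elo."""
    candidates = [m for m in MODELS if m[2] >= task_complexity]
    candidates.sort(key=lambda m: (m[2], -m[1]))
    return candidates[0][0]

print(route(3))  # -> "balanced": "fast-small" can't handle it, "frontier" is overkill
```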
Use when: Regular coding tasks β fixing bugs, adding features, writing tests, refactoring.
## Ensemble Mode
Cost: 25 Credits. The quality maximizer. Your prompt is dispatched to three different AI models simultaneously. Each model works in isolation, producing its own diff. An impartial Arbiter Model then:
- Reads all three solutions
- Scores each one on correctness, completeness, code quality, and security
- Picks the winner with evidence-backed reasoning
- Optionally cherry-picks the best parts from each solution
The losing models' ELO ratings drop; the winner's rises. Over time, the system learns which models to pick for which kinds of tasks.
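The exact rating update isn't specified beyond "losers drop, the winner rises"; a minimal sketch using the standard Elo formula, where the K-factor of 32 and the starting ratings of 1500 are assumptions, could look like:

```python
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Standard Elo: expected score from the rating gap, then shift both ratings."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)   # winner gains exactly what the loser forfeits
    return winner + delta, loser - delta

# One Ensemble round: the arbiter picked model A's diff over model B's.
a, b = 1500.0, 1500.0
a, b = elo_update(a, b)
print(a, b)  # 1516.0 1484.0 (equal ratings, so the winner gains k/2)
```

Because the expected score depends on the rating gap, an already-favored model gains little from beating a weak one, which is what lets the router converge on per-task-type preferences over time.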
Use when: The task is important and you want the provably best solution. Bug fixes in production code, security-sensitive changes, complex algorithmic work.
## Architect Mode
Cost: 100 Credits. For complex, multi-stage feature builds. Architect mode operates in two phases:
Phase 1 (Plan): The AI generates a detailed implementation plan with numbered steps, files to modify, and rationale. You review and approve the plan before any code is written.
Phase 2 (Execute): The AI implements each step in the plan, verifying in a sandbox as it goes. Only the final, tested branch is presented to you.
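gptcgt's plan format is not documented here; as a hypothetical sketch of the two-phase loop, with the file-editing and sandbox checks stubbed out:

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    number: int
    description: str
    files: list[str]
    rationale: str
    done: bool = False

# Hypothetical Phase 1 output: step names, files, and rationale are invented.
plan = [
    PlanStep(1, "Add a User model", ["models/user.py"], "Feature needs persistent accounts"),
    PlanStep(2, "Wire up login route", ["routes/auth.py"], "Expose the model via the API"),
]

def apply_changes(step: PlanStep) -> None:
    # Placeholder: the real tool would edit step.files on a working branch.
    pass

def sandbox_tests_pass() -> bool:
    # Placeholder: the real tool runs the test suite in an isolated sandbox.
    return True

def execute(plan: list[PlanStep]) -> bool:
    """Phase 2 sketch: run each approved step, verify, stop on failure."""
    for step in plan:
        apply_changes(step)
        if not sandbox_tests_pass():
            return False        # a failing step means no branch is presented
        step.done = True
    return True                 # only a fully verified branch reaches the user
```

The key property is the gate between the phases: nothing in `execute` runs until you have approved the plan produced in Phase 1.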
Use when: Building entire features from scratch, large refactors, multi-file architectural changes.
## Battle Mode
Cost: 25 Credits. Two state-of-the-art models go head-to-head. You see their strategies side-by-side in a split-screen diff and manually select the winner. The winning model gets an ELO boost.
Use when: Edge-case algorithms, performance optimizations, or when you want to see fundamentally different approaches to the same problem.
## Single Provider Modes
Cost: 5 Credits. Lock execution to a specific AI provider family. Instead of letting the router choose across all available models, you restrict it to one vendor:
- `SINGLE_MODEL_OPENAI`: only OpenAI models
- `SINGLE_MODEL_ANTHROPIC`: only Anthropic models
- `SINGLE_MODEL_GOOGLE`: only Google models
The system still auto-selects the best model within that family based on complexity and ELO ratings.
Use when: You have a corporate policy restricting which AI providers you can use, or you strongly prefer a specific vendor's coding style.
## Switching Modes
Press Ctrl+Q to open the Quality Tier selector, or set your default in settings (Ctrl+,). You can also set a mode per-project in .gptcgt/config.toml.
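A per-project override might look like the following; only the file path `.gptcgt/config.toml` comes from this document, so treat the key names as assumptions rather than a documented schema:

```toml
# .gptcgt/config.toml (illustrative keys; check your version's docs for the real schema)
# One of: scout, standard, ensemble, architect, battle, or a SINGLE_MODEL_* value
mode = "ensemble"
```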