gptcgt Documentation

Welcome to the official documentation for gptcgt — the multi-model AI coding terminal that transforms your shell into an intelligent IDE. gptcgt connects to the world's most capable Large Language Models and lets them read, reason about, and edit your codebase directly.

What Makes gptcgt Different

  • Multi-model orchestration — Run three or more AI models on the same task; an impartial Arbiter verifies the outputs and selects the best result.
  • Terminal-native — No Electron app. gptcgt lives in your shell, right next to your build tools, Git, and CI/CD pipelines.
  • Provider-agnostic — OpenAI, Anthropic, Google, DeepSeek, xAI, Mistral, Groq, Cohere, OpenRouter, or your own local models.
  • Transparent cost tracking — See exactly what every token costs, with 5 independent safety layers to prevent runaway spend.
  • Autonomous execution — Give it a goal, walk away, and come back to a fully implemented feature with test coverage.
  • Security-first — Workspace sandboxing prevents the AI from touching files outside your project. All generated code is scanned for vulnerabilities.
  • ELO-ranked routing — Models compete head-to-head. Winners get selected more often over time.
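
The ELO-ranked routing above can be sketched with the standard Elo update rule (this is an illustrative sketch, not gptcgt's actual internals): after the Arbiter picks a winner between two models, both ratings move toward the observed result, and higher-rated models are favored for future tasks.

```python
# Illustrative Elo update for model routing (hypothetical sketch,
# not gptcgt's real implementation).

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Return new (winner, loser) ratings after one head-to-head result."""
    e_win = expected_score(winner, loser)
    # Winner scored 1.0, loser scored 0.0; each rating moves by
    # k * (actual - expected).
    new_winner = winner + k * (1.0 - e_win)
    new_loser = loser + k * (0.0 - (1.0 - e_win))
    return new_winner, new_loser

# Two evenly matched models: the winner gains 16 points, the loser drops 16.
ratings = {"model-a": 1500.0, "model-b": 1500.0}
ratings["model-a"], ratings["model-b"] = update_elo(ratings["model-a"], ratings["model-b"])
```

With a K-factor of 32, an upset win against a much higher-rated model moves the ratings far more than a win between equals, which is what lets a newly added model climb quickly if it keeps beating incumbents.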

Supported Languages

gptcgt works with any language or framework. It uses tree-sitter for fast AST parsing and an optional Language Server Protocol (LSP) client for cross-file reference checking. Pre-configured support is included for:

  • Python (pyright / pylsp)
  • TypeScript / JavaScript (typescript-language-server)
  • Rust (rust-analyzer)
  • Go (gopls)
  • Java (jdtls)
  • C / C++ (clangd)

If an LSP isn't installed for your language, gptcgt still works — you just skip the cross-file reference verification step.