Frequently Asked Questions
General
What is gptcgt?
gptcgt is a terminal-based AI coding IDE that connects to multiple Large Language Models (OpenAI, Anthropic, Google, DeepSeek, xAI, and more). It lets you chat with AI about your code, generate changes as diffs, run multiple models simultaneously, and even run fully autonomous coding sessions.
Is gptcgt free?
Yes — gptcgt is free to use with your own API keys (BYOK mode). You only pay the providers directly. Alternatively, you can subscribe to gptcgt Pro for Managed Credits, which simplifies billing and gives you access to all providers through a single account.
What languages does it support?
gptcgt works with any programming language. It uses tree-sitter for fast AST parsing and ships pre-configured LSP support for Python, TypeScript/JavaScript, Rust, Go, Java, and C/C++. Other languages still work; they simply skip cross-file reference verification.
Privacy & Security
Is my code sent to servers to train AI?
No. We use provider API endpoints with zero-data-retention agreements. We do not use user data, prompts, or code to train models. In BYOK mode, your code goes directly to the provider — we never see it. See the Privacy Policy.
Where are my API keys stored?
Keys are stored in your operating system's native keychain (macOS Keychain, Windows Credential Locker, Linux Secret Service) via the keyring Python library, so they never touch disk in plaintext; encryption is handled by the OS credential store itself.
Can the AI access files outside my project?
No. The Workspace security boundary resolves all file paths (including symlinks and ../ traversals) and rejects any access outside your project root. This is enforced at the deepest level — every file read, write, list, and delete operation goes through this gatekeeper.
What if the AI generates vulnerable code?
Every code change is automatically scanned by a 3-layer security system: custom regex patterns (instant), Semgrep (OWASP Top 10), and language-specific scanners (Bandit for Python). Critical vulnerabilities trigger an auto-fix attempt before presenting changes to you. See Security & Safety.
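The first layer, regex pattern scanning, can be sketched in a few lines (the rules and severities below are illustrative; a real ruleset is far larger):

```python
import re

# Illustrative rules: (pattern, severity, message)
RULES = [
    (re.compile(r"\beval\s*\("), "critical", "use of eval()"),
    (re.compile(r"\bpickle\.loads?\s*\("), "high", "unsafe deserialization"),
    (re.compile(r"(?i)password\s*=\s*['\"]"), "high", "hardcoded credential"),
]

def scan(source: str) -> list[dict]:
    """Return one finding per rule match, tagged with its line number."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, severity, message in RULES:
            if pattern.search(line):
                findings.append(
                    {"line": lineno, "severity": severity, "message": message}
                )
    return findings
```

Because this layer is plain regex matching, it runs instantly on every diff; the slower Semgrep and Bandit passes then catch what patterns alone cannot.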
Usage
What happens if the AI deletes my files?
We strongly recommend using Git. The AI does have access to file-deletion tools when instructed, so always commit before starting high-impact operations (Architect, Ensemble, Autonomous). Crash recovery automatically backs up unapplied diffs to .gptcgt/recovery/.
How does Ensemble mode pick the winner?
An impartial Arbiter model reads every candidate solution and scores each on correctness, completeness, code quality, and security. It selects the winner with evidence-backed reasoning. The losing models' Elo ratings drop and the winner's rises, so better-performing models get selected more often over time.
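The rating update follows the standard Elo formula; a sketch (the K-factor of 32 is an assumption, not gptcgt's actual value):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Return new (winner, loser) ratings after one decisive match."""
    e_win = expected_score(winner, loser)
    winner_new = winner + k * (1.0 - e_win)
    loser_new = loser - k * (1.0 - e_win)  # zero-sum: loser gives up what the winner gains
    return winner_new, loser_new
```

An upset (a lower-rated model beating a higher-rated one) moves the ratings more than an expected result, which is what lets rankings converge quickly.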
Can I use gptcgt in a team?
Yes. Commit .gptcgt/config.toml to Git for shared project settings (context files, test commands, lint commands). Each team member uses their own API keys or managed account. The web dashboard supports team management for organizations.
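A shared .gptcgt/config.toml might look like this (every key below is illustrative; check the configuration reference for the actual schema):

```toml
# Committed to Git so the whole team shares project settings.
context_files = ["README.md", "docs/architecture.md"]
test_command = "pytest -q"
lint_command = "ruff check ."
```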
What is the .gptcgt directory?
It stores project-specific state:
- config.toml — Per-project configuration
- phase.md — Project file map and development phases (auto-generated)
- project.md — Auto-detected tech stack summary
- memory.json — Agent telemetry and routing history
- recovery/ — Crash recovery state and diff backups
Add .gptcgt/ to .gitignore if you don't want to share state across the team (config.toml being the exception).
Billing
How do I cancel my subscription?
Type /billing in the terminal or visit the Accounts page on the web dashboard. Cancellation takes effect at the end of the current billing cycle.
What happens when I run out of credits?
Depends on your settings:
- Overage disabled (default) — Operations halt with a 402 error. The system suggests a cheaper mode.
- Overage enabled — You continue with pay-as-you-go billing.
- Auto-downgrade enabled — The system automatically falls back to the cheaper Scout mode instead of blocking.
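The decision reduces to a small dispatch on those two settings; a sketch (all names and return values here are illustrative):

```python
def handle_out_of_credits(overage_enabled: bool, auto_downgrade: bool) -> str:
    """Decide what happens when the credit balance hits zero."""
    if overage_enabled:
        return "continue:pay-as-you-go"
    if auto_downgrade:
        return "downgrade:scout"
    # Default: halt, mirroring an HTTP 402 Payment Required response.
    return "halt:402"
```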
Comparison
Why not just use Cursor / Windsurf / another AI editor?
Those are excellent products, but they lock you into a VSCode fork. gptcgt brings AI orchestration natively into your terminal — right next to your build tools, Git, and servers. Key differentiators:
- Multi-model — Run 3+ models simultaneously and pick the best result
- Elo rankings — Models compete and improve over time
- Transparent costs — See exactly what every token costs
- Provider-agnostic — Not locked to one AI vendor
- Terminal-native — No GUI overhead, works over SSH