Plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, Ollama, or any OpenAI-compatible model. All tools work. Zero lock-in.
The full tool system, streaming, multi-step reasoning — all working through the model you choose.
Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks, and more.
Token-by-token streaming output, just like the original. Watch your model think and act in real time.
The model calls tools, gets results, reasons, and continues. Complex agentic workflows work out of the box.
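The loop described above (model requests a tool, gets the result back, reasons, continues) can be sketched as follows. This is illustrative only, not OpenClaude's actual internals; the `model` function here is a hypothetical stand-in for a chat-completions call.

```typescript
// Illustrative agentic tool loop: the model either requests a tool call
// or returns a final answer; tool results are fed back until it finishes.

type ToolCall = { name: string; args: Record<string, string> };
type ModelTurn = { toolCall?: ToolCall; answer?: string };

// Hypothetical stand-in for a real chat-completions request.
function model(history: string[]): ModelTurn {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { toolCall: { name: "bash", args: { cmd: "ls" } } };
  }
  return { answer: "done" };
}

const tools: Record<string, (args: Record<string, string>) => string> = {
  bash: (args) => `ran ${args.cmd}`,
};

function runAgent(prompt: string): string {
  const history = [`user: ${prompt}`];
  for (let step = 0; step < 10; step++) {
    const turn = model(history);
    if (turn.answer) return turn.answer; // model finished
    const result = tools[turn.toolCall!.name](turn.toolCall!.args);
    history.push(`tool: ${result}`); // feed the result back to the model
  }
  return "step limit reached";
}
```

A step cap like the one above is a common guard so a confused model cannot loop forever.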
Pass images as base64 or URL to vision-capable models. Screenshots, diagrams, UI mockups — all supported.
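On the wire, image input uses the OpenAI chat-completions content-part format: a `text` part plus an `image_url` part whose URL is either a public link or a base64 data URL. The values below are placeholders.

```typescript
// OpenAI chat-completions message shape for vision input.
function imageMessage(text: string, image: string) {
  return {
    role: "user",
    content: [
      { type: "text", text },
      // "image" may be an https URL or "data:image/png;base64,<bytes>"
      { type: "image_url", image_url: { url: image } },
    ],
  };
}

const msg = imageMessage(
  "What does this screenshot show?",
  "data:image/png;base64,iVBORw0KGgo", // truncated placeholder payload
);
```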
/commit, /review, /compact, /diff, /doctor — all the commands you know from Claude Code still work.
AgentTool spawns sub-agents using your provider. Persistent memory system keeps context across sessions.
Any model that speaks the OpenAI chat completions API. Cloud or local.
- GPT-4o, GPT-4o-mini
- V3, R1
- via OpenRouter
- Local, free
- Large, Medium
- Llama, Qwen
- Ultra-fast inference
- Enterprise
- Local GUI
- OpenAI API format
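"Speaks the OpenAI chat completions API" means the server exposes `POST {base}/v1/chat/completions` with the same request shape. A sketch of that request, using Ollama's default local endpoint as the example base URL (cloud providers use their own):

```typescript
// Base URL for a local Ollama server's OpenAI-compatible endpoint;
// swap in your provider's URL for cloud models.
const BASE_URL = "http://localhost:11434/v1";

function completionRequest(model: string, prompt: string) {
  return {
    url: `${BASE_URL}/chat/completions`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer sk-placeholder", // your provider API key
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true, // token-by-token streaming
    }),
  };
}

const req = completionRequest("llama3.3", "hello");
```

Because every listed provider accepts this one request shape, swapping models is just a matter of changing the base URL and model name.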
Three steps. One command. Any model.
One command to install globally via npm.
npm install -g @aryanjsx/openclaude
Set your provider and model with environment variables.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
Launch OpenClaude. That's it. Everything works.
openclaude
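For a local model, the same three-step setup would look something like the following. Note the assumption: `OPENAI_BASE_URL` is the standard OpenAI SDK variable for overriding the endpoint, but whether OpenClaude reads it is not confirmed here; check the project docs.

```shell
# Hypothetical local setup pointing at Ollama's OpenAI-compatible endpoint.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=ollama   # local servers ignore the key value
export OPENAI_MODEL=llama3.3
export OPENAI_BASE_URL=http://localhost:11434/v1   # assumption: see note above
openclaude
```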
A thin translation layer. Claude Code doesn't know it's talking to a different model.
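What "thin translation layer" means in practice: Anthropic and OpenAI describe tools with slightly different field names, so much of the shim is remapping one schema to the other. A minimal sketch of the tool-definition direction (OpenClaude's real mapping may differ and covers more than this):

```typescript
// Anthropic-style tool definition: name, description, input_schema.
type AnthropicTool = {
  name: string;
  description: string;
  input_schema: object; // JSON Schema for the tool's arguments
};

// Remap to the OpenAI function-calling format: the same JSON Schema
// moves to `function.parameters`.
function toOpenAITool(tool: AnthropicTool) {
  return {
    type: "function",
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.input_schema,
    },
  };
}

const bash = toOpenAITool({
  name: "Bash",
  description: "Run a shell command",
  input_schema: {
    type: "object",
    properties: { command: { type: "string" } },
  },
});
```

Because the JSON Schema itself passes through unchanged, the layer stays thin: no tool logic is rewritten, only the envelope around it.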
A rough guide to what works best for agentic tool use.
| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |
Install in one command. Use any model. Ship faster.