AnyForge Control
Govern every AI call.
Change nothing else.
AnyForge Control is the governance layer underneath the AnyForge platform. For teams using their own agents (Claude Code, Cursor, Cline, Continue, or a raw SDK client), Control ships as a governed LLM proxy that installs in five minutes. Zero workflow change. Immediate visibility.
Using AnyForge Code Studio or AnyForge Crew? Control is already native there — nothing separate to install. This page is for the "bring your own agent" path.
4-step install · 5 minutes
- Sign up at anyforge.ai — free, no card. You get a 100M-token free trial on signup.
- Add your provider API keys at /settings/secrets — Anthropic, OpenAI, Google, or OpenRouter. Keys live in Google Secret Manager, never in our database.
- Run the CLI in your repo: $ npx anyforge init — detects Claude Code, Cursor, Cline, Continue, or a raw SDK client; mints a scoped AnyForge API key and writes the proxy config.
- Open your dashboard at crew.anyforge.ai — cost analytics, routing decisions, audit chain. Your next LLM call shows up within seconds.
Free to start with a 100M-token trial allowance. After the trial: $0.30 per 1M tokens, provider-agnostic — the same rate whether you use cloud LLMs or self-host. No subscription, no seats, no daily caps. Routing and caching typically cut more wasted model spend than the platform fee adds, so the net effect is savings, not cost.
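Because the fee is flat per token, the math stays simple. A quick sanity check (the $0.30 per 1M rate is from the pricing above; the monthly usage figure is hypothetical):

```python
RATE_PER_MILLION = 0.30  # USD per 1M tokens, per the pricing above

def platform_fee(tokens: int) -> float:
    """Platform fee in USD for a given token count."""
    return tokens / 1_000_000 * RATE_PER_MILLION

# Hypothetical month: 250M tokens across every agent and project.
fee = platform_fee(250_000_000)
print(f"${fee:.2f}")  # → $75.00
```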
Capabilities
Visibility, routing, and governance. Opt-in per rule.
Unified proxy
One endpoint in front of OpenAI, Anthropic, Gemini, and OpenRouter. SDK-compatible drop-in, no prompt rewrites.
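In practice, "drop-in" for OpenAI-style SDKs usually means repointing the base URL. A minimal sketch, assuming the OpenAI SDK's standard OPENAI_BASE_URL environment variable; the endpoint below is an illustrative placeholder, not a documented AnyForge URL — npx anyforge init writes the real values:

```python
import os

# Placeholder endpoint and key — substitute the values written by `npx anyforge init`.
os.environ["OPENAI_BASE_URL"] = "https://<your-proxy-endpoint>/v1"  # proxy, not the provider
os.environ["OPENAI_API_KEY"] = "<scoped-anyforge-key>"              # minted by the CLI

# From here, existing SDK code runs unchanged; every call flows through Control.
```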
Self-hosted Ollama
Run models on your own Ollama daemon (local, on-prem, or Ollama Cloud). Same governance layer applied to every call — for air-gapped, data-residency, or regulated deployments.
Cost analytics
Per-call, per-agent, per-project spend. Dimensional rollups. Alerts at configurable thresholds.
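The rollup itself is straightforward aggregation. A sketch of the shape of it — the record fields, dollar figures, and alert threshold are assumptions for illustration, not Control's actual schema:

```python
from collections import defaultdict

calls = [  # hypothetical call records: (agent, project, tokens)
    ("claude-code", "checkout", 120_000),
    ("cursor",      "checkout",  80_000),
    ("claude-code", "search",   300_000),
]

RATE = 0.30 / 1_000_000  # $0.30 per 1M tokens, per the pricing above

# Dimensional rollup: spend keyed by (agent, project).
spend = defaultdict(float)
for agent, project, tokens in calls:
    spend[(agent, project)] += tokens * RATE

for (agent, project), usd in sorted(spend.items()):
    alert = "  <-- over the $0.05 alert threshold" if usd > 0.05 else ""
    print(f"{agent}/{project}: ${usd:.4f}{alert}")
```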
Intent-based routing
Rules, regex, or embedding-based routing. Send each call to the cheapest model that meets the quality bar.
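The rule/regex tier of routing can be sketched in a few lines: first matching rule wins, and anything unmatched falls through to the cheapest model. The patterns and model names here are illustrative, not Control's actual routing table:

```python
import re

RULES = [  # (pattern, model) — ordered from most to least specific
    (re.compile(r"\b(prove|theorem|derive)\b", re.I), "big-reasoning-model"),
    (re.compile(r"\b(translate|summari[sz]e)\b", re.I), "mid-tier-model"),
]
DEFAULT = "cheap-fast-model"  # the quality bar for unmatched calls

def route(prompt: str) -> str:
    for pattern, model in RULES:
        if pattern.search(prompt):
            return model
    return DEFAULT

print(route("Summarize this RFC"))      # mid-tier-model
print(route("Derive the closed form"))  # big-reasoning-model
print(route("Rename this variable"))    # cheap-fast-model
```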
Portable memory
Shared context that survives provider switches. Same conversation history, different model, no rebuild.
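The idea is to keep history in a provider-neutral role/content shape and replay it to whichever model takes the next turn. A minimal sketch — the converter name and format are hypothetical, since Control's internal representation is not documented here:

```python
# Provider-neutral history: plain role/content turns.
history = [
    {"role": "user", "content": "Summarize the incident."},
    {"role": "assistant", "content": "Three services degraded for 12 minutes."},
]

def to_provider_format(messages):
    # Major chat APIs share the role/content shape for plain text turns,
    # so a pass-through conversion suffices in the simple case.
    return [{"role": m["role"], "content": m["content"]} for m in messages]

# Switch providers mid-conversation: same history, new model, no rebuild.
payload = to_provider_format(history) + [{"role": "user", "content": "Root cause?"}]
print(len(payload))  # 3 turns carried over
```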
Hash-chained audit
Signed, append-only log of every prompt, response, and tool call. Regulator-ready.
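The chaining part works like this: each entry commits to the previous entry's hash, so editing any historical record breaks every hash after it. A minimal sketch of the hash chain only (signing is omitted, and the field names are illustrative):

```python
import hashlib
import json

def append(chain, record):
    """Append a record that commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, **record}, sort_keys=True)
    chain.append({"prev": prev, **record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({k: v for k, v in entry.items() if k != "hash"},
                          sort_keys=True)
        if entry["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"prompt": "refactor auth", "model": "some-model"})
append(log, {"prompt": "add tests", "model": "some-model"})
print(verify(log))         # True
log[0]["prompt"] = "oops"  # tamper with history...
print(verify(log))         # False — the chain detects it
```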
Optional OPA policy
Redaction, budget caps, PII blocking, allow-lists. Turn each rule on when you need it, not before.
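To make the redaction idea concrete, here is the shape of such a rule in plain Python — Control's actual policies are OPA, and this deliberately simple email pattern is an illustration, not production-grade PII detection:

```python
import re

# Simplified email matcher for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> str:
    """Replace email addresses before the prompt leaves your network."""
    return EMAIL.sub("[REDACTED_EMAIL]", prompt)

print(redact("Contact jane.doe@example.com about the invoice"))
# Contact [REDACTED_EMAIL] about the invoice
```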
What you learn on Control
Three questions the data answers in the first week.
What did this feature cost in tokens?
Per-call spend, grouped by agent, project, or user. The answer nobody on your team can produce today.
Which model pays for itself on this kind of task?
Model-by-model latency, success rate, and cost-per-outcome. Pick the cheapest model that meets the quality bar with evidence.
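One way to frame that decision is cost-per-successful-outcome: divide cost per call by success rate, then pick the cheapest model that still clears the quality bar. All figures below are made up for illustration:

```python
models = {  # name: (cost per call in USD, observed success rate)
    "cheap-fast-model":    (0.002, 0.70),
    "mid-tier-model":      (0.010, 0.90),
    "big-reasoning-model": (0.060, 0.97),
}
QUALITY_BAR = 0.85  # minimum acceptable success rate for this task

# Cost per successful outcome, restricted to models above the bar.
eligible = {m: cost / rate for m, (cost, rate) in models.items() if rate >= QUALITY_BAR}
best = min(eligible, key=eligible.get)
print(best)  # mid-tier-model: cheapest cost-per-success above the bar
```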
Which workflows are ready for multi-agent delivery?
Repeated tasks with real stages, dependencies, and a human approver show up as Crew candidates on the dashboard.
When you are ready
Graduate to AnyForge Crew. Keep Control running underneath.
Crew is role-based multi-agent delivery: Architect, Engineer, QA, and Compliance run as a stateful pipeline with two mandatory human approval gates and a cryptographic audit trail. You turn it on per project. Control keeps running underneath, so spend and audit stay unified across both planes.
You do not have to move to Crew. Many teams run Control indefinitely. It is a valid end state.