Features

One gateway to run production AI safely

Keep one API surface while you route across providers, enforce spend policies, and capture request-level visibility.

Platform

Built for teams shipping real AI workloads

Reliability controls, spend governance, and deep observability in a single gateway layer.

Routing

Route across providers with automatic failover.

Keep traffic flowing when providers slow down or fail. ProxyGuard reroutes requests in real time using your policy rules.

OpenAI · gpt-4o → 503
Anthropic · claude-3.5-sonnet → rerouted
Failover latency: 82ms
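The failover example above can be pictured as a priority list walked in order. A minimal TypeScript sketch of that decision; the `Provider` shape and `pickProvider` helper are illustrative assumptions, not ProxyGuard's actual API:

```typescript
// Illustrative sketch only: names and shapes are assumptions,
// not ProxyGuard's real API.
type Provider = { name: string; model: string; healthy: boolean }

// Priority order mirrors the example above: OpenAI first, Anthropic as fallback.
const priority: Provider[] = [
  { name: 'openai', model: 'gpt-4o', healthy: false }, // returned 503
  { name: 'anthropic', model: 'claude-3.5-sonnet', healthy: true },
]

// Pick the first healthy provider in priority order.
function pickProvider(providers: Provider[]): Provider | undefined {
  return providers.find((p) => p.healthy)
}

const target = pickProvider(priority)
console.log(target?.name) // → anthropic
```

Real failover would also weigh latency and error rates, but priority order is the core of the policy.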
Observability

Track token usage and spend in real time.

Monitor prompt and completion tokens with per-project spend visibility and configurable budget alerts.

Token usage (last 24h): 49.6k tokens across 3 projects
Prompt: 31.4k · Completion: 18.2k · Spend: $4.21
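The dashboard numbers above reduce to simple arithmetic. A hedged sketch of the token totals and budget-alert check (field names and the cap are illustrative assumptions, not ProxyGuard's schema; the 75%/90% thresholds appear in the Policy step):

```typescript
// Illustrative usage record; field names are assumptions, not ProxyGuard's schema.
type Usage = { promptTokens: number; completionTokens: number; spendUsd: number }

const usage: Usage = { promptTokens: 31_400, completionTokens: 18_200, spendUsd: 4.21 }
const dailyCapUsd = 5.0 // hypothetical project cap

// Total tokens = prompt + completion
const totalTokens = usage.promptTokens + usage.completionTokens

// Report the highest alert threshold the spend has crossed (75% or 90%).
const pct = usage.spendUsd / dailyCapUsd
const alert = pct >= 0.9 ? '90%' : pct >= 0.75 ? '75%' : null
console.log(totalTokens, alert) // → 49600 '75%'
```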
Playground

Test policies before they reach production.

Simulate requests, inspect traces, and tune routing and limits before rollout.

proxy-route.ts
import { forward } from '@proxyguard/providers'

const res = await forward({
  model: 'gpt-4o',
  messages: req.body.messages,
  budget: project.dailyCap,
})

200 OK · 842ms
Getting Started

From first request to production in three steps

Connect once, apply policies centrally, and monitor every token from day one.

app/index.ts
TypeScript
import OpenAI from 'openai'

const client = new OpenAI({
  apiKey: process.env.PROXYGUARD_API_KEY,
  baseURL: 'https://proxy.yourdomain.com/v1', // ← one line change
})

// Use it exactly like before — same SDK, same methods
const res = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: prompt }],
})
1
Setup

Switch one URL and keep shipping.

Replace your provider base URL with your ProxyGuard endpoint. Your SDK, prompts, and app logic stay the same.

  • Works with any OpenAI-compatible SDK
  • Python, Node, Go, curl — any client
  • Zero-downtime migration
2
Policy

Set guardrails once.

Define spend caps, rate limits, IP allowlists, and failover priority in one place.

  • Budget alerts at 75% and 90%
  • Per-key and per-project rate limiting
  • Automatic provider failover
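The guardrails listed above can be pictured as one policy object applied at the gateway. A hypothetical sketch; every key and limit here is an illustrative assumption, not ProxyGuard's configuration schema:

```typescript
// Hypothetical policy shape; keys and values are illustrative assumptions.
const policy = {
  budget: { dailyCapUsd: 50, alertAt: [0.75, 0.9] }, // alerts at 75% and 90%
  rateLimit: { perKeyRpm: 60, perProjectRpm: 600 },  // requests per minute
  network: { ipAllowlist: ['10.0.0.0/8'] },
  failover: ['openai/gpt-4o', 'anthropic/claude-3.5-sonnet'], // priority order
}

// A per-key rate-limit check: allow while the rolling count is under the cap.
function allowRequest(countLastMinute: number, limitRpm: number): boolean {
  return countLastMinute < limitRpm
}

console.log(allowRequest(59, policy.rateLimit.perKeyRpm)) // → true
console.log(allowRequest(60, policy.rateLimit.perKeyRpm)) // → false
```

Centralizing these rules is what makes the Setup step a one-line change: clients never see the policy, only its effects.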
3
Insight

See every request clearly.

Capture cost, latency, model, and token data for every request. Export or stream it to your own tools.

  • Live request logs with metadata
  • Cost attribution by project and model
  • Compliance-ready audit trail
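Cost attribution by project is a straightforward reduction over the request log. A sketch assuming an illustrative log-entry shape (not ProxyGuard's actual export format):

```typescript
// Illustrative log entries; fields mirror the metadata listed above.
type LogEntry = { project: string; model: string; costUsd: number; latencyMs: number }

const logs: LogEntry[] = [
  { project: 'search', model: 'gpt-4o', costUsd: 0.012, latencyMs: 842 },
  { project: 'search', model: 'claude-3.5-sonnet', costUsd: 0.009, latencyMs: 650 },
  { project: 'chatbot', model: 'gpt-4o', costUsd: 0.02, latencyMs: 910 },
]

// Sum cost per project for attribution.
const byProject = new Map<string, number>()
for (const e of logs) {
  byProject.set(e.project, (byProject.get(e.project) ?? 0) + e.costUsd)
}

console.log(byProject.get('search')?.toFixed(3)) // → 0.021
```

The same fold works per model, per key, or per day, which is how one log stream can feed billing, alerting, and audit at once.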
Providers

Every provider, one gateway

Add or swap providers without rewriting integrations. ProxyGuard handles routing and failover while your team keeps one stable API surface.

Put AI spend and reliability on autopilot

Route every model call through one policy layer for budgets, failover, and request-level analytics.