Agumbe AI Gateway is the central layer between your applications and the large language model providers they depend on. Instead of wiring every application directly to OpenAI, Anthropic, or other providers, teams call a single Agumbe endpoint and let the gateway handle model routing, authentication, guardrails, usage tracking, and operational visibility.

At the API layer, Agumbe exposes chat completions and embeddings endpoints, so teams integrate with familiar request and response shapes while still gaining Agumbe-specific controls. Your application sends a request to the gateway, names a model or an Agumbe model alias, and receives a normalized response from the underlying provider.

Production AI systems need more than a direct SDK call: they need a stable way to manage provider changes, inspect request behavior, apply policy, monitor spend, and give teams a shared operating surface. Agumbe AI Gateway provides that foundation through one integration point for chat completions and embeddings, multi-provider routing, model aliases, app-level guardrails, request observability, usage controls, and billing visibility in the Agumbe Console.

## Documentation Index
Fetch the complete documentation index at: https://agumbe.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
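The request flow described in the overview (send an OpenAI-style chat payload to the gateway, naming a model or Agumbe model alias) can be sketched as follows. This is a minimal illustration, not the official client: the base URL, API key, endpoint path, and the `support-chat` alias are all assumptions for your own deployment.

```python
import json
import urllib.request

AGUMBE_BASE_URL = "https://my-gateway.example.com"  # hypothetical gateway URL
AGUMBE_API_KEY = "agumbe-key-..."                   # placeholder credential

def build_chat_request(model_alias, messages):
    # OpenAI-style chat payload; "model" can name a provider model
    # directly or an Agumbe model alias that the gateway resolves.
    return {"model": model_alias, "messages": messages}

def send_chat(payload):
    # Posts the payload to the (assumed) chat completions path on the
    # gateway and returns the normalized JSON response. Not executed here.
    req = urllib.request.Request(
        f"{AGUMBE_BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {AGUMBE_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    "support-chat",
    [{"role": "user", "content": "Summarize our refund policy."}],
)
print(payload["model"])  # → support-chat
```

The same pattern applies to embeddings: only the endpoint path and payload shape change, while authentication, aliasing, and response normalization stay with the gateway.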