Kong AI Gateway 3.10: Smarter Tools for Smarter AI
The latest release of Kong AI Gateway, version 3.10, introduces a suite of new features aimed at helping organizations better manage their generative AI usage. With growing concerns over data privacy, hallucinations from LLMs, and inconsistent developer experiences, this update focuses on simplifying governance while keeping sensitive data secure. The goal is to let teams move faster without losing control.
As generative AI adoption spreads across industries, the challenges are no longer about gaining access to powerful models—they’re about orchestrating them effectively. Enterprises need scalable, secure systems that integrate easily into existing development workflows. Kong AI Gateway meets this need by consolidating key capabilities like observability, governance, and LLM access into a single API-aware control layer. It lets platform teams standardize how AI is deployed and consumed across their organizations.
One of the standout features in this release is the new AI RAG Injector plugin, designed to reduce hallucinations in large language models. RAG—Retrieval-Augmented Generation—has emerged as a key strategy for grounding model responses in reliable data. Traditionally, though, building a RAG pipeline involves a lot of developer time and infrastructure: generating embeddings, storing them in a vector database, and coding prompt enrichment logic manually.
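To make the manual pipeline concrete, the toy Python sketch below walks through the three steps just described: generating embeddings, storing them, and enriching a prompt with retrieved context. It is purely illustrative, not Kong's implementation: the `embed` function is a hypothetical stand-in for a real embedding model, and a plain in-memory list stands in for a vector database.

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a real embedding model: a normalized
    # bag-of-characters vector. A production pipeline would call an
    # embedding API or a local model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Step 1: embed vetted documents and "store" them (an in-memory list
# stands in for a vector database).
documents = [
    "Kong AI Gateway routes LLM traffic through a single control layer.",
    "Bananas are rich in potassium.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Step 2: rank stored documents by similarity to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def enrich(prompt: str) -> str:
    # Step 3: prepend retrieved context to the prompt before it
    # reaches the model.
    context = "\n".join(retrieve(prompt))
    return f"Answer using this context:\n{context}\n\nQuestion: {prompt}"

print(enrich("Kong AI Gateway LLM control layer"))
```

Every one of these steps is code a team would otherwise have to write, deploy, and keep consistent across applications, which is exactly the burden the plugin is meant to lift.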
Kong now automates all of that. With the AI RAG Injector, the gateway itself handles embedding generation, retrieves relevant content from vector databases, and enriches prompts in real time, without requiring developers to embed that logic in each application. This not only cuts complexity but also ensures consistent, vetted data usage across your entire platform. It's a big step toward smarter, safer, and more scalable AI integration.
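As a rough sketch of what enabling this might look like, the declarative snippet below attaches the plugin to a route. The field names (`embeddings`, `vectordb`, and their sub-fields) and values are assumptions modeled on the shape of Kong's other AI plugin configs, so check the official ai-rag-injector reference for the exact schema before using it.

```yaml
# Hypothetical declarative config; field names are illustrative
# and should be verified against the official plugin docs.
plugins:
  - name: ai-rag-injector
    route: chat-route            # assumed route name
    config:
      embeddings:
        model:
          provider: openai
          name: text-embedding-3-small
      vectordb:
        strategy: redis
        dimensions: 1536
        distance_metric: cosine
        redis:
          host: redis.example.com  # placeholder host
          port: 6379
```

With a config like this in place, applications keep sending ordinary chat requests to the gateway route, and the retrieval and prompt enrichment happen transparently on the way to the model.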