Koog 0.4.0 Introduces Structured Output, iOS Support, and GPT-5 Integration
JetBrains has released Koog 0.4.0, a major update to its Kotlin-based framework for building AI agents. The update introduces native structured output, designed to ensure that AI-generated data maintains consistent formatting in production environments. Additionally, Koog 0.4.0 now supports Apple’s iOS platform, the latest GPT-5 models, and OpenTelemetry integration, making it easier for developers to build observable, reliable, and deployable agents across multiple platforms. The code for this update is available on GitHub.
The addition of native structured output addresses a common challenge when working with large language models (LLMs): a model often generates data in roughly the right format, but downstream processes fail on slight deviations. Koog 0.4.0 addresses this by using the model's native structured-output mode where available. When native structured output isn't supported, the framework falls back to a prompt-and-retry approach, coupled with a fixing parser powered by a separate model, to ensure that the payload matches the expected format precisely. These pragmatic guardrails, including retries and fixing strategies, make agent outputs more predictable and production-ready.
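The guardrail pattern described above can be sketched as a small retry loop with a final fixing pass. This is an illustrative sketch only: the names `requestJson`, `fixWithModel`, and the `Weather` type are assumptions for the example, not Koog's actual API.

```kotlin
// Hypothetical sketch of a prompt-and-retry loop with a fixing pass.
// Only the overall pattern (retry, then repair with a separate model)
// comes from the article; every identifier here is illustrative.

data class Weather(val city: String, val tempC: Int)

// Minimal "parser": succeeds only if the payload matches the expected shape.
fun parseWeather(raw: String): Weather? {
    val match = Regex("""\{"city":"([^"]+)","tempC":(-?\d+)\}""").find(raw.trim()) ?: return null
    return Weather(match.groupValues[1], match.groupValues[2].toInt())
}

fun structuredRequest(
    maxRetries: Int,
    requestJson: () -> String,          // would call the primary model
    fixWithModel: (String) -> String    // would ask a separate model to repair the payload
): Weather {
    var lastRaw = ""
    repeat(maxRetries) {
        lastRaw = requestJson()
        parseWeather(lastRaw)?.let { return it }   // happy path: payload already valid
    }
    // Final guardrail: hand the malformed payload to a fixing model.
    return parseWeather(fixWithModel(lastRaw))
        ?: error("Payload still malformed after fixing pass")
}

fun main() {
    var attempt = 0
    val result = structuredRequest(
        maxRetries = 2,
        requestJson = {
            attempt++
            if (attempt < 2) "Sure! Here is the JSON: ..." // slight deviation -> retry
            else """{"city":"Munich","tempC":21}"""
        },
        fixWithModel = { it } // stub; a real fixer would re-prompt another model
    )
    println(result) // prints Weather(city=Munich, tempC=21)
}
```

The key design point is that validation happens before the payload reaches downstream code, so a slightly off response triggers a retry or a repair rather than a failure later in the pipeline.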
With iOS support, Koog now extends Kotlin Multiplatform capabilities, allowing developers to write an agent once and deploy it across iOS, Android, and JVM backends. This includes consistent strategy graphs, observability hooks, and testing frameworks. However, developers targeting iOS need to upgrade to Koog 0.4.1 to fully leverage the platform. Meanwhile, support for GPT-5 and custom LLM parameters, such as reasoningEffort, enables finer control over the trade-off between output quality, cost, and latency. OpenTelemetry integration also ties Koog into monitoring and observability tools like W&B Weave and the Langfuse open-source LLM engineering platform.
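To make the quality/cost/latency trade-off concrete, a per-request effort knob might be routed like the sketch below. The `LLMParams` type and `paramsFor` helper are assumptions for illustration; only the `reasoningEffort` concept comes from the release.

```kotlin
// Illustrative sketch: mapping task complexity to a reasoning-effort setting.
// These types are hypothetical stand-ins, not Koog's actual parameter classes.

enum class ReasoningEffort { LOW, MEDIUM, HIGH }

data class LLMParams(
    val model: String,
    val reasoningEffort: ReasoningEffort,
    val temperature: Double = 0.0
)

// Cheap, fast settings for simple tasks; slower, more thorough ones otherwise.
fun paramsFor(taskComplexity: Int): LLMParams = when {
    taskComplexity < 3 -> LLMParams("gpt-5", ReasoningEffort.LOW)
    taskComplexity < 7 -> LLMParams("gpt-5", ReasoningEffort.MEDIUM)
    else -> LLMParams("gpt-5", ReasoningEffort.HIGH)
}

fun main() {
    println(paramsFor(8).reasoningEffort) // prints HIGH
}
```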
Koog 0.4.0 also enhances developer control over agent execution and reliability. The RetryingLLMClient helps mitigate LLM timeouts, network hiccups, or misbehaving tools with Conservative, Production, and Aggressive presets. Developers can install plugins on any agent and observe detailed metrics, including nested agent events, token usage, and cost breakdowns per request. Additional features like DeepSeek model support and fine-grained control options further ensure that AI agents built with Koog are robust, transparent, and ready for production deployment.
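A retry client of this kind can be sketched as presets over a generic backoff loop. Only the preset names (Conservative, Production, Aggressive) come from the article; the `RetryConfig` shape, attempt counts, and delay values below are assumptions.

```kotlin
// Sketch in the spirit of RetryingLLMClient's presets; all concrete
// numbers here are illustrative assumptions, not Koog's defaults.

data class RetryConfig(val maxAttempts: Int, val initialDelayMs: Long, val backoffFactor: Double) {
    companion object {
        val Conservative = RetryConfig(maxAttempts = 2, initialDelayMs = 2000, backoffFactor = 2.0)
        val Production   = RetryConfig(maxAttempts = 3, initialDelayMs = 1000, backoffFactor = 2.0)
        val Aggressive   = RetryConfig(maxAttempts = 5, initialDelayMs = 200,  backoffFactor = 1.5)
    }
}

// Generic wrapper: retries transient failures (timeouts, network hiccups)
// with exponential backoff, then rethrows the last error.
fun <T> withRetry(config: RetryConfig, call: () -> T): T {
    var delayMs = config.initialDelayMs
    var lastError: Exception? = null
    repeat(config.maxAttempts) { attempt ->
        try {
            return call()
        } catch (e: Exception) {
            lastError = e
            if (attempt < config.maxAttempts - 1) {
                Thread.sleep(delayMs)
                delayMs = (delayMs * config.backoffFactor).toLong()
            }
        }
    }
    throw lastError ?: IllegalStateException("unreachable")
}

fun main() {
    var calls = 0
    // Tiny delays here just to keep the demo fast.
    val answer = withRetry(RetryConfig(maxAttempts = 3, initialDelayMs = 1, backoffFactor = 2.0)) {
        calls++
        if (calls < 3) throw RuntimeException("timeout") // two transient failures
        "ok"
    }
    println("$answer after $calls attempts") // prints ok after 3 attempts
}
```

Exposing the knobs as named presets lets teams pick a failure-handling posture per deployment rather than tuning raw delays on every call site.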

