All modules:
Core library for building and executing AI agents with a graph-based architecture.
Extends the agents-core module with tools, as well as utilities for building graphs and strategies.
Provides common infrastructure and utilities for implementing agent features, including configuration, messaging, and I/O capabilities.
Provides the EventHandler feature, which allows listening and reacting to events during agent execution.
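As a minimal self-contained sketch of the idea (the EventHandlerConfig name and hook signatures below are illustrative assumptions, not the module's actual API):

```kotlin
// Hypothetical sketch of an event-handler feature: callers register hooks,
// and the agent runtime invokes them as events occur. Names are illustrative.
class EventHandlerConfig {
    var onToolCall: (toolName: String, args: String) -> Unit = { _, _ -> }
    var onAgentFinished: (result: String) -> Unit = { }
}

fun main() {
    val config = EventHandlerConfig().apply {
        onToolCall = { tool, args -> println("Tool called: $tool($args)") }
        onAgentFinished = { result -> println("Agent finished: $result") }
    }
    // Simulate the events an agent run would emit.
    config.onToolCall("search", "query=weather")
    config.onAgentFinished("Sunny, 22°C")
}
```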
Provides the AgentMemory feature, which allows storing and persisting facts from LLM history between agent runs, and even between multiple agents.
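Conceptually, such a memory is a fact store that outlives a single run; the Fact and AgentMemoryStore types below are a hypothetical illustration, not the module's actual API:

```kotlin
// Hypothetical sketch: facts extracted from LLM history are keyed by concept
// and can be read back in a later run, or by a different agent.
data class Fact(val concept: String, val value: String)

class AgentMemoryStore {
    private val facts = mutableMapOf<String, MutableList<Fact>>()

    fun save(fact: Fact) {
        facts.getOrPut(fact.concept) { mutableListOf() }.add(fact)
    }

    fun load(concept: String): List<Fact> = facts[concept].orEmpty()
}

fun main() {
    val memory = AgentMemoryStore()
    memory.save(Fact("user-preference", "prefers metric units"))
    // A later run (or another agent) reads the persisted fact back.
    println(memory.load("user-preference"))
}
```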
Provides an implementation of the MessageTokenizer feature for AI agents.
Provides an implementation of the Tracing feature for AI agents.
A module that provides integration with Model Context Protocol (MCP) servers.
Comprehensive testing utilities for AI agents, providing mocking capabilities and validation tools for agent behavior.
A module that provides a framework for defining, describing, and executing tools that can be used by AI agents to interact with the environment.
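Conceptually, a tool pairs a descriptor the LLM can read with an execute function the agent can invoke. The sketch below illustrates that shape with hypothetical types (ToolDescriptor and AgentTool are assumptions for illustration, not the module's API):

```kotlin
// Hypothetical sketch of a tool: a name/description the LLM sees when
// deciding what to call, plus the function the agent actually runs.
data class ToolDescriptor(val name: String, val description: String)

interface AgentTool {
    val descriptor: ToolDescriptor
    fun execute(args: Map<String, String>): String
}

class CalculatorTool : AgentTool {
    override val descriptor = ToolDescriptor(
        name = "calculator",
        description = "Adds two integers passed as 'a' and 'b'."
    )

    override fun execute(args: Map<String, String>): String {
        val a = args.getValue("a").toInt()
        val b = args.getValue("b").toInt()
        return (a + b).toString()
    }
}

fun main() {
    val tool = CalculatorTool()
    println(tool.descriptor.description)
    println(tool.execute(mapOf("a" to "2", "b" to "3"))) // prints 5
}
```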
Provides utilities used across other modules.
A foundational module that provides core interfaces and data structures for representing and comparing text and code embeddings.
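Embedding vectors are typically compared by cosine similarity; the self-contained sketch below shows that standard computation (illustrative, and not necessarily this module's API):

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embedding vectors: 1.0 means identical
// direction (very similar meaning), 0.0 means unrelated.
fun cosineSimilarity(a: DoubleArray, b: DoubleArray): Double {
    require(a.size == b.size) { "Vectors must have the same dimension" }
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

fun main() {
    val code = doubleArrayOf(0.9, 0.1, 0.3)
    val text = doubleArrayOf(0.8, 0.2, 0.4)
    println(cosineSimilarity(code, text)) // close to 1.0 => similar meaning
}
```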
A module that provides functionality for generating and comparing embeddings using remote LLM services.
A file-based implementation of the PromptCache interface for storing prompt execution results in the file system.
Core interfaces and models for caching prompt execution results with an in-memory implementation.
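A minimal sketch of such a contract with an in-memory implementation; the identifiers below are illustrative assumptions rather than the module's exact interface:

```kotlin
// Hypothetical cache contract: look up a stored response by prompt key,
// or store a new one. The in-memory variant backs it with a plain map.
interface PromptCache {
    fun get(promptKey: String): String?
    fun put(promptKey: String, response: String)
}

class InMemoryPromptCache : PromptCache {
    private val entries = mutableMapOf<String, String>()
    override fun get(promptKey: String): String? = entries[promptKey]
    override fun put(promptKey: String, response: String) { entries[promptKey] = response }
}

fun main() {
    val cache: PromptCache = InMemoryPromptCache()
    cache.put("summarize: report.txt", "The report covers Q3 results.")
    println(cache.get("summarize: report.txt"))
}
```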
A Redis-based implementation of the PromptCache interface for storing prompt execution results in a Redis database.
A client implementation for executing prompts using Anthropic's Claude models with support for images and documents.
A caching wrapper for PromptExecutor that stores and retrieves responses to avoid redundant LLM calls.
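The wrapper follows the classic cache-aside decorator pattern: check the cache first and delegate to the real executor only on a miss. A sketch under assumed names (the PromptExecutor shape here is illustrative):

```kotlin
// Hypothetical decorator: serve repeated prompts from the cache so the
// underlying LLM is called at most once per distinct prompt.
interface PromptExecutor {
    fun execute(prompt: String): String
}

class CachedPromptExecutor(
    private val delegate: PromptExecutor,
    private val cache: MutableMap<String, String> = mutableMapOf(),
) : PromptExecutor {
    override fun execute(prompt: String): String =
        cache.getOrPut(prompt) { delegate.execute(prompt) } // miss => real call
}

fun main() {
    var llmCalls = 0
    val fake = object : PromptExecutor {
        override fun execute(prompt: String): String { llmCalls++; return "echo: $prompt" }
    }
    val cached = CachedPromptExecutor(fake)
    cached.execute("hi"); cached.execute("hi")
    println(llmCalls) // 1 — the second call was served from the cache
}
```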
A client implementation for executing prompts using Google Gemini models with comprehensive multimodal support.
Implementations of PromptExecutor for executing prompts with Large Language Models (LLMs).
A comprehensive module that provides unified access to multiple LLM providers (OpenAI, Anthropic, OpenRouter) for prompt execution.
Core interfaces and models for executing prompts against language models.
A client implementation for executing prompts using local Ollama models with limited multimodal support.
A client implementation for executing prompts using OpenAI's GPT models with support for images and audio.
A client implementation for executing prompts using OpenRouter's API to access various LLM providers with multimodal support.
A module that provides abstractions and implementations for working with Large Language Models (LLMs) from various providers.
A utility module for creating and manipulating Markdown content with a fluent builder API.
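A fluent Markdown builder typically takes a shape like the following sketch (function names such as h1 and bullet are assumptions for illustration, not the module's documented API):

```kotlin
// Hypothetical fluent builder: each call appends a Markdown construct,
// and build() returns the assembled document.
class MarkdownBuilder {
    private val sb = StringBuilder()
    fun h1(text: String) { sb.appendLine("# $text").appendLine() }
    fun paragraph(text: String) { sb.appendLine(text).appendLine() }
    fun bullet(items: List<String>) {
        items.forEach { sb.appendLine("- $it") }
        sb.appendLine()
    }
    fun build(): String = sb.toString()
}

fun markdown(block: MarkdownBuilder.() -> Unit): String =
    MarkdownBuilder().apply(block).build()

fun main() {
    println(markdown {
        h1("Report")
        paragraph("Findings from the last run:")
        bullet(listOf("Latency improved", "Token usage stable"))
    })
}
```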
A core module that defines data models and parameters for controlling language model behavior.
A module for defining, parsing, and formatting structured data in various formats.
A module that provides interfaces and implementations for tokenizing text and counting tokens when working with Large Language Models (LLMs).
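As an illustration, token counting can be modeled as a small interface with pluggable implementations; the heuristic below (roughly four characters per token for English text) is a common approximation, and all names are hypothetical:

```kotlin
// Hypothetical token-counting contract with a rough character-based
// heuristic; real tokenizers use a model-specific vocabulary instead.
interface Tokenizer {
    fun countTokens(text: String): Int
}

class CharHeuristicTokenizer : Tokenizer {
    override fun countTokens(text: String): Int = (text.length + 3) / 4
}

fun main() {
    val tokenizer = CharHeuristicTokenizer()
    println(tokenizer.countTokens("How many tokens does this sentence use?"))
}
```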
A utility module for creating and manipulating XML content with a fluent builder API.
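In the same spirit as the Markdown builder above, a nested XML DSL might look like this sketch (hypothetical names, not the module's actual API):

```kotlin
// Hypothetical XML builder: elements nest via lambdas, and build()
// serializes the tree with attributes and text content.
class XmlBuilder(private val tag: String) {
    private val attributes = mutableListOf<Pair<String, String>>()
    private val children = mutableListOf<XmlBuilder>()
    private var text: String? = null

    fun attribute(name: String, value: String) { attributes += name to value }
    fun text(value: String) { text = value }
    fun element(tag: String, block: XmlBuilder.() -> Unit) {
        children += XmlBuilder(tag).apply(block)
    }

    fun build(): String {
        val attrs = attributes.joinToString("") { (k, v) -> " $k=\"$v\"" }
        val body = (text ?: "") + children.joinToString("") { it.build() }
        return "<$tag$attrs>$body</$tag>"
    }
}

fun xml(root: String, block: XmlBuilder.() -> Unit): String =
    XmlBuilder(root).apply(block).build()

fun main() {
    // Prints: <request id="42"><prompt>Summarize the report</prompt></request>
    println(xml("request") {
        attribute("id", "42")
        element("prompt") { text("Summarize the report") }
    })
}
```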