All modules:

Core library for building and executing AI agents with a graph-based architecture.

Extends the agents-core module with tools, as well as utilities for building graphs and strategies.

Provides common infrastructure and utilities for implementing agent features, including configuration, messaging, and I/O capabilities.

Provides the EventHandler feature, which allows listening and reacting to events during agent execution.

Provides the AgentMemory feature, which stores and persists facts from LLM history between agent runs and even across multiple agents.

Provides an implementation of the MessageTokenizer feature for AI agents.

Provides an implementation of the Tracing feature for AI agents.

A module that provides integration with Model Context Protocol (MCP) servers.

Comprehensive testing utilities for AI agents, providing mocking capabilities and validation tools for agent behavior.

A module that provides a framework for defining, describing, and executing tools that can be used by AI agents to interact with the environment.
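To illustrate the general shape of such a tool: a name, a description the LLM can read, and an execute function. The interface and names below are a hypothetical sketch, not Koog's actual declarations.

```kotlin
// Hypothetical sketch of an agent tool; Koog's real tool API differs in detail.
interface AgentTool {
    val name: String
    val description: String
    suspend fun execute(arguments: Map<String, String>): String
}

class CurrentTimeTool : AgentTool {
    override val name = "current_time"
    override val description = "Returns the current time in ISO-8601 format."
    override suspend fun execute(arguments: Map<String, String>): String =
        java.time.Instant.now().toString()
}
```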

Provides utilities used across other modules.

A foundational module that provides core interfaces and data structures for representing and comparing text and code embeddings.
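Comparing embeddings typically means computing a vector-similarity metric such as cosine similarity. A minimal, self-contained sketch (illustrative only; Koog's embedding interfaces are not shown here):

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embedding vectors: values near 1.0 mean "more similar".
fun cosineSimilarity(a: DoubleArray, b: DoubleArray): Double {
    require(a.size == b.size) { "Embeddings must have the same dimensionality" }
    var dot = 0.0
    var normA = 0.0
    var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

fun main() {
    val query = doubleArrayOf(0.1, 0.7, 0.2)
    val candidate = doubleArrayOf(0.2, 0.6, 0.1)
    println(cosineSimilarity(query, candidate))
}
```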

A module that provides functionality for generating and comparing embeddings using remote LLM services.

A file-based implementation of the PromptCache interface for storing prompt execution results in the file system.
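The idea is to key each cache entry by a hash of the prompt and store the response as a file. A rough sketch under that assumption; the names and signatures below are not Koog's PromptCache API.

```kotlin
import java.io.File
import java.security.MessageDigest

// Illustrative file-per-entry cache keyed by a SHA-256 hash of the prompt.
class FileBackedCache(private val dir: File) {
    init { dir.mkdirs() }

    private fun keyFor(prompt: String): String =
        MessageDigest.getInstance("SHA-256")
            .digest(prompt.toByteArray())
            .joinToString("") { "%02x".format(it) }

    fun get(prompt: String): String? =
        File(dir, keyFor(prompt)).takeIf { it.exists() }?.readText()

    fun put(prompt: String, response: String) =
        File(dir, keyFor(prompt)).writeText(response)
}
```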

Core interfaces and models for caching prompt execution results with an in-memory implementation.

A Redis-based implementation of the PromptCache interface for storing prompt execution results in a Redis database.

A client implementation for executing prompts using Anthropic's Claude models with support for images and documents.

A caching wrapper for PromptExecutor that stores and retrieves responses to avoid redundant LLM calls.
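Conceptually this is a decorator: the wrapper consults the cache before delegating to the underlying executor. A simplified sketch with hypothetical interfaces (the real PromptExecutor API is richer than a string-in, string-out call):

```kotlin
// Decorator sketch: check the cache, otherwise delegate and remember the result.
// SimplePromptExecutor is a stand-in, not Koog's actual PromptExecutor.
interface SimplePromptExecutor {
    suspend fun execute(prompt: String): String
}

class CachedPromptExecutor(
    private val delegate: SimplePromptExecutor,
    private val cache: MutableMap<String, String> = mutableMapOf(),
) : SimplePromptExecutor {
    override suspend fun execute(prompt: String): String =
        cache.getOrPut(prompt) { delegate.execute(prompt) }
}
```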

A client implementation for executing prompts using Google Gemini models with comprehensive multimodal support.

Implementations of PromptExecutor for executing prompts with Large Language Models (LLMs).

A comprehensive module that provides unified access to multiple LLM providers (OpenAI, Anthropic, OpenRouter) for prompt execution.

Core interfaces and models for executing prompts against language models.

A client implementation for executing prompts using local Ollama models with limited multimodal support.

A client implementation for executing prompts using OpenAI's GPT models with support for images and audio.

A client implementation for executing prompts using OpenRouter's API to access various LLM providers with multimodal support.

A module that provides abstractions and implementations for working with Large Language Models (LLMs) from various providers.

A utility module for creating and manipulating Markdown content with a fluent builder API.
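A fluent builder of this kind accumulates formatted blocks and renders them to a string; the same idea underlies the XML builder module listed below. A minimal sketch (the names here are illustrative, not Koog's DSL):

```kotlin
// Minimal fluent Markdown builder; illustrative only.
class MarkdownBuilder {
    private val sb = StringBuilder()
    fun h1(text: String) = apply { sb.append("# $text\n\n") }
    fun paragraph(text: String) = apply { sb.append("$text\n\n") }
    fun bullets(vararg items: String) = apply {
        items.forEach { sb.append("- $it\n") }
        sb.append("\n")
    }
    fun build(): String = sb.toString().trimEnd() + "\n"
}

fun markdown(block: MarkdownBuilder.() -> Unit): String =
    MarkdownBuilder().apply(block).build()

fun main() {
    print(markdown {
        h1("Example document")
        paragraph("A short introduction.")
        bullets("First item", "Second item")
    })
}
```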

A core module that defines data models and parameters for controlling language model behavior.
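Typical parameters of this kind include sampling temperature, token limits, and nucleus-sampling thresholds. A hypothetical model for illustration (field names are assumptions, not Koog's classes):

```kotlin
// Hypothetical generation-parameter model.
data class GenerationParams(
    val temperature: Double = 0.7, // higher = more random sampling
    val maxTokens: Int? = null,    // cap on generated tokens; null = provider default
    val topP: Double = 1.0,        // nucleus-sampling threshold
)
```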

A module for defining, parsing, and formatting structured data in various formats.
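For example, structured output is often modelled as a typed class that an LLM's response is parsed into. A sketch using kotlinx.serialization (assumed as a dependency here; Koog's structured-data support adds schema definition and formatting on top of this idea):

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

// Illustrative only: parse an LLM's JSON response into a typed structure.
@Serializable
data class WeatherReport(val city: String, val temperatureC: Int)

fun main() {
    val raw = """{"city": "Berlin", "temperatureC": 18}"""
    val report = Json { ignoreUnknownKeys = true }.decodeFromString<WeatherReport>(raw)
    println(report) // WeatherReport(city=Berlin, temperatureC=18)
}
```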

A module that provides interfaces and implementations for tokenizing text and counting tokens when working with Large Language Models (LLMs).
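Token counting is model-specific, but the contract is simple: text in, token count out. A rough sketch with a crude whitespace heuristic (real tokenizers use the model's vocabulary; the names here are assumptions, not Koog's interfaces):

```kotlin
// Illustrative tokenizer contract with a whitespace-based approximation.
interface SimpleTokenizer {
    fun countTokens(text: String): Int
}

class WhitespaceTokenizer : SimpleTokenizer {
    override fun countTokens(text: String): Int =
        text.split(Regex("\\s+")).count { it.isNotBlank() }
}

fun main() {
    println(WhitespaceTokenizer().countTokens("How many tokens is this?")) // 5
}
```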

A utility module for creating and manipulating XML content with a fluent builder API.