Agent Loop Documentation
Overview
The agent loop is a core feature of the TextLayer system that enables autonomous, iterative conversations between users and AI agents. It allows the AI to engage in multi-step reasoning, tool usage, and problem-solving without requiring user intervention after each step.
Architecture Overview
The agent loop follows the Command Pattern architecture established in TextLayer, with clear separation between:
- Controllers: Handle HTTP requests and route to appropriate commands
- Commands: Contain business logic for processing chat messages
- Services: Provide core functionality like LLM communication and tool execution
- Tools: Specific capabilities the agent can use (SQL queries, reasoning, etc.)
Key Components
1. Core Agent Loop Implementation
ChatClient.agent_loop()
Located in: backend/textlayer/services/llm/client/chat.py
- Iterates up to max_steps times (default: 10)
- Processes chat messages in each iteration
- Handles tool calls and responses
- Implements safety mechanisms for error handling and timeouts
- Error Handling: Tracks consecutive errors (max 3) before breaking
- Timeout Protection: 300-second timeout to prevent infinite loops
- Thread Completion Detection: Checks if conversation is naturally finished
- Graceful Termination: Adds termination messages when limits are reached
Helper Methods:
- _is_thread_finished(): Determines if the conversation has naturally concluded
- _check_termination_conditions(): Checks for timeout conditions
- _terminate_thread(): Creates termination messages when limits are reached
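The control flow described above can be sketched as follows. This is a minimal illustration, not the real implementation: only agent_loop, max_steps, the three helper method names, and the documented limits (10 steps, 3 consecutive errors, 300 seconds) come from this document; everything else (the step_fn callback, message dicts) is invented for the sketch.

```python
import time


class ChatClient:
    """Minimal sketch of the agent loop described above (not the real code)."""

    MAX_CONSECUTIVE_ERRORS = 3   # break after 3 consecutive errors
    TIMEOUT_SECONDS = 300        # 300-second timeout to prevent infinite loops

    def __init__(self, step_fn):
        # step_fn stands in for one LLM call plus tool execution;
        # it returns (updated_messages, finished) for each iteration.
        self._step_fn = step_fn

    def agent_loop(self, messages, max_steps=10):
        start = time.monotonic()
        consecutive_errors = 0
        for _ in range(max_steps):
            if self._check_termination_conditions(start):
                return self._terminate_thread(messages, "timeout")
            try:
                messages, finished = self._step_fn(messages)
                consecutive_errors = 0
            except Exception:
                consecutive_errors += 1
                if consecutive_errors >= self.MAX_CONSECUTIVE_ERRORS:
                    return self._terminate_thread(messages, "too many errors")
                continue
            if self._is_thread_finished(finished):
                return messages
        return self._terminate_thread(messages, "max_steps reached")

    def _is_thread_finished(self, finished):
        # The real client inspects conversation state; the sketch
        # just trusts the flag returned by step_fn.
        return bool(finished)

    def _check_termination_conditions(self, start):
        return time.monotonic() - start > self.TIMEOUT_SECONDS

    def _terminate_thread(self, messages, reason):
        # Graceful termination: append a message instead of raising.
        return messages + [{"role": "system", "content": f"Terminated: {reason}"}]
```

Note how errors reset the counter on any successful step, so only consecutive failures trigger termination.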
2. Command Layer
ProcessChatMessageCommand
Located in: backend/textlayer/commands/threads/process_chat_message.py
This command encapsulates the business logic for processing chat messages and provides the interface between the controller layer and the LLM services.
Key Features:
- Supports both regular chat and agent loop modes via the agent_loop boolean parameter
- Validates input parameters (messages, stream, max_steps)
- Sets up the LLM session with fallback models
- Configures the toolkit of available tools
- Applies data analysis system prompt
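The validation step the command performs might look like the following sketch. The parameter names (messages, stream, agent_loop, max_steps) come from this document; the class shape, error messages, and the rule that streaming and the agent loop are mutually exclusive (stated under Development Considerations) are illustrative assumptions, not the real code.

```python
class ProcessChatMessageCommand:
    """Illustrative sketch of the command's input validation (names assumed)."""

    def __init__(self, messages, stream=False, agent_loop=False, max_steps=10):
        self.messages = messages
        self.stream = stream
        self.agent_loop = agent_loop
        self.max_steps = max_steps

    def validate(self):
        if not self.messages:
            raise ValueError("messages must not be empty")
        if self.agent_loop and self.stream:
            # Streaming is disabled in agent loop mode.
            raise ValueError("streaming is not supported with agent_loop=True")
        if self.max_steps < 1:
            raise ValueError("max_steps must be at least 1")
        return self
```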
3. Controller Layer
ThreadController
Located in: backend/textlayer/controllers/thread_controller.py
Provides a clean interface for the route layer to process chat messages.
4. Route Layer
Thread Routes
Located in: backend/textlayer/routes/thread_routes.py
Currently, the HTTP API endpoints (/chat and /chat/stream) have agent_loop=False hardcoded, meaning the agent loop is not exposed via the REST API. This suggests it’s primarily used for internal processing or CLI operations.
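The hardcoding described above amounts to something like the following sketch. Framework details are omitted and the handler/controller names are hypothetical; only the endpoint behavior (agent_loop pinned to False) is taken from this document.

```python
def chat_route(request_json, controller):
    """Hypothetical /chat handler: the REST API never enables the agent loop."""
    # agent_loop is hardcoded to False, so multi-step agentic behavior
    # is only reachable via the CLI / programmatic interface.
    return controller.process_chat_message(
        messages=request_json["messages"],
        stream=False,
        agent_loop=False,
    )
```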
5. CLI Interface
CLI Handler
Located in: backend/textlayer/cli/threads/process_chat_message.py
Provides a clean interface for programmatic access to the agent loop:
- Enables agent loop mode (agent_loop=True)
- Returns the last assistant message from the conversation
- Includes comprehensive error handling
- Integrates with Langfuse for observability
Data Flow
1. Initialization Flow
2. Agent Loop Iteration Flow
Usage Examples
1. CLI Usage
2. Programmatic Usage
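A programmatic call might look like the sketch below. The function name, import path, and return shape are assumptions standing in for the real handler in backend/textlayer/cli/threads/process_chat_message.py; check them against that module before use.

```python
# Stub standing in for the real CLI handler (name and signature hypothetical).
def process_chat_message(messages, agent_loop=True, max_steps=10):
    # The real handler runs the agent loop and returns the last
    # assistant message; this stub just echoes for illustration.
    return {"role": "assistant", "content": f"processed {len(messages)} message(s)"}


reply = process_chat_message(
    [{"role": "user", "content": "Summarize last week's sales"}],
    agent_loop=True,   # enable multi-step agent loop mode
    max_steps=10,      # cap iterations to prevent runaway loops
)
print(reply["content"])
```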
Development Considerations
- Agent loop is not exposed via REST API (hardcoded to False)
- Streaming is disabled in agent loop mode
- Tool execution is synchronous within each iteration
Best Practices
When working with the agent loop system:
- Set appropriate max_steps: Balance between allowing sufficient iterations and preventing runaway loops
- Monitor tool execution: Use Langfuse observability to track tool calls and performance
- Handle errors gracefully: The system includes built-in error handling, but consider application-specific error scenarios
- Test iteratively: Use the CLI interface for testing before integrating into larger applications
Integration with Other Components
The agent loop integrates seamlessly with other TextLayer Core components:
- Vaul Toolkit: Provides the tools that the agent can use during iterations
- LLMOps: Monitoring and observability through Langfuse integration
- FLEX Stack: Built on the same architectural principles