This guide explains how to build tools with TextLayer Core. TextLayer Core uses the Vaul toolkit to create and manage tool calls for AI systems. Tools can be implemented either as standalone functions or with a service implementation class for more complex functionality.
In TextLayer Core, tools are functions decorated with @tool_call and @observe that can be called by AI systems. There are two main approaches to implementing tools:
Tools without a service: Simple functions that implement their functionality directly
Tools with a service implementation: Functions that use a separate service class with multiple methods
Tools without a service are implemented as standalone functions decorated with @tool_call and @observe. These are suitable for simple functionality that doesn’t require complex business logic or external service interactions.
Step 1: Import the Required Decorators

```python
from langfuse.decorators import observe
from vaul import tool_call
```
The @observe decorator from Langfuse tracks the function’s execution for monitoring and tracing, while the @tool_call decorator from Vaul enables the function to be called by AI systems.
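If you want to see the decorator mechanics in isolation, the pattern can be sketched with stand-in decorators. Note that `register`, `traced`, and `REGISTRY` below are hypothetical stand-ins for illustration only, not part of Vaul or Langfuse:

```python
import functools

REGISTRY = {}  # hypothetical stand-in for Vaul's tool registry


def register(fn):
    """Stand-in for @tool_call: records the function so it can be exposed to an LLM."""
    REGISTRY[fn.__name__] = fn
    return fn


def traced(fn):
    """Stand-in for @observe: wraps the call so each execution is recorded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1  # a real tracer would emit a span to Langfuse here
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper


@register
@traced
def echo(text: str) -> str:
    """A trivial tool."""
    return text
```

Stacking order matters the same way it does in the real tools: the tracing wrapper sits closest to the function, and the registration decorator sees the already-wrapped callable.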
Step 2: Create a Function with Type Hints and Docstrings
Define your function with proper type hints and comprehensive docstrings:
```python
@tool_call
@observe
def think(thought: str) -> str:
    """Use the tool to think about something.

    It will not obtain new information or change the database, but just
    append the thought to the log. Use it when complex reasoning or some
    cache memory is needed.

    Args:
        thought: A thought to think about.
    """
    return thought
```
This simple “think” tool gives the model room to reason about a previous or upcoming action. It’s useful for AI systems that need to self-correct or self-validate before proceeding.
Step 3: Implement More Complex Functionality (Optional)
For tools that require more functionality but don’t need a separate service class, you can import libraries and implement the functionality directly in the tool function:
```python
from typing import Any, Dict, Optional


@tool_call
@observe
def get_current_date_time(
    format: Optional[str] = None,
    timezone: Optional[str] = None,
) -> Dict[str, Any]:
    """Get the current date and time in a specified format and timezone.

    Args:
        format: Optional. The format string to format the date/time
            (e.g., "%Y-%m-%d %H:%M:%S"). If None, defaults to ISO 8601 format.
        timezone: Optional. A valid timezone string from the IANA time zone
            database (e.g., "America/New_York", "Asia/Tokyo"). Defaults to UTC.

    Returns:
        dict: A dictionary containing current_date_time and epoch.
    """
    from datetime import datetime

    import pytz

    if timezone:
        tz = pytz.timezone(timezone)
    else:
        tz = pytz.utc

    now = datetime.now(tz)
    return {
        "current_date_time": now.strftime(format) if format else now.isoformat(),
        "epoch": int(now.timestamp()),
    }
```
This datetime tool handles timezone conversions and formatting but still doesn’t require a separate service class.
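The core logic is easy to experiment with on its own. The sketch below reimplements it without the decorators, substituting the standard-library `zoneinfo` for `pytz` (an assumption for illustration; the tool above uses `pytz`):

```python
from datetime import datetime, timezone as dt_timezone
from typing import Any, Dict, Optional
from zoneinfo import ZoneInfo  # stdlib IANA timezone support (Python 3.9+)


def current_date_time(format: Optional[str] = None,
                      tz_name: Optional[str] = None) -> Dict[str, Any]:
    # Fall back to UTC when no IANA timezone name is supplied
    tz = ZoneInfo(tz_name) if tz_name else dt_timezone.utc
    now = datetime.now(tz)
    return {
        "current_date_time": now.strftime(format) if format else now.isoformat(),
        "epoch": int(now.timestamp()),
    }


result = current_date_time(format="%Y-%m-%d")
```

Passing `tz_name="Asia/Tokyo"` would localize the result the same way the `timezone` parameter does in the tool above.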
For more complex functionality, it’s often better to create a separate service class that handles the business logic, and then create a tool that uses this service. This approach provides better separation of concerns and allows the service to be reused across multiple tools.
First, create a service class with the necessary methods for your functionality:
```python
import sqlite3
from typing import Any, Dict, Optional

import pandas as pd


class SQLiteDatastore:
    """A datastore implementation for SQLite."""

    def __init__(self, database: Optional[str] = None) -> None:
        """Initialize the SQLiteDatastore.

        Args:
            database (str, optional): Path to the SQLite database file.
                If None, an in-memory database is used.
        """
        if database is None:
            database = ":memory:"
        self.connection = sqlite3.connect(database)
        self.connection.row_factory = sqlite3.Row

    def execute(self, query: str, parameters: Optional[Dict[str, Any]] = None) -> pd.DataFrame:
        """Execute a SQL query and return the result as a DataFrame.

        Args:
            query (str): The SQL query to execute.
            parameters (Dict[str, Any], optional): Parameters to include in the query.

        Returns:
            pd.DataFrame: The query result.
        """
        cursor = self.connection.cursor()
        if parameters:
            cursor.execute(query, parameters)
        else:
            cursor.execute(query)
        rows = cursor.fetchall()
        columns = [description[0] for description in cursor.description]
        return pd.DataFrame(rows, columns=columns)

    def get_columns(self, table_name: str) -> pd.DataFrame:
        """Retrieve column information for a specific table."""
        query = f"PRAGMA table_info('{table_name}')"
        return self.execute(query)

    def get_sample_data(self, table_name: str, limit: int = 5) -> pd.DataFrame:
        """Retrieve a sample of data from a specific table."""
        query = f"""
            SELECT *
            FROM {table_name}
            ORDER BY RANDOM()
            LIMIT {limit}
        """
        return self.execute(query)
```
Next, create a tool function that initializes and uses the service class:
```python
from langfuse.decorators import observe
from vaul import tool_call

from app import logger
from app.services.db.datastore import SQLiteDatastore


@tool_call
@observe
def text_to_sql(query: str) -> str:
    """Executes a SQL query for SQLite and returns the result as a markdown table.

    Args:
        query (str): The SQL query to execute on the SQLite database.

    Returns:
        str: The result of the SQL query execution, formatted as a markdown table.
    """
    logger.info(f"Converting natural language query to SQL query: {query}")

    # Initialize the SQLite datastore
    datastore = SQLiteDatastore(database="data/data.db")

    if not query:
        logger.error("No query provided")
        return ""

    # Execute the query
    result = datastore.execute(query)

    # Return the result as a markdown table
    return result.to_markdown(index=False, floatfmt=".2f") if result is not None else ""
```
This tool function initializes the SQLiteDatastore service, executes the query, and returns the result as a markdown table.
Each tool should have a single responsibility. If a tool is becoming too complex, consider splitting it into multiple tools or moving complexity to the service layer.
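As a sketch of what that split can look like, two narrowly-scoped tools can share one service instance. The `TableStore`, `list_columns`, and `count_rows` names below are hypothetical, not part of TextLayer Core:

```python
import sqlite3


class TableStore:
    """Hypothetical minimal service shared by several tools."""

    def __init__(self) -> None:
        self.connection = sqlite3.connect(":memory:")
        self.connection.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.connection.execute("INSERT INTO users VALUES (1, 'Ada')")

    def columns(self, table: str) -> list:
        # PRAGMA rows are (cid, name, type, notnull, dflt_value, pk); keep names
        cursor = self.connection.execute(f"PRAGMA table_info('{table}')")
        return [row[1] for row in cursor.fetchall()]

    def row_count(self, table: str) -> int:
        cursor = self.connection.execute(f"SELECT COUNT(*) FROM {table}")
        return cursor.fetchone()[0]


store = TableStore()


def list_columns(table: str) -> list:
    """Tool 1: describe a table's schema."""
    return store.columns(table)


def count_rows(table: str) -> int:
    """Tool 2: report how many rows a table holds."""
    return store.row_count(table)
```

Each tool stays a thin wrapper around a single service call, so adding a third tool means adding one service method rather than duplicating connection logic.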
When building tools for TextLayer Core, detailed docstrings and proper type hints are not just good programming practice; they are essential for LLM integration. Here’s why:
The @tool_call decorator from Vaul automatically generates OpenAPI schemas from your function signatures and docstrings. These schemas are then passed to LLMs like GPT-4o to enable them to understand and use your tools.
```python
# Under the hood, Vaul transforms your function into an OpenAPI schema like this:
{
    "name": "text_to_sql",
    "description": "Executes a SQL query for SQLite and returns the result as a markdown table.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The SQL query to execute on the SQLite database."
            }
        },
        "required": ["query"]
    }
}
```
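Conceptually, this comes from introspecting the function's signature and docstring. The rough sketch below illustrates the idea with the standard `inspect` module; it is a simplified approximation, not Vaul's actual implementation:

```python
import inspect
from typing import get_type_hints

# Minimal Python-type -> JSON-schema-type mapping for the sketch
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}


def rough_schema(fn) -> dict:
    """Simplified signature+docstring -> schema sketch (not Vaul's code)."""
    sig = inspect.signature(fn)
    hints = get_type_hints(fn)
    properties = {}
    required = []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    return {
        "name": fn.__name__,
        "description": fn.__doc__.strip().splitlines()[0] if fn.__doc__ else "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def text_to_sql(query: str) -> str:
    """Executes a SQL query for SQLite and returns the result as a markdown table."""
    return query


schema = rough_schema(text_to_sql)
```

This is why the docstring's first line and each parameter's type hint matter: they flow directly into the description and `parameters` object the LLM sees.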
To make your tools available to AI systems, you need to register them with the Vaul toolkit:
```python
from vaul import Toolkit

from app.services.llm.tools.db.text_to_sql import text_to_sql
from app.services.llm.tools.prompting.think import think
from app.services.llm.tools.datetime.get_current_date_time import get_current_date_time

# Create a toolkit
toolkit = Toolkit(name="TextLayer Tools")

# Add individual tools
toolkit.add_tool(text_to_sql)
toolkit.add_tool(think)
toolkit.add_tool(get_current_date_time)

# Or add multiple tools at once
toolkit.add_tools(text_to_sql, think, get_current_date_time)

# Get schemas for all tools in the toolkit (for use with LLM providers)
tool_schemas = toolkit.tool_schemas()
```
The toolkit provides methods for adding tools, generating schemas, and running tools by name. This centralized management makes it easier to maintain and extend your tool collection.
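The registration-and-dispatch pattern itself is easy to picture without the library. The stand-in below uses hypothetical names (`MiniToolkit`, `run_tool`), not Vaul's API, to illustrate adding tools and running them by name:

```python
class MiniToolkit:
    """Hypothetical stand-in illustrating the add/run-by-name pattern."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._tools = {}

    def add_tool(self, fn) -> None:
        self._tools[fn.__name__] = fn

    def add_tools(self, *fns) -> None:
        for fn in fns:
            self.add_tool(fn)

    def tool_names(self) -> list:
        return sorted(self._tools)

    def run_tool(self, name: str, arguments: dict):
        # Dispatch by name, as handling an LLM tool-call response requires
        return self._tools[name](**arguments)


def think(thought: str) -> str:
    return thought


toolkit = MiniToolkit(name="TextLayer Tools")
toolkit.add_tools(think)
result = toolkit.run_tool("think", {"thought": "check my plan"})
```

When an LLM responds with a tool call, it names the tool and supplies JSON arguments, so name-keyed dispatch like this is what turns that response back into a function invocation.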