TextLayer Core API

TextLayer Core provides a comprehensive API for building AI-powered applications. This API reference documents the available endpoints, request formats, and response structures.

API Overview

TextLayer Core’s API follows RESTful principles and is designed to be easy to integrate with any application. The API enables you to:
  • Send messages to LLMs through chat endpoints
  • Receive streaming responses for real-time interactions
  • Check system health and status
  • Retrieve available models
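For instance, checking system health amounts to a single GET request. The sketch below assumes a /health path and a local base URL; neither is confirmed by this reference, so substitute the paths documented for your deployment:

// Minimal health-check sketch (TypeScript, Node 18+ or any browser).
// The base URL and the /health path are illustrative assumptions.
const BASE_URL = "http://localhost:8000";

async function checkHealth(): Promise<void> {
  const res = await fetch(`${BASE_URL}/health`);
  const body = await res.json();
  // All responses share the envelope described under "Response Format".
  console.log(body.status, body.correlation_id);
}

checkHealth().catch(console.error);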

Available Endpoints

TextLayer Core provides several endpoints for different functionalities. For detailed information about specific endpoints and their request/response formats, explore the individual endpoint documentation in this section.

Message Format

When sending messages to the chat endpoints, use the following format:
{
  "messages": [
    {
      "role": "user",
      "content": "Your message here"
    }
  ]
}
The API supports three message roles:
  • user: Messages from the end user
  • assistant: Messages from the AI assistant
  • system: System instructions that guide the assistant’s behavior
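As a sketch, a request that pairs a system instruction with a user message could be sent like this; the /chat path is an assumption, and only the message format itself is taken from this reference:

// Hypothetical chat request (TypeScript). Replace /chat with the
// actual chat endpoint documented for your deployment.
const BASE_URL = "http://localhost:8000";

async function sendChat(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "system", content: "You are a concise assistant." },
        { role: "user", content: "Your message here" },
      ],
    }),
  });
  console.log(await res.json());
}

sendChat().catch(console.error);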

Response Format

All TextLayer Core API responses follow a consistent format:
{
  "status": 200,
  "payload": {
    // Response data specific to the endpoint
  },
  "correlation_id": "123e4567-e89b-12d3-a456-426614174000"
}
The response contains:
  • status: HTTP status code (e.g., 200 for success, 400 for bad request)
  • payload: The actual response data, which varies by endpoint
  • correlation_id: A unique identifier for tracking the request
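In TypeScript, the envelope can be modeled with a generic type so that each endpoint only describes its own payload. The type and payload names below are illustrative, not part of the API:

// Illustrative typing of the shared response envelope.
interface ApiResponse<P> {
  status: number;          // HTTP status code, e.g. 200 or 400
  payload: P;              // endpoint-specific response data
  correlation_id: string;  // unique identifier for tracking the request
}

// Hypothetical payload shape for a models endpoint.
interface ModelsPayload {
  models: string[];
}

const raw = '{"status":200,"payload":{"models":["example-model"]},"correlation_id":"123e4567-e89b-12d3-a456-426614174000"}';
const parsed: ApiResponse<ModelsPayload> = JSON.parse(raw);
console.log(parsed.payload.models);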

Error Responses

Error responses use the same envelope; the status field carries the error code and the payload contains a message describing the failure:
{
  "status": 400,
  "payload": {
    "message": "Failed to validate schema"
  },
  "correlation_id": "123e4567-e89b-12d3-a456-426614174000"
}
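Because success and error responses share the envelope, client code can branch on the status field. A sketch, using the message field shown in the example above:

// Sketch: turning an error envelope into a thrown exception.
interface Envelope {
  status: number;
  payload: unknown;
  correlation_id: string;
}

function unwrap(body: Envelope): unknown {
  if (body.status >= 400) {
    const message = (body.payload as { message?: string }).message ?? "Unknown error";
    // Surface the correlation_id so the failed request can be traced.
    throw new Error(`${body.status}: ${message} (correlation_id=${body.correlation_id})`);
  }
  return body.payload;
}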

Streaming Responses

Streaming endpoints return chunked responses. The API supports both Server-Sent Events (SSE, text/event-stream) and the Vercel AI SDK streaming protocol, and selects between them based on the request’s Accept header.
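A sketch of consuming the SSE variant with the Fetch API follows; the /chat path and the exact event framing are assumptions, while the Accept header selects the protocol as described above:

// SSE consumption sketch (TypeScript, Node 18+ or any browser).
async function streamChat(): Promise<void> {
  const res = await fetch("http://localhost:8000/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "text/event-stream", // request the SSE protocol
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Your message here" }],
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk may contain one or more "data: ..." SSE lines.
    console.log(decoder.decode(value, { stream: true }));
  }
}

streamChat().catch(console.error);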