Frequently Asked Questions

This page provides answers to commonly asked questions about TextLayer Core. If you don’t find an answer to your question here, please check our Resource Hub or contact support.

General Questions

What is TextLayer Core?

TextLayer Core is an AI enablement layer designed to simplify the integration of Large Language Models (LLMs) into applications. It provides a structured architecture pattern, pre-built integrations for multiple LLM providers, search capabilities, and observability tools.

Who is TextLayer Core designed for?

TextLayer Core is designed for developers and organizations looking to build AI-powered applications without dealing with the complexity of direct LLM integration, observability setup, and deployment management.

What can I build with TextLayer Core?

With TextLayer Core, you can build:
  • Internal AI tools like knowledge bases and chatbots
  • LLM-powered APIs with consistent architecture
  • Applications with standardized AI patterns across teams
  • AI systems with robust monitoring for production use
  • Custom tools that extend AI capabilities using a standardized framework

Is TextLayer Core open source?

Yes, TextLayer Core is an open-source project available on GitHub. We welcome contributions from the community.

What programming languages does TextLayer Core support?

TextLayer Core is written in Python and provides a Python-based backend. Applications written in any language can integrate with it through its API endpoints.

Technical Questions

Which LLM providers does TextLayer Core support?

TextLayer Core supports multiple LLM providers through LiteLLM integration, including:
  • OpenAI (GPT-4, GPT-3.5, etc.)
  • Anthropic (Claude)
  • Cohere
  • Google (PaLM, Gemini)
  • Mistral AI
  • Hugging Face models
  • And many others
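
To illustrate what provider-agnostic calls look like, here is a minimal sketch using LiteLLM directly. The model identifiers are only examples, and the way TextLayer Core wraps these calls internally may differ:

```python
# Minimal sketch of a provider-agnostic completion call via LiteLLM.
# The model names are illustrative; API keys are read from environment
# variables such as OPENAI_API_KEY or ANTHROPIC_API_KEY.
from litellm import completion

messages = [{"role": "user", "content": "Summarize TextLayer Core in one sentence."}]

# The call shape stays the same across providers; only the model string changes.
openai_response = completion(model="gpt-4o-mini", messages=messages)
claude_response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

print(openai_response.choices[0].message.content)
print(claude_response.choices[0].message.content)
```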

How does TextLayer Core handle vector search?

TextLayer Core uses OpenSearch/Elasticsearch for vector search capabilities. It provides:
  • Semantic search with embeddings
  • Hybrid search combining keyword and semantic search
  • Configurable retrieval strategies
  • Document chunking and preprocessing utilities
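
For a sense of what happens under the hood, here is a sketch of a raw k-NN query issued with the opensearch-py client. The index name, vector field, and dimension are assumptions for illustration; TextLayer Core's retrieval layer builds queries like this for you:

```python
# Sketch of a k-NN query against OpenSearch using opensearch-py.
# "documents" and "embedding" are hypothetical index/field names.
import os
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[os.environ["ELASTICSEARCH_URL"]],
    http_auth=(os.environ["ELASTICSEARCH_USER"], os.environ["ELASTICSEARCH_PASSWORD"]),
    verify_certs=False,  # only for local development with self-signed certificates
)

query_vector = [0.1] * 1536  # embedding of the user query; dimension must match the index mapping

response = client.search(
    index="documents",
    body={
        "size": 5,
        "query": {"knn": {"embedding": {"vector": query_vector, "k": 5}}},
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text", "")[:80])
```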

What observability features does TextLayer Core provide?

TextLayer Core integrates with Langfuse to provide comprehensive observability features:
  • Structured logging and tracing of LLM interactions
  • Prompt and response tracking
  • Cost monitoring and analytics
  • Performance metrics and evaluation
  • User feedback collection and analysis
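
As a rough sketch of the underlying mechanism, LiteLLM can send traces to Langfuse through its callback system. TextLayer Core configures tracing for you; the snippet below only shows the idea, and assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY (and optionally LANGFUSE_HOST) are set:

```python
# Route LLM calls through Langfuse via LiteLLM's callback mechanism.
import litellm
from litellm import completion

# Send successful and failed calls to Langfuse for tracing and cost tracking.
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    metadata={"trace_name": "faq-example"},  # shows up as the trace name in Langfuse
)
print(response.choices[0].message.content)
```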

How do I set up environment variables for TextLayer Core?

TextLayer Core supports multiple methods for managing environment variables:
  • Doppler (recommended for production and team environments)
  • .env files (for local development)
  • Direct environment variable configuration
  • Other secrets management tools like HashiCorp Vault, Keeper, and Infisical
For details, see our Secrets Management documentation.
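
For local development, a minimal sketch of the .env path looks like the following. With Doppler or another secrets manager, the variables are injected into the process environment before startup, so application code reads them the same way in both cases. The default URL shown is only an example:

```python
# Load a local .env file with python-dotenv; a no-op if no .env exists,
# e.g. when Doppler injects the variables instead.
import os
from dotenv import load_dotenv

load_dotenv()

opensearch_url = os.environ.get("ELASTICSEARCH_URL", "http://localhost:9200")  # default is illustrative
print("Using OpenSearch at:", opensearch_url)
```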

What is the FLEX Stack?

The FLEX Stack is the architectural foundation of TextLayer Core:
  • F - Flask: Provides the web framework foundation with a modular structure
  • L - LiteLLM/Langfuse: Handles LLM provider integration and observability
  • E - Elasticsearch/OpenSearch: Powers vector search capabilities
  • X - eXternal services: Enables custom tool integration through Vaul
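
To make the Flask ("F") layer concrete, here is an illustrative sketch of a small app that registers a blueprint. The blueprint and route names are hypothetical, and TextLayer Core's actual module layout may differ:

```python
# Hypothetical example of the modular Flask structure: an app factory
# that registers a chat blueprint.
from flask import Flask, Blueprint, jsonify, request

chat_bp = Blueprint("chat", __name__, url_prefix="/api/chat")

@chat_bp.post("/")
def chat():
    payload = request.get_json(force=True)
    # In TextLayer Core this is where the LiteLLM, search, and tool layers
    # would be invoked; here we simply echo the prompt back.
    return jsonify({"echo": payload.get("message", "")})

def create_app() -> Flask:
    app = Flask(__name__)
    app.register_blueprint(chat_bp)
    return app

if __name__ == "__main__":
    create_app().run(debug=True)
```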

Troubleshooting Questions

TextLayer Core fails to start with environment variable errors

If you’re seeing environment variable errors when starting TextLayer Core:
  1. Check that all required environment variables are set (see the README.md for a list)
  2. Verify that your .env file is in the correct location or Doppler is properly configured
  3. Ensure that your environment variables have the correct format and values
  4. Try running with FLASK_DEBUG=1 for more detailed error messages
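
A quick diagnostic like the one below can report which variables are unset before you start the app. The names listed are examples drawn from this page; consult the README.md for the authoritative list:

```python
# Standalone check, not part of TextLayer Core itself: report unset
# environment variables without printing their values.
import os

expected = [
    "ELASTICSEARCH_URL",
    "ELASTICSEARCH_USER",
    "ELASTICSEARCH_PASSWORD",
    "EMBEDDING_MODEL",
    "KNN_EMBEDDING_DIMENSION",
]

missing = [name for name in expected if not os.environ.get(name)]
print("Missing:", ", ".join(missing) if missing else "none")
```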

I’m getting OpenSearch connection errors

If you’re experiencing OpenSearch connection issues:
  1. Verify that OpenSearch is running (check with docker ps if using Docker)
  2. Confirm that the ELASTICSEARCH_URL environment variable is set correctly
  3. Check that the username and password are correct in ELASTICSEARCH_USER and ELASTICSEARCH_PASSWORD
  4. Ensure that your network allows connections to the OpenSearch instance
  5. Try restarting the OpenSearch container with docker-compose restart opensearch
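
A quick connectivity check against the cluster, using the same environment variables the application reads, can narrow the problem down. A 200 response from the standard _cluster/health endpoint means the URL and credentials are reachable; TLS or networking problems will surface as exceptions here:

```python
# Standalone connectivity check against OpenSearch/Elasticsearch.
import os
import requests

url = os.environ["ELASTICSEARCH_URL"].rstrip("/") + "/_cluster/health"
auth = (os.environ["ELASTICSEARCH_USER"], os.environ["ELASTICSEARCH_PASSWORD"])

resp = requests.get(url, auth=auth, timeout=5, verify=False)  # verify=False only for local self-signed certs
print(resp.status_code, resp.json().get("status"))
```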

How can I debug LLM API errors?

To debug issues with LLM API calls:
  1. Set LOG_LEVEL=DEBUG to see detailed logs of API requests and responses
  2. Check that your API keys are correct and have sufficient permissions
  3. Verify that the selected model is available with your current subscription
  4. Look for rate limiting or quota errors in the logs
  5. Use the Langfuse dashboard to inspect the detailed trace of LLM interactions
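
When reproducing the failure in isolation, it can help to catch provider errors explicitly. LiteLLM maps provider errors to OpenAI-style exception classes, so rate limits and credential problems can be distinguished in a small test script like this sketch:

```python
# Sketch of surfacing common LLM API failures explicitly.
import logging
from litellm import completion
from litellm.exceptions import AuthenticationError, RateLimitError

logging.basicConfig(level=logging.DEBUG)  # mirrors LOG_LEVEL=DEBUG for this script

try:
    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(response.choices[0].message.content)
except AuthenticationError:
    logging.exception("API key is missing, invalid, or lacks permissions")
except RateLimitError:
    logging.exception("Rate limit or quota exceeded for the selected model")
```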

My embeddings aren’t working correctly

If you’re having issues with embeddings:
  1. Verify that your embedding model is correctly specified in EMBEDDING_MODEL
  2. Check that your OpenAI/provider API key has permission to use embeddings
  3. Look for any errors in the logs related to embedding generation
  4. Try a different embedding model to see if the issue persists
  5. Check that the embedding dimension matches KNN_EMBEDDING_DIMENSION
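
The dimension check can be done directly: generate a single embedding with the configured model and compare its length against KNN_EMBEDDING_DIMENSION. The default model name in this sketch is only an example:

```python
# Sketch of verifying that the embedding dimension matches the index config.
import os
from litellm import embedding

model = os.environ.get("EMBEDDING_MODEL", "text-embedding-3-small")  # default shown is illustrative
expected_dim = int(os.environ["KNN_EMBEDDING_DIMENSION"])

response = embedding(model=model, input=["dimension check"])
actual_dim = len(response.data[0]["embedding"])

print(f"model={model} expected={expected_dim} actual={actual_dim}")
assert actual_dim == expected_dim, "Embedding dimension does not match KNN_EMBEDDING_DIMENSION"
```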

How do I troubleshoot tool execution failures?

If your custom tools are failing:
  1. Review the logs for detailed error messages
  2. Verify that the tool is properly registered in your toolkit
  3. Check that the tool function handles exceptions appropriately
  4. Test the tool function independently of the LLM to ensure it works correctly
  5. Confirm that the tool’s schema correctly matches its implementation
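
For step 4, testing the tool outside the LLM loop is often the fastest way to isolate the problem. The tool below is hypothetical; the point is simply to call the underlying function directly with known inputs so logic or schema errors show up without an LLM in the way:

```python
# Illustrative direct test of a hypothetical tool function.
def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order and return its status."""
    if not order_id:
        raise ValueError("order_id is required")
    return {"order_id": order_id, "status": "shipped"}

if __name__ == "__main__":
    # Exercise the happy path and the failure path directly.
    print(get_order_status("A-1001"))
    try:
        get_order_status("")
    except ValueError as exc:
        print("Tool raised as expected:", exc)
```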