Deploying TextLayer Core

This guide covers deploying applications built with TextLayer Core using Terraform for infrastructure as code. TextLayer Core includes Terraform scripts in the repository that define the AWS infrastructure components needed for a production deployment.

Terraform Overview

TextLayer Core uses Terraform to define, provision, and manage the cloud infrastructure required for deployment. The Terraform scripts are located in the infra directory of the textlayer-core repository and are organized into three modules:
  • Network: VPC, subnets, DNS configurations
  • Backend: Core infrastructure components
  • Langfuse ECS: Langfuse observability platform running on ECS
These modules create a complete AWS environment for running TextLayer Core in production with appropriate networking, compute resources, and observability solutions.
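Based on the paths used in the commands later in this guide, the infra directory is laid out roughly as follows; each module is initialized and applied independently and keeps its own state file:
infra/
  network/        # VPC, subnets, DNS configurations
  backend/        # core application infrastructure
  langfuse-ecs/   # Langfuse observability platform on ECS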

Deployment Prerequisites

Before deploying TextLayer Core, ensure you have:
  1. AWS Account: Access to an AWS account with appropriate permissions
  2. AWS CLI: Installed and configured with credentials
  3. Terraform: Version 1.0 or later installed
  4. S3 Bucket: For storing Terraform state files (e.g., tfstate-core-dev)
  5. textlayer-core Repository: Cloned locally
Terraform state contains sensitive information. Always use encrypted S3 buckets for state storage and restrict access to authorized personnel only.
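If the state bucket does not exist yet, you can create one with default encryption, versioning, and public access blocked. This is a generic AWS CLI sketch, not a script from the repository; adjust the bucket name and region to match the values you pass to terraform init:
# Create the bucket (us-east-1 requires no LocationConstraint)
aws s3api create-bucket --bucket tfstate-core-dev --region us-east-1

# Enable default server-side encryption and keep state history
aws s3api put-bucket-encryption --bucket tfstate-core-dev \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
aws s3api put-bucket-versioning --bucket tfstate-core-dev \
  --versioning-configuration Status=Enabled

# Block all public access to the state bucket
aws s3api put-public-access-block --bucket tfstate-core-dev \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true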

Deployment Workflow

Step 1: Initialize Terraform

The first step is to initialize Terraform with the S3 backend configuration for state storage:
cd ~/repos/textlayer-core/infra/network

terraform init \
  -backend-config="bucket=tfstate-core-dev" \
  -backend-config="key=state-core" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"
This command initializes Terraform in your working directory, downloads the required provider plugins, and configures the S3 backend for state storage.
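The -backend-config flags supply values for the S3 backend declared in the module. The repository's exact declaration is not shown here, but a partial backend block along these lines is what the flags complete:
terraform {
  backend "s3" {}  # bucket, key, region, and encrypt are provided via -backend-config at init time
}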

Step 2: Plan the Infrastructure Changes

Generate and review an execution plan to see what actions Terraform will take to change your infrastructure:
terraform plan -out=tfplan
Review the plan output carefully to ensure it matches your expected changes.
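If you want to re-inspect the saved plan before applying it, terraform show renders it in human-readable or JSON form:
terraform show tfplan          # human-readable summary of the saved plan
terraform show -json tfplan    # machine-readable output, useful for automated policy checks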

Step 3: Apply the Infrastructure Changes

Apply the execution plan to create or update the infrastructure:
terraform apply tfplan
Or, if you didn’t create a plan file:
terraform apply
When run without a saved plan file, Terraform shows the planned actions and asks for confirmation before proceeding; applying a saved plan file such as tfplan skips the prompt, since the plan has already been reviewed.

Step 4: Verify the Deployment

After applying the changes, verify that the infrastructure was created correctly:
terraform output
This will display the outputs defined in your Terraform configuration, such as load balancer URLs, database endpoints, and other important information.
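Individual outputs can also be read directly, which is convenient in scripts. The output name below is illustrative; use the names actually defined in the module's outputs:
terraform output -raw load_balancer_dns_name   # hypothetical output name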

Infrastructure Components

TextLayer Core deployment involves several key infrastructure components:

Network Module

The network module creates the base networking infrastructure:
cd ~/repos/textlayer-core/infra/network

# Initialize and apply as shown in the workflow section
terraform init \
  -backend-config="bucket=tfstate-core-dev" \
  -backend-config="key=state-network" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"
terraform plan
terraform apply
This creates:
  • VPC with public and private subnets
  • Internet Gateway and NAT Gateways
  • Route tables and security groups
  • DNS configurations (if specified)
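Module inputs are usually supplied through a variables file. The names below are purely illustrative, not the module's actual inputs; check the module's variables.tf for the real ones:
# dev.tfvars (illustrative variable names)
environment = "dev"
vpc_cidr    = "10.0.0.0/16"
azs         = ["us-east-1a", "us-east-1b"]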

Backend Module

The backend module deploys the TextLayer Core application infrastructure:
cd ~/repos/textlayer-core/infra/backend

# Initialize and apply
terraform init \
  -backend-config="bucket=tfstate-core-dev" \
  -backend-config="key=state-backend" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"
terraform plan
terraform apply
This creates:
  • ECS cluster and task definitions
  • Load balancer and target groups
  • IAM roles and policies
  • CloudWatch log groups
  • Other required resources
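The backend module typically needs identifiers created by the network module, such as the VPC and subnet IDs. One common way to wire separately-stated modules together, though not necessarily what this repository does, is a terraform_remote_state data source pointing at the network state key:
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "tfstate-core-dev"
    key    = "state-network"
    region = "us-east-1"
  }
}

# Referenced as, e.g., data.terraform_remote_state.network.outputs.vpc_id
# (the output name is hypothetical and must exist in the network module's outputs)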

Langfuse ECS Module

The Langfuse ECS module deploys the observability platform:
cd ~/repos/textlayer-core/infra/langfuse-ecs

# Initialize and apply
terraform init \
  -backend-config="bucket=tfstate-core-dev" \
  -backend-config="key=state-langfuse" \
  -backend-config="region=us-east-1" \
  -backend-config="encrypt=true"
terraform plan
terraform apply
This creates:
  • ECS cluster for Langfuse
  • RDS database
  • Load balancer
  • Supporting resources

Scaling Strategies

TextLayer Core can be scaled in several ways to accommodate increasing loads:

Horizontal Scaling

Adjust the ECS task count to scale the number of application instances:
# Example: Modify the desired_count in the ECS service
resource "aws_ecs_service" "textlayer" {
  # ... other configuration ...
  desired_count = 3  # Increase this number to add more tasks
}
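For a one-off change you can also scale the running service directly with the AWS CLI, but Terraform will treat that as drift and revert it on the next apply unless desired_count is updated in the configuration as well. The cluster and service names below are illustrative:
aws ecs update-service --cluster textlayer-cluster --service textlayer-service --desired-count 3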

Vertical Scaling

Increase the CPU and memory allocations for ECS tasks:
# Example: Modify the CPU and memory in the task definition
resource "aws_ecs_task_definition" "textlayer" {
  # ... other configuration ...
  cpu    = "2048"  # 2 vCPU
  memory = "4096"  # 4 GB
}

Auto Scaling

Implement ECS auto scaling based on CPU utilization or custom metrics:
# Example: Add auto scaling configuration
resource "aws_appautoscaling_target" "textlayer" {
  max_capacity       = 10
  min_capacity       = 2
  resource_id        = "service/textlayer-cluster/textlayer-service"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "textlayer_cpu" {
  name               = "textlayer-cpu-autoscaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.textlayer.resource_id
  scalable_dimension = aws_appautoscaling_target.textlayer.scalable_dimension
  service_namespace  = aws_appautoscaling_target.textlayer.service_namespace

  target_tracking_scaling_policy_configuration {
    target_value       = 70.0
    scale_in_cooldown  = 300
    scale_out_cooldown = 300

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}

Best Practices

State Management

  • Always use remote state with S3 and DynamoDB for state locking (see the sketch after this list)
  • Use separate state files for different environments and modules
  • Back up state files regularly
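A minimal sketch of an S3 backend with DynamoDB locking is shown below. The table name is illustrative; the table must exist and have a string partition key named LockID:
terraform {
  backend "s3" {
    bucket         = "tfstate-core-dev"
    key            = "state-core"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "tfstate-lock"  # illustrative table name
  }
}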

Environments

  • Use separate AWS accounts for development, staging, and production
  • Create environment-specific variable files (e.g., dev.tfvars, prod.tfvars), as shown below
  • Use consistent naming conventions that include the environment
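Environment-specific variable files are passed at plan time, which keeps a single configuration reusable across environments:
terraform plan -var-file=dev.tfvars -out=tfplan    # development
terraform plan -var-file=prod.tfvars -out=tfplan   # production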

Security

  • Follow the principle of least privilege for IAM roles
  • Encrypt sensitive data in transit and at rest
  • Use security groups to restrict network access
  • Enable AWS CloudTrail and VPC Flow Logs for auditing

Continuous Integration

  • Integrate Terraform with your CI/CD pipeline
  • Run terraform plan in CI to validate changes (see the sketch after this list)
  • Use automated testing for infrastructure code
  • Only apply changes after successful review
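A typical validation stage runs formatting, validation, and a speculative plan. The exact pipeline syntax depends on your CI system, so this sketch only lists the command sequence:
terraform fmt -check -recursive      # fail the build on unformatted code
terraform init -backend-config=...   # same backend flags as in the workflow above
terraform validate                   # catch syntax and type errors
terraform plan -out=tfplan           # produce a plan artifact for review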

Troubleshooting

If you encounter issues during deployment:
  • State Lock Issues: Use terraform force-unlock if a previous operation was interrupted (see the example below)
  • AWS Permission Errors: Verify IAM permissions and role assumptions
  • Resource Creation Failures: Check AWS service quotas and region availability
  • Timeout Errors: Some resources take longer to create; adjust timeouts in the configuration
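For example, a stale lock left by an interrupted run can be cleared with the lock ID reported in the error message (first make sure no other Terraform operation is actually in progress):
terraform force-unlock <LOCK_ID>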
For more information, consult the Terraform Documentation and AWS Documentation.