Environment Variables
Langfuse (self-hosted) has extensive configuration options via environment variables. These need to be passed to all application containers.
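Since every variable must reach both the web and the worker container, a common pattern is to keep them in a single env file and hand that file to each container. The following is a minimal sketch assuming a plain Docker setup; the file name langfuse.env and the image tags are illustrative, not prescribed by this page.

```bash
# Illustrative only: pass one shared env file to both application containers.
docker run --env-file ./langfuse.env -p 3000:3000 langfuse/langfuse:3
docker run --env-file ./langfuse.env -p 3030:3030 langfuse/langfuse-worker:3
```

A sketch of an env file covering the required variables follows the table below.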
Variable | Required / Default | Description |
---|---|---|
DATABASE_URL | Required | Connection string of your Postgres database. Instead of DATABASE_URL, you can also use DATABASE_HOST, DATABASE_USERNAME, DATABASE_PASSWORD, DATABASE_NAME, and DATABASE_ARGS. |
DIRECT_URL | DATABASE_URL | Connection string of your Postgres database used for database migrations. Use this if you want to use a different user for migrations or use connection pooling on DATABASE_URL. For large deployments, configure the database user with long timeouts, as migrations might take a while to complete. |
SHADOW_DATABASE_URL | | If your database user lacks the CREATE DATABASE permission, you must create a shadow database and configure SHADOW_DATABASE_URL. This is often the case if you use a cloud database. Refer to the Prisma docs for detailed instructions. |
CLICKHOUSE_MIGRATION_URL | Required | Migration URL (TCP protocol) for the ClickHouse instance. Pattern: clickhouse://<hostname>:(9000/9440) |
CLICKHOUSE_MIGRATION_SSL | false | Set to true to establish an SSL connection to Clickhouse for the database migration. |
CLICKHOUSE_URL | Required | Hostname of the ClickHouse instance. Pattern: http(s)://<hostname>:(8123/8443) |
CLICKHOUSE_USER | Required | Username of the ClickHouse database. Needs SELECT, ALTER, INSERT, CREATE, DELETE grants. |
CLICKHOUSE_PASSWORD | Required | Password of the ClickHouse user. |
CLICKHOUSE_DB | default | Name of the ClickHouse database to use. |
CLICKHOUSE_CLUSTER_ENABLED | true | Whether to run ClickHouse commands ON CLUSTER. Set to false for single-container setups. |
LANGFUSE_AUTO_CLICKHOUSE_MIGRATION_DISABLED | false | Whether to disable automatic ClickHouse migrations on startup. |
REDIS_CONNECTION_STRING | Required | Connection string of your Redis instance. Instead of REDIS_CONNECTION_STRING, you can also use REDIS_HOST, REDIS_PORT, REDIS_USERNAME, and REDIS_AUTH. To configure TLS, see the detailed Cache Configuration documentation. |
REDIS_CLUSTER_ENABLED | false | Set to true to enable Redis cluster mode. When enabled, you must also provide REDIS_CLUSTER_NODES. |
REDIS_CLUSTER_NODES | | Comma-separated list of Redis cluster nodes in the format host:port. Required when REDIS_CLUSTER_ENABLED is true. Example: redis-node1:6379,redis-node2:6379,redis-node3:6379. |
REDIS_AUTH | | Authentication string for the Redis instance or cluster. |
NEXTAUTH_URL | Required | URL of your Langfuse web deployment, e.g. https://yourdomain.com or http://localhost:3000. Required for successful authentication via OAuth and for sending valid links via the Slack integration. |
NEXTAUTH_SECRET | Required | Used to validate login session cookies. Generate a secret with at least 256 bits of entropy using openssl rand -base64 32. |
SALT | Required | Used to salt hashed API keys. Generate a secret with at least 256 bits of entropy using openssl rand -base64 32. |
ENCRYPTION_KEY | Required | Used to encrypt sensitive data. Must be 256 bits (64 hex characters); generate via openssl rand -hex 32. |
LANGFUSE_CSP_ENFORCE_HTTPS | false | Set to true to set CSP headers to only allow HTTPS connections. |
PORT | 3000 / 3030 | Port the server listens on. 3000 for web, 3030 for worker. |
HOSTNAME | localhost | In some environments it needs to be set to 0.0.0.0 to be accessible from outside the container (e.g. Google Cloud Run). |
LANGFUSE_CACHE_API_KEY_ENABLED | true | Enable or disable API key caching. Set to false to disable caching of API keys. Plain-text keys are never stored in Redis, only hashed or encrypted keys. |
LANGFUSE_CACHE_API_KEY_TTL_SECONDS | 300 | Time-to-live (TTL) in seconds for cached API keys. Determines how long API keys remain in the cache before being refreshed. |
LANGFUSE_CACHE_PROMPT_ENABLED | true | Enable or disable prompt caching. Set to false to disable caching of prompts. |
LANGFUSE_CACHE_PROMPT_TTL_SECONDS | 300 | Time-to-live (TTL) in seconds for cached prompts. Determines how long prompts remain in the cache before being refreshed. |
LANGFUSE_S3_EVENT_UPLOAD_BUCKET | Required | Name of the bucket in which event information should be uploaded. |
LANGFUSE_S3_EVENT_UPLOAD_PREFIX | "" | Prefix to store events within a subpath of the bucket. Defaults to the bucket root. If provided, must end with a /. |
LANGFUSE_S3_EVENT_UPLOAD_REGION | | Region in which the bucket resides. |
LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT | | Endpoint to use to upload events. |
LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID | | Access key for the bucket. Must have List, Get, and Put permissions. |
LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY | | Secret access key for the bucket. |
LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE | | Whether to force path style on requests. Required for MinIO. |
LANGFUSE_S3_BATCH_EXPORT_ENABLED | false | Whether to enable Langfuse S3 batch exports. This must be set to true to enable batch exports. |
LANGFUSE_S3_BATCH_EXPORT_BUCKET | Required | Name of the bucket in which batch exports should be uploaded. |
LANGFUSE_S3_BATCH_EXPORT_PREFIX | "" | Prefix to store batch exports within a subpath of the bucket. Defaults to the bucket root. If provided, must end with a /. |
LANGFUSE_S3_BATCH_EXPORT_REGION | | Region in which the bucket resides. |
LANGFUSE_S3_BATCH_EXPORT_ENDPOINT | | Endpoint to use to upload batch exports. |
LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID | | Access key for the bucket. Must have List, Get, and Put permissions. |
LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY | | Secret access key for the bucket. |
LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE | | Whether to force path style on requests. Required for MinIO. |
LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT | | Optional external endpoint for generating presigned URLs. If not provided, the main endpoint is used. Useful if Langfuse traffic to the blob storage should remain within the VPC. |
BATCH_EXPORT_PAGE_SIZE | 500 | Optional page size for streaming exports to S3 to avoid memory issues. The page size can be adjusted if needed to optimize performance. |
BATCH_EXPORT_ROW_LIMIT | 1_500_000 | Maximum number of rows that can be exported in a single batch export. |
LANGFUSE_S3_MEDIA_UPLOAD_BUCKET | Required | Name of the bucket in which media files should be uploaded. |
LANGFUSE_S3_MEDIA_UPLOAD_PREFIX | "" | Prefix to store media within a subpath of the bucket. Defaults to the bucket root. If provided, must end with a /. |
LANGFUSE_S3_MEDIA_UPLOAD_REGION | | Region in which the bucket resides. |
LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT | | Endpoint to use to upload media files. |
LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID | | Access key for the bucket. Must have List, Get, and Put permissions. |
LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY | | Secret access key for the bucket. |
LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE | | Whether to force path style on requests. Required for MinIO. |
LANGFUSE_S3_MEDIA_MAX_CONTENT_LENGTH | 1_000_000_000 | Maximum file size in bytes that is allowed for upload. Default is 1GB. |
LANGFUSE_S3_MEDIA_DOWNLOAD_URL_EXPIRY_SECONDS | 3600 | Presigned download URL expiry in seconds. Defaults to 1h. |
LANGFUSE_S3_CONCURRENT_WRITES | 50 | Maximum number of concurrent writes to S3. Useful for errors like @smithy/node-http-handler:WARN - socket usage at capacity=50. |
LANGFUSE_S3_CONCURRENT_READS | 50 | Maximum number of concurrent reads from S3. Useful for errors like @smithy/node-http-handler:WARN - socket usage at capacity=50. |
LANGFUSE_AUTO_POSTGRES_MIGRATION_DISABLED | false | Set to true to disable automatic database migrations on docker startup. Not recommended. |
LANGFUSE_LOG_LEVEL | info | Set the log level for the application. Possible values are trace , debug , info , warn , error , fatal . |
LANGFUSE_LOG_FORMAT | text | Set the log format for the application. Possible values are text , json . |
LANGFUSE_LOG_PROPAGATED_HEADERS | | Comma-separated list of HTTP header names to propagate through logs via OpenTelemetry baggage. Header names are case-insensitive and will be normalized to lowercase. Useful for debugging and observability. Example: x-request-id,x-user-id. |
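Pulling the rows marked Required together, a minimal env file might look roughly like the sketch below. All hostnames, credentials, and bucket names are placeholders for illustration and must be replaced with values from your own infrastructure.

```bash
# Sketch of a minimal env file covering the Required variables above.
# Every value is a placeholder.
DATABASE_URL=postgresql://postgres:postgres@postgres-host:5432/langfuse

CLICKHOUSE_MIGRATION_URL=clickhouse://clickhouse-host:9000
CLICKHOUSE_URL=http://clickhouse-host:8123
CLICKHOUSE_USER=clickhouse
CLICKHOUSE_PASSWORD=clickhouse

REDIS_CONNECTION_STRING=redis://default:redispassword@redis-host:6379

NEXTAUTH_URL=http://localhost:3000
# Generate the next two secrets with: openssl rand -base64 32
NEXTAUTH_SECRET=replace-with-generated-secret
SALT=replace-with-generated-salt
# Generate with: openssl rand -hex 32 (256 bits, 64 hex characters)
ENCRYPTION_KEY=replace-with-64-hex-characters

LANGFUSE_S3_EVENT_UPLOAD_BUCKET=langfuse-events
LANGFUSE_S3_MEDIA_UPLOAD_BUCKET=langfuse-media
# Only needed if LANGFUSE_S3_BATCH_EXPORT_ENABLED=true:
# LANGFUSE_S3_BATCH_EXPORT_BUCKET=langfuse-exports
```

Optional variables from the table above can be appended to the same file as needed.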
Additional Features
There are additional features that can be enabled and configured via environment variables.
- Authentication & SSO
- Automated Access Provisioning
- Caching
- Custom Base Path
- Encryption
- Headless Initialization
- Networking
- Organization Creators (EE)
- Organization Management API (EE)
- Health and Readiness Check
- Observability via OpenTelemetry
- Transactional Emails
- UI Customization (EE)