Self-hosted deployment - v3 Preview - Open Source LLM Observability
This guide covers a developer preview which is not suitable for production use. v3 is under active development and we plan to ship a production-ready version by the end of November 2024. We share this information to gather feedback from our awesome developer community.
For a production-ready setup, follow the self-hosting guide or consider using Langfuse Cloud maintained by the Langfuse team.
If you are on a v2 setup and want to migrate to the v3 developer preview (not recommended in production environments), take a look at our migration guide.
If you have any questions or run into issues, please reach out to us on Discord, contact the maintainers at support@langfuse.com, or join the GitHub Discussion.
Components
Langfuse consists of multiple storage components and two Docker containers:
- Langfuse Web: The main web application serving the Langfuse UI and APIs.
- Langfuse Worker: A worker that asynchronously processes events.
- Postgres: The main database for transactional workloads.
- Redis: A fast in-memory data structure store. Used for queue and cache operations.
- S3/Blob Store: Object storage to persist all incoming events, multi-modal inputs, and large exports.
- ClickHouse: High-performance OLAP database that stores traces, observations, and scores.
See the chart below for an overview of the components and their interactions:
Postgres Database
Langfuse requires a persistent Postgres database to store its state. You can use a managed service on AWS, Azure, or GCP, or host it yourself. At least version 12 is required.
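For a quick local test (not a production setup), you could run Postgres in Docker; the credentials below are placeholders, and the resulting connection string is what you would pass as DATABASE_URL:

docker run -d --name postgres \
  -e POSTGRES_USER=langfuse \
  -e POSTGRES_PASSWORD=changeme \
  -e POSTGRES_DB=langfuse \
  -p 5432:5432 \
  postgres:16

# Resulting connection string (placeholder credentials):
# DATABASE_URL=postgresql://langfuse:changeme@localhost:5432/langfuse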
Redis
Langfuse uses Redis for caching and queue operations.
You can use a managed service on AWS, Azure, or GCP, or host it yourself.
At least version 7 is required, and the instance must have maxmemory-policy=noeviction configured.
You may use Valkey instead of Redis, but there is no active support from the Langfuse team as of now.
See Redis for more details on how to connect Redis to Langfuse.
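For a quick local test, a single Redis 7 container with the required eviction policy could look like this (the password is a placeholder):

docker run -d --name redis \
  -p 6379:6379 \
  redis:7 \
  redis-server --maxmemory-policy noeviction --requirepass myredissecret

# Verify the eviction policy (should print "noeviction"):
docker exec redis redis-cli -a myredissecret CONFIG GET maxmemory-policy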
S3/Blob Store
Langfuse requires an S3-compatible blob store to persist all incoming events, multi-modal inputs, and large exports. You can use a managed service on AWS, Azure, or GCP, or host it yourself using MinIO. See S3/Blob Store for more details on how to connect a blob store to Langfuse.
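For local testing without a cloud provider, MinIO can stand in as the S3-compatible store. A minimal sketch with placeholder credentials (create the bucket afterwards, e.g. via the MinIO console on port 9001, and mount a volume if the data needs to survive container restarts):

docker run -d --name minio \
  -e MINIO_ROOT_USER=minio \
  -e MINIO_ROOT_PASSWORD=miniosecret \
  -p 9000:9000 -p 9001:9001 \
  minio/minio server /data --console-address ":9001"

# Corresponding Langfuse settings (path-style access is typically required for MinIO):
# LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT=http://localhost:9000
# LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE=true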
ClickHouse
Langfuse uses ClickHouse as an OLAP database to store traces, observations, and scores. You can use the managed service by ClickHouse Cloud, or host it yourself. See ClickHouse for more details on how to connect ClickHouse to Langfuse.
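For a quick local test, a single-node ClickHouse container could look like this (placeholder credentials; for single-node setups also set CLICKHOUSE_CLUSTER_ENABLED=false, see the environment variable table below):

docker run -d --name clickhouse \
  -e CLICKHOUSE_DB=default \
  -e CLICKHOUSE_USER=clickhouse \
  -e CLICKHOUSE_PASSWORD=clickhouse \
  -p 8123:8123 -p 9000:9000 \
  clickhouse/clickhouse-server

# Corresponding Langfuse settings (HTTP port for queries, native TCP port for migrations):
# CLICKHOUSE_URL=http://localhost:8123
# CLICKHOUSE_MIGRATION_URL=clickhouse://localhost:9000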
Deploying Langfuse
Deploy the application containers to your infrastructure. You can use managed services like AWS ECS, Azure Container Instances, or GCP Cloud Run, or host them yourself.
During container startup, all database migrations are applied automatically. This can optionally be disabled via environment variables.
We recommend that you test v3 using the latest release candidate. You can find the newest tag in our GitHub Releases.
Run Langfuse Web
# Use the latest release candidate tag from GitHub Releases.
# ENCRYPTION_KEY must be 256 bits in hex; generate it via: openssl rand -hex 32
# The LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES, LANGFUSE_ASYNC_*, LANGFUSE_READ_*, and
# LANGFUSE_RETURN_FROM_CLICKHOUSE variables are temporary during the v3 migration.
docker run --name langfuse-web \
  -e DATABASE_URL=postgresql://hello \
  -e NEXTAUTH_URL=http://localhost:3000 \
  -e NEXTAUTH_SECRET=mysecret \
  -e SALT=mysalt \
  -e ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000 \
  -e CLICKHOUSE_URL=http://clickhouse:8123 \
  -e CLICKHOUSE_USER=clickhouse \
  -e CLICKHOUSE_PASSWORD=clickhouse \
  -e CLICKHOUSE_MIGRATION_URL=clickhouse://clickhouse:9000 \
  -e REDIS_HOST=localhost \
  -e REDIS_PORT=6379 \
  -e REDIS_AUTH=redis \
  -e LANGFUSE_S3_EVENT_UPLOAD_ENABLED=true \
  -e LANGFUSE_S3_EVENT_UPLOAD_BUCKET=my-bucket \
  -e LANGFUSE_S3_EVENT_UPLOAD_REGION=us-east-1 \
  -e LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  -e LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=bPxRfiCYEXAMPLEKEY \
  -e LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=true \
  -e LANGFUSE_ASYNC_INGESTION_PROCESSING=true \
  -e LANGFUSE_ASYNC_CLICKHOUSE_INGESTION_PROCESSING=true \
  -e LANGFUSE_READ_DASHBOARDS_FROM_CLICKHOUSE=true \
  -e LANGFUSE_READ_FROM_POSTGRES_ONLY=false \
  -e LANGFUSE_RETURN_FROM_CLICKHOUSE=true \
  -p 3000:3000 \
  -a STDOUT \
  langfuse/langfuse:3.0.0-rc.1
Run Langfuse Worker
# ENCRYPTION_KEY must be 256 bits in hex; generate it via: openssl rand -hex 32
# The LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES and LANGFUSE_ASYNC_* variables are
# temporary during the v3 migration.
docker run --name langfuse-worker \
  -e DATABASE_URL=postgresql://hello \
  -e SALT=mysalt \
  -e ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000 \
  -e CLICKHOUSE_URL=http://clickhouse:8123 \
  -e CLICKHOUSE_USER=clickhouse \
  -e CLICKHOUSE_PASSWORD=clickhouse \
  -e REDIS_HOST=localhost \
  -e REDIS_PORT=6379 \
  -e REDIS_AUTH=redis \
  -e LANGFUSE_S3_EVENT_UPLOAD_ENABLED=true \
  -e LANGFUSE_S3_EVENT_UPLOAD_BUCKET=my-bucket \
  -e LANGFUSE_S3_EVENT_UPLOAD_REGION=us-east-1 \
  -e LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  -e LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=bPxRfiCYEXAMPLEKEY \
  -e LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=true \
  -e LANGFUSE_ASYNC_INGESTION_PROCESSING=true \
  -e LANGFUSE_ASYNC_CLICKHOUSE_INGESTION_PROCESSING=true \
  -p 3030:3030 \
  -a STDOUT \
  langfuse/langfuse-worker:3.0.0-rc.1
Recommended sizing
For production environments, we recommend using at least 2 CPUs and 4 GB of RAM for all containers. You should run at least two instances of the Langfuse Web container for high availability. For auto-scaling, we recommend adding instances once CPU utilization exceeds 50% on either container.
Environment Variables
Langfuse accepts additional environment variables to fine-tune your deployment. You can use the same environment variables for the Langfuse Web and Langfuse Worker containers.
Variable | Required / Default | Description |
---|---|---|
DATABASE_URL | Required | Connection string of your Postgres database. Instead of DATABASE_URL, you can also use DATABASE_HOST, DATABASE_USERNAME, DATABASE_PASSWORD and DATABASE_NAME. |
DIRECT_URL | DATABASE_URL | Connection string of your Postgres database used for database migrations. Use this if you want to use a different user for migrations or use connection pooling on DATABASE_URL. For large deployments, configure the database user with long timeouts as migrations might need a while to complete. |
SHADOW_DATABASE_URL | | If your database user lacks the CREATE DATABASE permission, you must create a shadow database and configure SHADOW_DATABASE_URL. This is often the case if you use a cloud database. Refer to the Prisma docs for detailed instructions. |
CLICKHOUSE_MIGRATION_URL | Required | Migration URL (TCP protocol) for the clickhouse instance. Pattern: clickhouse://<hostname>:(9000/9440) |
CLICKHOUSE_MIGRATION_SSL | false | Set to true to establish an SSL connection to Clickhouse for the database migration. |
CLICKHOUSE_URL | Required | Hostname of the clickhouse instance. Pattern: http(s)://<hostname>:8123 |
CLICKHOUSE_USER | Required | Username of the clickhouse database. Needs SELECT, ALTER, INSERT, CREATE, DELETE grants. |
CLICKHOUSE_PASSWORD | Required | Password of the clickhouse user. |
CLICKHOUSE_CLUSTER_ENABLED | true | Whether to run ClickHouse commands ON CLUSTER. Set to false for single-container setups. |
LANGFUSE_AUTO_CLICKHOUSE_MIGRATION_DISABLED | false | Set to true to disable automatic ClickHouse migrations on container startup. |
REDIS_CONNECTION_STRING | Required | Connection string of your Redis instance. Instead of REDIS_CONNECTION_STRING, you can also use REDIS_HOST, REDIS_PORT, and REDIS_AUTH. |
LANGFUSE_S3_EVENT_UPLOAD_ENABLED LANGFUSE_S3_EVENT_UPLOAD_BUCKET LANGFUSE_S3_EVENT_UPLOAD_REGION LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE | Required | S3 configuration to upload incoming events into S3. LANGFUSE_S3_EVENT_UPLOAD_BUCKET is required for Langfuse to accept events. The other variables are optional if you use the default credentials on AWS instances and must be supplied in other cloud environments. |
NEXTAUTH_URL | Required | URL of your Langfuse web deployment, e.g. https://yourdomain.com or http://localhost:3000. Required for successful authentication via OAuth. |
NEXTAUTH_SECRET | Required | Used to validate login session cookies. Generate a secret with at least 256 bits of entropy using openssl rand -base64 32. |
SALT | Required | Used to salt hashed API keys. Generate a secret with at least 256 bits of entropy using openssl rand -base64 32. |
ENCRYPTION_KEY | Required | Used to encrypt sensitive data. Must be 256 bits, i.e. 64 hex characters. Generate via: openssl rand -hex 32. |
LANGFUSE_CSP_ENFORCE_HTTPS | false | Set to true to set CSP headers to only allow HTTPS connections. |
PORT | 3000 / 3030 | Port the server listens on. 3000 for web, 3030 for worker. |
HOSTNAME | localhost | In some environments it needs to be set to 0.0.0.0 to be accessible from outside the container (e.g. Google Cloud Run). |
LANGFUSE_DEFAULT_ORG_ID | | Configure optional default organization for new users. When users create an account they will be automatically added to this organization. |
LANGFUSE_DEFAULT_ORG_ROLE | VIEWER | Role of the user in the default organization (if set). Possible values are OWNER, ADMIN, MEMBER, VIEWER. See roles for details. |
LANGFUSE_DEFAULT_PROJECT_ID | | Configure optional default project for new users. When users create an account they will be automatically added to this project. |
LANGFUSE_DEFAULT_PROJECT_ROLE | VIEWER | Role of the user in the default project (if set). Possible values are OWNER, ADMIN, MEMBER, VIEWER. See roles for details. |
SMTP_CONNECTION_URL | | Configure optional SMTP server connection for transactional email. Connection URL is passed to Nodemailer (docs). |
EMAIL_FROM_ADDRESS | | Configure from address for transactional email. Required if SMTP_CONNECTION_URL is set. |
S3_ENDPOINT S3_ACCESS_KEY_ID S3_SECRET_ACCESS_KEY S3_BUCKET_NAME S3_REGION | | Optional S3 configuration for enabling large exports from the UI. S3_BUCKET_NAME is required to enable exports. The other variables are optional and will use the default provider credential chain if not specified. |
DB_EXPORT_PAGE_SIZE | 1000 | Optional page size for streaming exports to S3 to avoid memory issues. The page size can be adjusted if needed to optimize performance. |
LANGFUSE_AUTO_POSTGRES_MIGRATION_DISABLED | false | Set to true to disable automatic database migrations on docker startup. |
LANGFUSE_LOG_LEVEL | info | Set the log level for the application. Possible values are trace, debug, info, warn, error, fatal. |
LANGFUSE_LOG_FORMAT | text | Set the log format for the application. Possible values are text, json. |
NEXT_PUBLIC_BASE_PATH | | Set the base path for the application. This is useful if you want to deploy Langfuse on a subpath, especially when integrating Langfuse into existing infrastructure. Refer to the section below for details. |
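To generate the required secrets, you can run the commands referenced in the table locally:

# NEXTAUTH_SECRET and SALT (at least 256 bits of entropy):
openssl rand -base64 32

# ENCRYPTION_KEY (256 bits, i.e. 64 hex characters):
openssl rand -hex 32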
Authentication
Email/Password
Email/password authentication is enabled by default. Users can sign up and log in using their email and password.
To disable email/password authentication, set AUTH_DISABLE_USERNAME_PASSWORD=true. In this case, you need to set up SSO instead.
If you want to provision a default user for your Langfuse instance, you can use the LANGFUSE_INIT_* environment variables.
Password Reset
- If transactional emails are configured on your instance via the SMTP_CONNECTION_URL and EMAIL_FROM_ADDRESS environment variables, users can reset their password by using the "Forgot password" link on the login page.
- If transactional emails are not set up, passwords can be reset by following these steps (see the sketch below):
  - Update the email associated with your user account in the database, e.g. by adding a prefix.
  - Sign up again with a new password.
  - Reassign any organizations you were associated with via the organization_memberships table in the database.
  - Finally, remove the old user account from the users table in the database.
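As a rough illustration of the manual flow, here is a sketch using psql. It assumes direct access to your Postgres database and that the user record lives in the users table with an email column; the column names are assumptions, so verify them against your schema before running anything:

# Hypothetical values; replace the connection string and email with your own.
export DATABASE_URL="postgresql://langfuse:changeme@localhost:5432/langfuse"

# 1. Free up the email address by renaming the old account's email.
psql "$DATABASE_URL" -c "UPDATE users SET email = 'old-' || email WHERE email = 'me@example.com';"

# 2. Sign up again in the UI with the original email and a new password.

# 3. Look up the old and new user ids, reassign rows in organization_memberships
#    to the new user id, then delete the old account from the users table.
psql "$DATABASE_URL" -c "SELECT id, email FROM users WHERE email LIKE '%me@example.com';"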
SSO
To enable OAuth/SSO provider sign-in for Langfuse, add the following environment variables:
Provider | Variables | OAuth Redirect URL |
---|---|---|
Google | AUTH_GOOGLE_CLIENT_ID AUTH_GOOGLE_CLIENT_SECRET AUTH_GOOGLE_ALLOW_ACCOUNT_LINKING=true (optional) AUTH_GOOGLE_ALLOWED_DOMAINS=langfuse.com,google.com (optional, list of allowed domains based on hd OAuth claim) | /api/auth/callback/google |
GitHub | AUTH_GITHUB_CLIENT_ID AUTH_GITHUB_CLIENT_SECRET AUTH_GITHUB_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/github |
GitHub Enterprise | AUTH_GITHUB_ENTERPRISE_CLIENT_ID AUTH_GITHUB_ENTERPRISE_CLIENT_SECRET AUTH_GITHUB_ENTERPRISE_BASE_URL AUTH_GITHUB_ENTERPRISE_ALLOW_ACCOUNT_LINKING=false (optional) | /api/auth/callback/github-enterprise |
GitLab | AUTH_GITLAB_CLIENT_ID AUTH_GITLAB_CLIENT_SECRET AUTH_GITLAB_ISSUER (optional)AUTH_GITLAB_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/gitlab |
AzureAD/Entra ID | AUTH_AZURE_AD_CLIENT_ID AUTH_AZURE_AD_CLIENT_SECRET AUTH_AZURE_AD_TENANT_ID AUTH_AZURE_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/azure-ad |
Okta | AUTH_OKTA_CLIENT_ID AUTH_OKTA_CLIENT_SECRET AUTH_OKTA_ISSUER AUTH_OKTA_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/okta |
Auth0 | AUTH_AUTH0_CLIENT_ID AUTH_AUTH0_CLIENT_SECRET AUTH_AUTH0_ISSUER AUTH_AUTH0_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/auth0 |
AWS Cognito | AUTH_COGNITO_CLIENT_ID AUTH_COGNITO_CLIENT_SECRET AUTH_COGNITO_ISSUER AUTH_COGNITO_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/cognito |
Custom OAuth (source) | AUTH_CUSTOM_CLIENT_ID AUTH_CUSTOM_CLIENT_SECRET AUTH_CUSTOM_ISSUER AUTH_CUSTOM_NAME (any, used only in UI)AUTH_CUSTOM_ALLOW_ACCOUNT_LINKING=true (optional)AUTH_CUSTOM_SCOPE (optional, defaults to "openid email profile" ) | /api/auth/callback/custom |
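For example, to enable Okta sign-in, you could append the corresponding variables to the langfuse-web docker run command shown above (the values below are placeholders, and the exact issuer URL depends on your Okta org), and register <NEXTAUTH_URL>/api/auth/callback/okta as the redirect URL in Okta:

  -e AUTH_OKTA_CLIENT_ID=your-okta-client-id \
  -e AUTH_OKTA_CLIENT_SECRET=your-okta-client-secret \
  -e AUTH_OKTA_ISSUER=https://your-org.okta.com \
  -e AUTH_OKTA_ALLOW_ACCOUNT_LINKING=true \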
Use *_ALLOW_ACCOUNT_LINKING to allow merging accounts with the same email address. This is useful when users sign in with different providers or email/password but have the same email address. You need to be careful with this setting as it can lead to security issues if the emails are not verified.
Need another provider? Langfuse uses Auth.js, which integrates with many providers. Add a feature request on GitHub if you want us to add support for a specific provider.
Additional configuration
Variable | Description |
---|---|
AUTH_DOMAINS_WITH_SSO_ENFORCEMENT | Comma-separated list of domains that are only allowed to sign in using SSO. Email/password sign in is disabled for these domains. E.g. domain1.com,domain2.com |
AUTH_DISABLE_SIGNUP | Set to true to disable sign up for new users. Only existing users can sign in. This affects all new users that try to sign up, including those who received an invite to a project and have no account yet. |
AUTH_SESSION_MAX_AGE | Set the maximum age of the session (JWT) in minutes. The default is 30 days (43200 ). The value must be greater than 5 minutes, as the front-end application refreshes its session every 5 minutes. |
Headless Initialization
By default, you need to create a user account, organization and project via the Langfuse UI before being able to use the API. You can find the API keys in the project settings within the UI.
If you want to automatically initialize these resources, you can optionally use the following LANGFUSE_INIT_* environment variables. When these variables are set, Langfuse will automatically create the specified resources on startup if they don't already exist. This allows for easy integration with infrastructure-as-code and automated deployment pipelines.
Environment Variable | Description | Required to Create Resource | Example |
---|---|---|---|
LANGFUSE_INIT_ORG_ID | Unique identifier for the organization | Yes | my-org |
LANGFUSE_INIT_ORG_NAME | Name of the organization | No | My Org |
LANGFUSE_INIT_PROJECT_ID | Unique identifier for the project | Yes | my-project |
LANGFUSE_INIT_PROJECT_NAME | Name of the project | No | My Project |
LANGFUSE_INIT_PROJECT_PUBLIC_KEY | Public API key for the project | Yes | lf_pk_1234567890 |
LANGFUSE_INIT_PROJECT_SECRET_KEY | Secret API key for the project | Yes | lf_sk_1234567890 |
LANGFUSE_INIT_USER_EMAIL | Email address of the initial user | Yes | user@example.com |
LANGFUSE_INIT_USER_NAME | Name of the initial user | No | John Doe |
LANGFUSE_INIT_USER_PASSWORD | Password for the initial user | Yes | password123 |
The different resources depend on each other in the following way. For example, you can initialize an organization and a user without also initializing a project and API keys, but you cannot initialize a project without also initializing an organization.
Organization
├── Project (part of organization)
│ └── API Keys (set for project)
└── User (owner of organization)
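For illustration, a headless setup could pass the following variables to the langfuse-web container, using the example values from the table above (replace them with your own ids, keys, and credentials):

  -e LANGFUSE_INIT_ORG_ID=my-org \
  -e LANGFUSE_INIT_ORG_NAME="My Org" \
  -e LANGFUSE_INIT_PROJECT_ID=my-project \
  -e LANGFUSE_INIT_PROJECT_NAME="My Project" \
  -e LANGFUSE_INIT_PROJECT_PUBLIC_KEY=lf_pk_1234567890 \
  -e LANGFUSE_INIT_PROJECT_SECRET_KEY=lf_sk_1234567890 \
  -e LANGFUSE_INIT_USER_EMAIL=user@example.com \
  -e LANGFUSE_INIT_USER_NAME="John Doe" \
  -e LANGFUSE_INIT_USER_PASSWORD=password123 \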
Troubleshooting:
- If you use LANGFUSE_INIT_* in Docker Compose, do not double-quote the values (GitHub issue).
- The resources depend on one another (see note above). For example, you must create an organization to initialize a project.
Configuring the Enterprise Edition
The Enterprise Edition (compare versions) of Langfuse includes additional optional configuration options that can be set via environment variables.
Variable | Description |
---|---|
LANGFUSE_ALLOWED_ORGANIZATION_CREATORS | Comma-separated list of allowlisted users that can create new organizations. By default, all users can create organizations. E.g. user1@langfuse.com,user2@langfuse.com . |
LANGFUSE_UI_API_HOST | Customize the hostname that is referenced in the settings. Defaults to window.origin . |
LANGFUSE_UI_DOCUMENTATION_HREF | Customize the documentation link reference in the menu and settings. |
LANGFUSE_UI_SUPPORT_HREF | Customize the support link reference in the menu and settings. |
LANGFUSE_UI_FEEDBACK_HREF | Replace the default feedback widget with your own feedback link. |
LANGFUSE_UI_LOGO_DARK_MODE_HREF LANGFUSE_UI_LOGO_LIGHT_MODE_HREF | Co-brand the Langfuse interface with your own logo. Langfuse adapts to the logo width, with a maximum aspect ratio of 1:3. Narrower ratios (e.g., 2:3, 1:1) also work. The logo is fitted into a bounding box, so there are no specific pixel constraints. For reference, the example logo is 160px x 400px. |
LANGFUSE_UI_DEFAULT_MODEL_ADAPTER | Set the default model adapter for the LLM playground and evals. Options: OpenAI , Anthropic , Azure . Example: Anthropic |
LANGFUSE_UI_DEFAULT_BASE_URL_OPENAI | Set the default base URL for OpenAI API in the LLM playground and evals. Example: https://api.openai.com/v1 |
LANGFUSE_UI_DEFAULT_BASE_URL_ANTHROPIC | Set the default base URL for Anthropic API in the LLM playground and evals. Example: https://api.anthropic.com |
LANGFUSE_UI_DEFAULT_BASE_URL_AZURE_OPENAI | Set the default base URL for Azure OpenAI API in the LLM playground and evals. Example: https://{instanceName}.openai.azure.com/openai/deployments |
Health and Readiness Check Endpoint
The Langfuse web container exposes a health check endpoint at /api/public/health and a readiness check endpoint at /api/public/ready. The Langfuse worker container exposes a health check endpoint at /api/health.
The health check endpoint indicates if the application is alive and the readiness check endpoint indicates if the application is ready to serve traffic.
Access the health and readiness check endpoints:
curl http://localhost:3000/api/public/health
curl http://localhost:3000/api/public/ready
curl http://localhost:3030/api/health
The potential responses from the health check endpoint are:
- 200 OK: Both the API is functioning normally and a successful connection to the database was made.
- 503 Service Unavailable: Either the API is not functioning or it couldn't establish a connection to the database.
The potential responses from the readiness check endpoint are:
- 200 OK: The application is ready to serve traffic.
- 500 Internal Server Error: The application received a SIGTERM or SIGINT and should not receive traffic.
Applications and monitoring services can call these endpoints periodically for health updates.
By default, the Langfuse web health check endpoint does not validate whether the database is reachable, as there are cases where the database is unavailable but the application still serves traffic. If you want to run database health checks, add ?failIfDatabaseUnavailable=true to the health check endpoint.
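For example, to include the database check for the web container:

curl "http://localhost:3000/api/public/health?failIfDatabaseUnavailable=true"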
Encryption
Encryption in transit (HTTPS)
For encryption in transit, HTTPS is strongly recommended. Langfuse itself does not handle HTTPS directly. Instead, HTTPS is typically managed at the infrastructure level. There are two main approaches to handle HTTPS for Langfuse:
- Load Balancer Termination: In this approach, HTTPS is terminated at the load balancer level. The load balancer handles the SSL/TLS certificates and encryption, then forwards the decrypted traffic to the Langfuse container over HTTP. This is a common and straightforward method, especially in cloud environments.
- Pros: Simplifies certificate management as it is usually a fully managed service (e.g. AWS ALB), offloads encryption overhead from application servers.
- Cons: Traffic between load balancer and Langfuse container is unencrypted (though typically within a secure network).
- Service Mesh Sidecar: This method involves using a service mesh like Istio or Linkerd. A sidecar proxy is deployed alongside each Langfuse container, handling all network traffic including HTTPS.
- Pros: Provides end-to-end encryption (mutual TLS), offers advanced traffic management and observability.
- Cons: Adds complexity to the deployment, requires understanding of service mesh concepts.
Once HTTPS is enabled, you can set LANGFUSE_CSP_ENFORCE_HTTPS=true to ensure browsers only allow HTTPS connections when using Langfuse.
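If you self-host without a managed load balancer, a reverse proxy in front of langfuse-web can terminate TLS for you. As a minimal sketch of the load-balancer-termination approach, assuming the proxy shares a Docker network with the langfuse-web container, that you own the hypothetical domain langfuse.example.com, and that ports 80/443 are reachable for certificate issuance, Caddy can obtain certificates automatically:

docker run -d --name tls-proxy \
  --network langfuse-net \
  -p 80:80 -p 443:443 \
  caddy:2 \
  caddy reverse-proxy --from langfuse.example.com --to langfuse-web:3000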
Encryption at rest (database)
All Langfuse data is stored in your Postgres database, ClickHouse, Redis, or S3/Blob Store. Database-level encryption is recommended for a secure production deployment and is available across cloud providers.
The Langfuse team has implemented this for Langfuse Cloud and it is fully ISO27001, SOC2 Type 2 and GDPR compliant (security page).
Additional application-level encryption
In addition to in-transit and at-rest encryption, sensitive data is also encrypted or hashed at the application level.
Data | Encryption |
---|---|
API keys | Hashed using SALT |
Langfuse Console JWTs | Encrypted via NEXTAUTH_SECRET |
LLM API credentials stored in Langfuse | Encrypted using ENCRYPTION_KEY |
Integration credentials (e.g. PostHog) | Encrypted using ENCRYPTION_KEY |
Input/Outputs of LLM Calls, Traces, Spans | Work in progress, reach out to enterprise@langfuse.com if you are interested in this |
Deployment Guides
The Langfuse team and our community maintain a collection of deployment guides to illustrate how you can run Langfuse in various environments. This section is a work in progress and relies on community contributions. If you have successfully deployed Langfuse on a specific platform, consider contributing a guide either via a GitHub PR/Issue or by reaching out to the maintainers. Please also let us know if one of these guides no longer works or if you have a better solution.
Support
If you experience any issues, please join us on Discord or contact the maintainers at support@langfuse.com.
For support with production deployments, the Langfuse team provides dedicated enterprise support. To learn more, reach out to enterprise@langfuse.com or schedule a demo.
Alternatively, you may consider using Langfuse Cloud, which is a fully managed version of Langfuse. You can find information about its security and privacy here.
FAQ
- Are prebuilt ARM images available?
- Are older versions of the SDK compatible with newer versions of Langfuse?
- I cannot connect to my docker deployment, what should I do?
- I have forgotten my password
- How can I restrict access on my self-hosted instance to internal users?
- How to manage different environments in Langfuse?
- Can I deploy multiple instances of Langfuse behind a load balancer?
- What kind of telemetry does Langfuse collect?