Self-Hosting Analytics
The Supabase Analytics server is a self-hostable Logflare instance that manages the ingestion and query pipelines for searching and aggregating structured analytics events.
When self-hosting the Analytics server, the full logging experience matching that of the Supabase Platform is available in the Studio instance, allowing for an integrated and enhanced development experience. However, certain differences may arise due to the platform's infrastructure.
Logflare Technical Docs
All Logflare technical documentation is available at https://docs.logflare.app.
Backends Supported#
The Analytics server supports either Postgres or BigQuery as the backend. The supabase-cli experience uses the Postgres backend out of the box. However, the Supabase Platform uses the BigQuery backend for storing all platform logs.
When using the BigQuery backend, a BigQuery dataset is created in the provided Google Cloud project, and tables are created for each source. Log events are streamed into each table, and all queries generated by Studio or by the Logs Explorer are executed against the BigQuery API. This backend requires internet access to work, and cannot be run fully locally.
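If you want to verify what Logflare created, you can inspect the project with the bq CLI. This is a minimal sketch; the dataset name is a placeholder, since the actual name is generated by Logflare:

```bash
# List datasets in your project – Logflare creates one for its sources
bq ls --project_id="$GOOGLE_PROJECT_ID"

# List the per-source tables inside a dataset
# (replace the dataset name with the one created by Logflare in your project)
bq ls "$GOOGLE_PROJECT_ID:your_logflare_dataset"
```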
When using the Postgres backend, tables are created for each source within the provided schema (for supabase-cli, this would be _analytics). Log events received by Logflare are inserted directly into the respective tables. All BigQuery-dialect SQL queries from Studio are handled by a translation layer within the Analytics server, which translates each query to the Postgres dialect and executes it against the Postgres database.
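As a quick illustration, once the stack is running you can list the per-source tables with psql. This sketch assumes the default _analytics schema and reuses the same connection string passed to Logflare via POSTGRES_BACKEND_URL (described below):

```bash
# List the per-source log event tables managed by the Analytics server
psql "$POSTGRES_BACKEND_URL" -c '\dt _analytics.*'
```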
The Postgres backend is not yet optimized for a high volume of inserts or for heavy query usage, and the translation layer currently handles only a limited subset of the BigQuery dialect. As such, the Logs Explorer may produce errors for more advanced queries when using the Postgres backend.
Getting Started#
The Postgres backend is recommended when familiarizing yourself with, and experimenting with, self-hosting Supabase. For production, we recommend using the BigQuery backend. See production recommendations for more information.
To set up logging in self-hosted Supabase, see the docker-compose example. Two compose services are required: Logflare and Vector. Logflare is the HTTP Analytics server, while Vector is the logging pipeline that routes all compose services' syslog to the Logflare server.
Regardless of the backend chosen, the following environment variables must be set for the supabase/logflare docker image:
- LOGFLARE_SINGLE_TENANT_MODE=true: The feature flag for enabling single-tenant mode for Logflare. Must be set to true.
- LOGFLARE_SUPABASE_MODE=true: The feature flag for seeding Supabase-related data. Must be set to true.
For all other configuration environment variables, please refer to the Logflare self-hosting documentation.
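For example, when using the docker-compose setup, these flags could be set in your .env file as follows (a minimal sketch; only these two variables come from this guide, all other settings are configured separately):

```bash
# Required for self-hosted Supabase Analytics, regardless of backend
LOGFLARE_SINGLE_TENANT_MODE=true
LOGFLARE_SUPABASE_MODE=true
```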
Postgres Backend Setup#
The example docker-compose uses the Postgres backend out of the box.
```bash
# clone the supabase/supabase repo, and run the following
cd docker
docker compose -f docker-compose.yml up
```
Configuration and Requirements#
- supabase/logflare:1.4.0 or above
- Relevant environment variables:
  - POSTGRES_BACKEND_URL: Required. The connection string to the Postgres database.
  - POSTGRES_BACKEND_SCHEMA: Optional. Allows customization of the schema used to scope all backend operations within the database.
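As an illustration, a Postgres backend configuration might look like the following. The connection string is a placeholder, and the schema value mirrors the supabase-cli default mentioned above:

```bash
# Required: where the Analytics server stores and queries log events
POSTGRES_BACKEND_URL=postgresql://postgres:your-password@db:5432/postgres
# Optional: scope all Analytics tables to a dedicated schema
POSTGRES_BACKEND_SCHEMA=_analytics
```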
BigQuery Backend Setup#
The BigQuery backend is a more robust and scalable backend option that is battle-tested and production ready. Use this backend if you intend to have heavy logging usage and require advanced querying features such as the Logs Explorer.
Configuration and Requirements#
After creating the project, the requirements are as follows:
- Google Cloud project with billing enabled
- Project ID
- Project number
- A service account key.
You must enable billing on your Google Cloud project, as a valid billing account is required for streaming inserts.
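If you are unsure of your project's ID or number, both can be looked up with the gcloud CLI (replace my-project-id with your own project ID):

```bash
# Print the project number for a given project ID
gcloud projects describe my-project-id --format="value(projectNumber)"

# List all accessible projects with their IDs and numbers
gcloud projects list
```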
Setting up BigQuery Service Account#
The service account used must have sufficient permissions to manage and insert data into BigQuery in your Google Cloud project. Ensure that the service account has either:
- BigQuery Admin role; or
- The following permissions:
  - bigquery.datasets.create
  - bigquery.datasets.get
  - bigquery.datasets.getIamPolicy
  - bigquery.datasets.update
  - bigquery.jobs.create
  - bigquery.routines.create
  - bigquery.routines.update
  - bigquery.tables.create
  - bigquery.tables.delete
  - bigquery.tables.get
  - bigquery.tables.getData
  - bigquery.tables.update
  - bigquery.tables.updateData
You can create the service account via the web console or the gcloud CLI, as per the Google Cloud documentation. In the web console, you can create the key by navigating to IAM > Service Accounts > Actions (dropdown) > Manage Keys.
We recommend setting the BigQuery Admin role, as it simplifies permissions setup.
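As a minimal sketch using the gcloud CLI, you could create the service account and grant it the BigQuery Admin role as follows. The service account name and project ID are placeholders:

```bash
# Create a service account for the Analytics server
gcloud iam service-accounts create logflare-analytics \
  --project=my-project-id \
  --display-name="Logflare Analytics"

# Grant the BigQuery Admin role, as recommended above
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:logflare-analytics@my-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
```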
Download the Service Account Keys#
After the service account is created, you will need to create a key for the service account. This key will sign the JWTs for API requests that the Analytics server makes with BigQuery. This can be done through the IAM section in Google Cloud console.
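The key can also be created with the gcloud CLI. The sketch below assumes the placeholder service account from the previous step and writes the key to gcloud.json, the filename expected by the docker-compose example below:

```bash
# Create and download a JSON key for the service account
gcloud iam service-accounts keys create gcloud.json \
  --iam-account=logflare-analytics@my-project-id.iam.gserviceaccount.com
```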
Docker Image Configuration#
Using the example self-hosting stack based on docker-compose, you can include the logging-related services by performing the following steps:
- Update the .env.example file with the necessary environment variables:
  - GOOGLE_PROJECT_ID
  - GOOGLE_PROJECT_NUMBER
- Place your Service Account key in your present working directory with the filename gcloud.json.
- In docker-compose.yml, uncomment the block below the comment "# Uncomment to use Big Query backend for analytics".
- In docker-compose.yml, comment out the block below the comment "# Comment variables to use Big Query backend for analytics".
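For example, the additions to your .env file might look like this (both values are placeholders; use your own project's ID and number):

```bash
# Google Cloud project details used by the Analytics server
GOOGLE_PROJECT_ID=my-project-id
GOOGLE_PROJECT_NUMBER=123456789012
```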
Thereafter, you can start the example stack using the following command:
```bash
# assuming you cloned the supabase/supabase repo
cd docker
docker compose -f docker-compose.yml up
```
BigQuery Dataset Storage Location#
Currently, all BigQuery datasets stored and managed by Analytics, whether via CLI or self-hosted, will default to the US region.
Vector Usage#
In the Docker Compose example, Vector is used for the logging pipeline, where log events are forwarded to the Analytics API for ingestion.
Please refer to the Vector configuration file when customizing your own setup.
You must ensure that the payloads match the expected event schema structure. Without the correct structure, the Studio Logs UI features will break.
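To illustrate the general shape of an ingestion request, the sketch below sends a single structured event with curl. The host, port, source name, and API key are assumptions based on a typical docker-compose setup; consult your Vector configuration and the Logflare docs for the exact endpoint and payload fields used by your deployment:

```bash
# Hypothetical example: post one structured log event to the Analytics server
curl -X POST "http://localhost:4000/api/logs?source_name=my.source.name" \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: your-logflare-api-key" \
  -d '{"event_message": "hello from curl", "metadata": {"level": "info"}}'
```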
Differences from Platform#
API logs rely on Kong instead of the Supabase Cloud API Gateway. Logs from Kong are not enriched with platform-only data.
Within the self-hosted setup, all logs are routed to Logflare via Vector. As Kong routes API requests to PostgREST, self-hosted or local deployments will result in Kong request logs instead. This would result in differences in the log event metadata between self-hosted API requests and Supabase Platform requests.
Production Recommendations#
To self-host in a production setting, we recommend performing the following for a better experience.
Ensure that Logflare is behind a firewall and restrict all network access to it besides safe requests.#
Self-hosted Logflare has UI authentication disabled and is not intended to be exposed to the internet. We recommend restricting access to the dashboard, accessible at the /dashboard path.
If dashboard access is required for managing sources, we recommend having an authentication layer, such as a VPN.
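As one possible approach, assuming Logflare listens on its default port 4000 and your trusted clients sit on an internal subnet, a host firewall rule set with ufw might look like this sketch (the port and subnet are assumptions; adapt them to your network):

```bash
# Allow the Analytics API only from an internal subnet, deny everything else
sudo ufw allow from 10.0.0.0/8 to any port 4000 proto tcp
sudo ufw deny 4000/tcp
```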
Use a different Postgres Database to store Logflare data.#
Logflare requires a Postgres database to function. However, if there is an issue with your self-hosted Postgres service, you would not be able to debug it, as the same issue would also bring Logflare down.
The self-hosted example uses a single database server only as a minimal example of running the entire stack; it is not recommended to use the same database server for both production data and observability.
Use BigQuery as the Logflare Backend#
The current Postgres ingestion backend is not optimized for production usage. We recommend the BigQuery backend for production environments and heavier use cases, as it offers better scaling and querying/debugging experiences.
Client libraries#
- JavaScript - Pino Transport
- Elixir
- Elixir - Logger Backend
- Erlang
- Erlang - Lager Backend
- Cloudflare Worker