Any questions, problems or suggestions with this guide? Ask a question in our community or contribute the change yourself at https://gitlab.com/bramw/baserow/-/tree/develop/docs .
The table below shows all available environment variables supported by Baserow. Some environment variables have different defaults, are not supported, or are optional, depending on how you installed Baserow. See the specific installation guides for how to set these environment variables.
The installation methods referred to in the variable descriptions are:
Variables marked with Internal should only be changed if you know what you are doing.
Name | Description | Defaults |
---|---|---|
BASEROW_PUBLIC_URL | The public URL or IP that will be used to access Baserow. Must always start with http:// or https://, even when accessing via an IP address. If you are accessing Baserow over a non-standard HTTP port (i.e. not 80) then make sure you append :YOUR_PORT to this variable. Setting this will override PUBLIC_BACKEND_URL and PUBLIC_WEB_FRONTEND_URL with BASEROW_PUBLIC_URL's value. Set to empty to disable the default of http://localhost in the compose files and instead set the PUBLIC_X_URL variables. | http://localhost |
BASEROW_CADDY_ADDRESSES | Not supported by the standalone images. A comma separated list of supported Caddy addresses (https://caddyserver.com/docs/caddyfile/concepts#addresses). If an https:// URL is provided, the Caddy reverse proxy will attempt to automatically set up HTTPS with Let's Encrypt for you. If you wish Baserow to still be accessible on localhost and you change this value away from the default of :80, ensure you append ",http://localhost". | :80 |
PUBLIC_BACKEND_URL | Please use BASEROW_PUBLIC_URL unless you are using the standalone baserow/backend or baserow/web-frontend images. The publicly accessible URL of the backend. Should include the port if non-standard. Ensure BASEROW_PUBLIC_URL is set to an empty value to use this variable in the compose setup. | $BASEROW_PUBLIC_URL, http://localhost:8000/ in the standalone images. |
PUBLIC_WEB_FRONTEND_URL | Please use BASEROW_PUBLIC_URL unless you are using the standalone baserow/backend or baserow/web-frontend images. The publicly accessible URL of the web-frontend. Should include the port if non-standard. Ensure BASEROW_PUBLIC_URL is set to an empty value to use this variable in the compose setup. | $BASEROW_PUBLIC_URL, http://localhost:3000/ in the standalone images. |
WEB_FRONTEND_PORT | The HTTP port used to access Baserow. Only used by the docker-compose files. | 80 (prior to 1.9 the default was 3000). |
BASEROW_EXTRA_ALLOWED_HOSTS | An optional comma separated list of hostnames which will be added to the ALLOWED_HOSTS Django setting of the Baserow backend. In most situations you will not need to set this, as the hostnames from BASEROW_PUBLIC_URL or PUBLIC_BACKEND_URL are added to ALLOWED_HOSTS automatically. This is only needed if you need to allow additional hosts to access your Baserow. | |
PRIVATE_BACKEND_URL | Only change this when using the standalone images. The URL used when the web-frontend server directly queries the backend itself while doing server side rendering. As such, not only the browser but also the web-frontend server must be able to make HTTP requests to the backend. The web-frontend Nuxt server might not have access to the `PUBLIC_BACKEND_URL`, or there could be a more direct route (e.g. from container to container instead of via the internet). For example, if the web-frontend and backend are containers on the same Docker network this could be set to http://backend:8000. | |
BASEROW_CADDY_GLOBAL_CONF | Not supported by the standalone images. Will be substituted into the Caddyfile's global config section. Set to "debug" to enable Caddy's debug logging. | |
BASEROW_CADDY_EXTRA_CONF | Not supported by the standalone images. Will be substituted into the end of the Caddyfile's server block. | |
BASEROW_MAX_IMPORT_FILE_SIZE_MB | The maximum file size in MB of a file you can import to create a new table. | 512 |
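As an illustration of how these URL variables fit together, a hedged sketch of an environment file for the all-in-one image behind Caddy's automatic HTTPS (the domain, IP and port below are placeholders, not values from this guide):

```shell
# Serve Baserow at a placeholder domain with automatic HTTPS, while
# keeping it reachable on localhost as well:
BASEROW_PUBLIC_URL=https://baserow.example.com
BASEROW_CADDY_ADDRESSES=https://baserow.example.com,http://localhost

# Alternatively, plain HTTP on a non-standard port: append the port to
# the public URL as described above.
# BASEROW_PUBLIC_URL=http://192.168.1.50:8080
# WEB_FRONTEND_PORT=8080
```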
Name | Description | Defaults |
---|---|---|
SECRET_KEY | The secret key used by Django for cryptographic signing, such as generating secure password reset links and managing sessions. See https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-SECRET_KEY for more details. | Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.secret. |
SECRET_KEY_FILE | Only supported by the baserow/baserow image. If set, Baserow will attempt to read the SECRET_KEY from this file location instead. | |
BASEROW_JWT_SIGNING_KEY | The signing key that is used to sign the content of generated tokens. For HMAC signing, this should be a random string with at least as many bits of data as is required by the signing protocol. See https://django-rest-framework-simplejwt.readthedocs.io/en/latest/settings.html#signing-key for more details. | Recommended to be set by you in the docker-compose and standalone installs (defaults to the SECRET_KEY). Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.jwt_signing_key. |
BASEROW_ACCESS_TOKEN_LIFETIME_MINUTES | The number of minutes access tokens are valid for. This is converted into a timedelta value and added to the current UTC time during token generation to obtain the token's default "exp" claim value. | 10 minutes. |
BASEROW_REFRESH_TOKEN_LIFETIME_HOURS | The number of hours refresh tokens are valid for. This is converted into a timedelta value and added to the current UTC time during token generation to obtain the token's default "exp" claim value. | 168 hours (7 days). |
BASEROW_BACKEND_LOG_LEVEL | The default log level used by the backend, supports ERROR, WARNING, INFO, DEBUG, TRACE | INFO |
BASEROW_BACKEND_DATABASE_LOG_LEVEL | The default log level used for database related logs in the backend. Supports the same values as the normal log level. If you also enable BASEROW_BACKEND_DEBUG and set this to DEBUG you will be able to see all SQL queries in the backend logs. | ERROR |
BASEROW_BACKEND_DEBUG | If set to "on" this will enable the non production safe debug mode for the Baserow Django backend. | off |
BASEROW_AMOUNT_OF_GUNICORN_WORKERS | The number of concurrent worker processes used by the Baserow backend gunicorn server to process incoming requests. | |
BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT | The maximum amount of seconds an Airtable migration import job can run. | 1800 seconds - 30 minutes |
INITIAL_TABLE_DATA_LIMIT | The number of rows that can be imported when creating a table. Defaults to empty, which means unlimited rows. | |
BASEROW_ROW_PAGE_SIZE_LIMIT | The maximum number of rows that can be requested at once. | 200 |
BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB | The max file size in MB allowed to be uploaded by users into a Baserow File Field. | 1048576 (1 TB, i.e. 1024*1024 MB) |
BATCH_ROWS_SIZE_LIMIT | Controls how many rows can be created, deleted or updated at once using the batch endpoints. | 200 |
BASEROW_MAX_SNAPSHOTS_PER_GROUP | Controls how many application snapshots can be created per group. | -1 (unlimited) |
BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS | Controls when snapshots expire, set in number of days. Expired snapshots will be automatically deleted. | 360 |
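If you would rather set SECRET_KEY and BASEROW_JWT_SIGNING_KEY yourself instead of relying on the auto-generated files, one possible approach is to generate random hex strings with openssl. This is a sketch only, not the exact method Baserow uses internally, and it assumes openssl is installed:

```shell
# 64 random bytes, hex encoded, gives a 128 character secret for each key.
SECRET_KEY=$(openssl rand -hex 64)
BASEROW_JWT_SIGNING_KEY=$(openssl rand -hex 64)
echo "${#SECRET_KEY}"  # 128
```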
Name | Description | Defaults |
---|---|---|
DATABASE_HOST | The hostname of the postgres database Baserow will use to store its data in. | Defaults to db in the standalone and compose installs. If not provided in the `baserow/baserow` install then the embedded Postgres will be set up and used. |
DATABASE_USER | The username of the database user Baserow will use to connect to the database at DATABASE_HOST. | baserow |
DATABASE_PORT | The port Baserow will use when trying to connect to the postgres database at DATABASE_HOST. | 5432 |
DATABASE_NAME | The database name Baserow will use to store data in. | baserow |
DATABASE_PASSWORD | The password of DATABASE_USER on the postgres server at DATABASE_HOST. | Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.pgpass. |
DATABASE_PASSWORD_FILE | Only supported by the baserow/baserow image. If set, Baserow will attempt to read the DATABASE_PASSWORD from this file location instead. | |
DATABASE_URL | As an alternative to setting the individual DATABASE_ parameters above, you can provide one standard postgres connection string in the format: postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&…] | |
MIGRATE_ON_STARTUP | If set to “true” when the Baserow backend service starts up it will automatically apply database migrations. Set to any other value to disable. If you disable this then you must remember to manually apply the database migrations when upgrading Baserow to a new version. | true |
BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION | If set to "true", after a migration Baserow will automatically sync all built-in Baserow templates in the background. If you are using a postgres database which is constrained to fewer than 10000 rows then we recommend you disable this, as the Baserow templates will go over that row limit. To disable this, set it to any value other than "true". | true |
BASEROW_SYNC_TEMPLATES_TIME_LIMIT | The number of seconds before the background sync templates job will timeout if not yet completed. | 1800 |
SYNC_TEMPLATES_ON_STARTUP | Deprecated, please use BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION instead. If provided it has the same effect as BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION, for backwards compatibility reasons. If BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION is set it will override this value. | true |
DONT_UPDATE_FORMULAS_AFTER_MIGRATION | Baserow's formulas have an internal version number. When upgrading Baserow, if the formula language has also changed, then after the database migration has run Baserow will automatically recalculate all formulas that have a different version. Set this to any non empty value to disable this automatic update if you would prefer to run the update_formulas management command manually yourself. Formulas might break after an upgrade until you do so, so it is recommended to leave this empty. | |
POSTGRES_STARTUP_CHECK_ATTEMPTS | When Baserow's backend service starts up it first checks that the postgres database is available. It checks 5 times by default, after which, if it has still not connected, it will crash. | 5 |
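The individual DATABASE_ variables combine into the single DATABASE_URL form as follows. This is a sketch with placeholder credentials; note that special characters in the password would need percent-encoding in the URL form:

```shell
# Placeholder values matching the individual DATABASE_ variables above:
DATABASE_USER=baserow
DATABASE_PASSWORD=s3cret
DATABASE_HOST=db
DATABASE_PORT=5432
DATABASE_NAME=baserow

# The equivalent single connection string:
DATABASE_URL="postgresql://${DATABASE_USER}:${DATABASE_PASSWORD}@${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME}"
echo "$DATABASE_URL"  # postgresql://baserow:s3cret@db:5432/baserow
```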
Name | Description | Defaults |
---|---|---|
REDIS_HOST | The hostname of the redis database Baserow will use for caching and real time collaboration. | Defaults to redis in the standalone and compose installs. If not provided in the `baserow/baserow` install then the embedded Redis will be set up and used. |
REDIS_PORT | The port Baserow will use when trying to connect to the redis database at REDIS_HOST. | 6379 |
REDIS_USER | The username of the redis user Baserow will use to connect to the redis server at REDIS_HOST. | |
REDIS_PASSWORD | The password of REDIS_USER on the redis server at REDIS_HOST. | Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.redispass. |
REDIS_PASSWORD_FILE | Only supported by the baserow/baserow image. If set, Baserow will attempt to read the REDIS_PASSWORD from this file location instead. | |
REDIS_PROTOCOL | The redis protocol used when connecting to the redis server at REDIS_HOST. Can be either 'redis' or 'rediss'. | redis |
REDIS_URL | As an alternative to setting the individual REDIS_ parameters above, you can provide one standard redis connection string in the format: redis://:[password]@[redishost]:[redisport] | |
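As with the database settings, the individual REDIS_ variables map onto a single REDIS_URL. A sketch with placeholder values:

```shell
# Placeholder values matching the individual REDIS_ variables above:
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=mysecret
REDIS_PROTOCOL=redis

# The equivalent single connection string:
REDIS_URL="${REDIS_PROTOCOL}://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}"
echo "$REDIS_URL"  # redis://:mysecret@redis:6379
```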
Name | Description | Defaults |
---|---|---|
BASEROW_CELERY_BEAT_STARTUP_DELAY | The number of seconds the celery beat worker sleeps before starting up. | 15 |
BASEROW_CELERY_BEAT_DEBUG_LEVEL | The logging level for the celery beat service. | INFO |
BASEROW_AMOUNT_OF_WORKERS | The number of concurrent celery worker processes used to process asynchronous tasks. If not set, defaults to the number of available cores. Each celery process uses memory; to reduce Baserow's memory footprint consider setting and reducing this variable. | 1 for the All-in-one, Heroku and Cloudron images. Defaults to empty, and hence the number of available cores, in the standalone images. |
BASEROW_RUN_MINIMAL | When BASEROW_AMOUNT_OF_WORKERS is 1 and this is set to a non empty value, Baserow will not run the export-worker but instead run both the celery export and normal tasks on the normal celery worker. Set this to lower the memory usage of Baserow at the expense of performance. | |
Name | Description | Defaults |
---|---|---|
BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS | If set to any non empty value allows webhooks to access all addresses. Enabling this flag is a security risk as it will allow users to send webhook requests to internal addresses on your network. Instead consider using the three variables below first to allow access to only some internal network hostnames or IPs. | |
BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST | Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated regexes used to validate user configured webhook URLs; the user will be shown an error if any regex matches their webhook URL, and the webhook will be prevented from running. Applied before, and so supersedes, BASEROW_WEBHOOKS_IP_WHITELIST and BASEROW_WEBHOOKS_IP_BLACKLIST. Do not include any scheme like http:// or https://, as the regexes are only run against the hostname/IP of the user configured URL. For example, set this to ^(?!(www\.)?allowedhost\.com).* to block all hostnames and IPs other than allowedhost.com or www.allowedhost.com. | |
BASEROW_WEBHOOKS_IP_WHITELIST | Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated IP addresses or ranges that webhooks will be allowed to use after the webhook URL has been resolved to an IP using DNS. Only checked if the URL passes the BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST. Takes precedence over BASEROW_WEBHOOKS_IP_BLACKLIST, meaning that a whitelisted IP will always be let through regardless of the ranges in BASEROW_WEBHOOKS_IP_BLACKLIST. So use BASEROW_WEBHOOKS_IP_WHITELIST to punch holes in the ranges of BASEROW_WEBHOOKS_IP_BLACKLIST, and not the other way around. Accepts a string in the format: "127.0.0.1/32,192.168.1.1/32" | |
BASEROW_WEBHOOKS_IP_BLACKLIST | Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated IP addresses or ranges that webhooks will be denied from using after the URL has been resolved to an IP using DNS. Only checked if the URL passes the BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST. BASEROW_WEBHOOKS_IP_WHITELIST supersedes any ranges specified in this variable. Accepts a string in the format: "127.0.0.1/32,192.168.1.1/32" | |
BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS | Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. How long to wait before timing out and returning an error when checking if a URL can be accessed for a webhook. | 10 seconds |
BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES | The number of consecutive trigger failures that can occur before a webhook is disabled. | 8 |
BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL | The max number of retries per webhook call. | 8 |
BASEROW_WEBHOOKS_MAX_PER_TABLE | The max number of webhooks per Baserow table. | 20 |
BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES | The maximum number of call log entries stored per webhook. | 10 |
BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS | How long to wait on making the webhook request before timing out. | 5 |
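To make the interaction between these webhook variables concrete, a hypothetical locked-down configuration might look like the following (all addresses and the allowedhost.com domain are placeholders):

```shell
# Whitelist a single internal host so webhooks may reach it; whitelisted
# IPs take precedence over any blacklisted ranges:
BASEROW_WEBHOOKS_IP_WHITELIST=192.168.10.25/32
# Deny the rest of the range:
BASEROW_WEBHOOKS_IP_BLACKLIST=192.168.0.0/16
# Block every hostname except allowedhost.com / www.allowedhost.com:
BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST='^(?!(www\.)?allowedhost\.com).*'
```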
Name | Description | Defaults |
---|---|---|
BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER | Set to any non-empty value to ensure Baserow generates https:// next links provided by paginated API endpoints. Baserow will still work correctly if this is not enabled; it is purely for giving the correct https URLs to clients of the API. If you have set up Baserow to use Caddy's auto HTTPS, or you have put Baserow behind a reverse proxy which handles HTTPS, strips the X-Forwarded-Proto header from all incoming requests, and sets the X-Forwarded-Proto header before sending requests to Baserow, then you can safely set BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER=yes to ensure Baserow generates https pagination links correctly. | |
ADDITIONAL_APPS | A comma separated list of additional django applications to add to the INSTALLED_APPS django setting | |
HOURS_UNTIL_TRASH_PERMANENTLY_DELETED | Items from the trash will be permanently deleted after this number of hours. | |
DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS | When sharing views publicly a websocket connection is opened to provide realtime updates to viewers of the public link. To disable this set any non empty value. When disabled publicly shared links will need to be refreshed to see any updates to the view. | |
BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR | When updating or creating various resources in Baserow, if another concurrent operation is ongoing (like a snapshot, duplication or import) which would be affected by your modification, a 409 HTTP error will be returned. If you would instead prefer Baserow not to return a 409 but to block waiting until the operation finishes and then perform the requested operation, set this flag to any non-empty value. | |
BASEROW_FULL_HEALTHCHECKS | When set to any non empty value will additionally check in the backend's healthcheck at /_health/ if storage can be written to (causes lots of small filesystem writes) and a more general check if enough disk and memory is available. | |
BASEROW_JOB_CLEANUP_INTERVAL_MINUTES | How often the job cleanup task will run. | 5 |
BASEROW_JOB_EXPIRATION_TIME_LIMIT | How long a Baserow job will be kept before being cleaned up. | 30 * 24 * 60 minutes (30 days) |
BASEROW_JOB_SOFT_TIME_LIMIT | The number of seconds a Baserow job can run before being terminated. | 1800 |
BASEROW_MAX_FILE_IMPORT_ERROR_COUNT | The max number of per-row errors that can occur in a file import before an overall failure is declared. | 30 |
MINUTES_UNTIL_ACTION_CLEANED_UP | How long before actions are cleaned up. Actions are used to let you undo/redo, so this is effectively the max length of time for which you can undo/redo an action. | 120 |
BASEROW_DISABLE_MODEL_CACHE | When set to any non empty value the model cache used to speed up Baserow will be disabled. Useful to enable when debugging Baserow errors if they are possibly caused by the model cache itself. | |
BASEROW_IMPORT_TOLERATED_TYPE_ERROR_THRESHOLD | The percentage of rows when importing that are allowed to not match the detected column type and be blanked out instead of imported when creating a table. | 0 |
DJANGO_SETTINGS_MODULE | INTERNAL The settings python module to load when starting up the backend Django server. You shouldn't need to set this yourself unless you are customizing the settings manually. | |
BASEROW_BACKEND_BIND_ADDRESS | INTERNAL The address that Baserow’s backend service will bind to. | |
BASEROW_BACKEND_PORT | INTERNAL Controls which port the Baserow backend service binds to. | |
BASEROW_WEBFRONTEND_BIND_ADDRESS | INTERNAL The address that Baserow’s web-frontend service will bind to. | |
BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT | The maximum number of rows you can import in a synchronous way. | 5000 |
BASEROW_MAX_ROW_REPORT_ERROR_COUNT | The maximum row error count tolerated before a file import fails. Below this max error count the import will continue and the non-failing rows will be imported; once it is exceeded, no rows are imported at all. | 30 |
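As a minimal sketch of the BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER setup described above, assuming a trusted reverse proxy that already meets the listed X-Forwarded-Proto requirements (the domain is a placeholder):

```shell
# Safe only behind a proxy that terminates HTTPS, strips any incoming
# X-Forwarded-Proto header and sets its own before proxying to Baserow:
BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER=yes
BASEROW_PUBLIC_URL=https://baserow.example.com
```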
Name | Description | Defaults |
---|---|---|
MEDIA_URL | INTERNAL The URL at which user uploaded media files will be made available. | $PUBLIC_BACKEND_URL/media/ |
MEDIA_ROOT | INTERNAL The folder in which the backend will store user uploaded files. | /baserow/media |
AWS_ACCESS_KEY_ID | The access key for your AWS account. When set to anything other than empty will switch Baserow to use a S3 compatible bucket for storing user file uploads. | |
AWS_SECRET_ACCESS_KEY | The access secret key for your AWS account. | |
AWS_STORAGE_BUCKET_NAME | Your Amazon Web Services storage bucket name. | |
AWS_S3_REGION_NAME | Optional. Name of the AWS S3 region to use (e.g. eu-west-1). | |
AWS_S3_ENDPOINT_URL | Optional. Custom S3 URL to use when connecting to S3, including scheme. | |
AWS_S3_CUSTOM_DOMAIN | Optional. Your custom domain where the files can be downloaded from. | |
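For example, a hedged sketch of switching user file uploads to an S3-compatible store (all values are placeholders; per the table above, setting AWS_ACCESS_KEY_ID to a non-empty value is what activates S3 storage):

```shell
# Placeholder credentials for an S3-compatible provider such as MinIO:
AWS_ACCESS_KEY_ID=example-access-key
AWS_SECRET_ACCESS_KEY=example-secret-key
AWS_STORAGE_BUCKET_NAME=baserow-uploads
# Optional, only needed for non-AWS providers or specific regions:
AWS_S3_ENDPOINT_URL=https://minio.internal.example.com:9000
AWS_S3_REGION_NAME=eu-west-1
```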
Name | Description | Defaults |
---|---|---|
FROM_EMAIL | The email address Baserow will send emails from. | |
EMAIL_SMTP | If set to any non empty value then Baserow will start sending emails using the configuration options below. If not set then Baserow will not send emails and just log them to the Celery worker logs instead. | |
EMAIL_SMTP_USE_TLS | If set to any non empty value then Baserow will attempt to send emails using TLS. | |
EMAIL_SMTP_HOST | The host of the external SMTP server that Baserow should use to send emails. | |
EMAIL_SMTP_PORT | The port used to connect to $EMAIL_SMTP_HOST on. | |
EMAIL_SMTP_USER | The username to authenticate with $EMAIL_SMTP_HOST when sending emails. | |
EMAIL_SMTP_PASSWORD | The password to authenticate with $EMAIL_SMTP_HOST when sending emails. |
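Putting the email variables together, a hypothetical SMTP configuration could look like this (every value below is a placeholder):

```shell
FROM_EMAIL=no-reply@example.com
# Any non empty value enables SMTP sending:
EMAIL_SMTP=True
EMAIL_SMTP_USE_TLS=True
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USER=smtp-user
EMAIL_SMTP_PASSWORD=smtp-password
```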
Name | Description | Defaults |
---|---|---|
DOWNLOAD_FILE_VIA_XHR | Set to `1` to force download links to download files via XHR query to bypass `Content-Disposition: inline` that can't be overridden in another way. If your files are stored under another origin, you also must add CORS headers to your server. | 0 |
BASEROW_DISABLE_PUBLIC_URL_CHECK | When opening the Baserow login page a check is run to ensure the PUBLIC_BACKEND_URL/BASEROW_PUBLIC_URL variables are set correctly and your browser can correctly connect to the backend. If misconfigured an error is shown. If you wish to disable this check and warning set this to any non empty value. | |
ADDITIONAL_MODULES | INTERNAL A list of file paths to Nuxt module.js files to load as additional Nuxt modules into Baserow on startup. | |
BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW | Set to `true` or `1` to disable Google docs file preview. | |
Name | Description | Defaults |
---|---|---|
NO_COLOR | Set this to any non empty value to disable colored logging in the all-in-one baserow image. | |
DATA_DIR | For the all-in-one image only, this controls where Baserow will store all data that needs to be persisted. Inside this folder Baserow will store its postgres database, its redis database, any autogenerated secrets (such as the Django SECRET_KEY, the postgresql database user password and the redis user password), and Caddy's state plus any certificates and keys it uses during auto HTTPS. | |
DISABLE_VOLUME_CHECK | For the all-in-one image only, setting this to any non empty value will disable the check run on startup that verifies the "/baserow/data/" directory is mounted to a volume. | |
BASEROW_PLUGIN_GIT_REPOS | A comma separated list of plugin git repos to install on startup. | |
BASEROW_PLUGIN_URLS | A comma separated list of plugin URLs to install on startup. | |
BASEROW_DISABLE_PLUGIN_INSTALL_ON_STARTUP | When set to any non-empty value no automatic startup check and/or install of plugins will be run. Disables the above two variables. | |
BASEROW_PLUGIN_DIR | INTERNAL Sets the folder where the Baserow plugin scripts look for plugins. | /baserow/data/plugins in the all-in-one image, otherwise /baserow/plugins |
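For instance, a hedged sketch of installing plugins on startup (the repository URLs are placeholders):

```shell
# Install two plugins from git on startup:
BASEROW_PLUGIN_GIT_REPOS=https://gitlab.com/example/plugin-one.git,https://gitlab.com/example/plugin-two.git
# Or disable all automatic plugin installation instead:
# BASEROW_DISABLE_PLUGIN_INSTALL_ON_STARTUP=yes
```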