
Workspace on AWS and S3 Storage

dbeaver-devops edited this page Feb 27, 2026 · 1 revision

Note: This feature is available in Enterprise, AWS, and Team editions only.


CloudBeaver supports storing its workspace in an AWS S3 bucket. To enable this, update your docker-compose.yml and configure the correct environment variables.

For more details on AWS S3 configuration, including setting up buckets, permissions, and best practices, see the official Amazon S3 Documentation.

Configure workspace storage on AWS

This section describes how to configure workspace storage using Amazon S3 when running CloudBeaver on AWS with Docker.

Update docker compose

Make sure your CloudBeaver service includes the following environment variables:

services:
  cloudbeaver:
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_REGION=${AWS_REGION}
      - CLOUDBEAVER_WORKSPACE_LOCATION=${CLOUDBEAVER_WORKSPACE_LOCATION}

Configure environment variables

Define these variables in your .env file:

AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=your-region
CLOUDBEAVER_WORKSPACE_LOCATION=s3:///{bucket_name}/{subfolders}

Important:

  • The CLOUDBEAVER_WORKSPACE_LOCATION path must use triple slashes (s3:///) before the bucket name; this is required for proper S3 path handling.
  • {bucket_name} is the first path segment and is treated as the bucket name.
  • {subfolders} defines the workspace location inside the bucket.
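As a concrete illustration (the bucket and folder names here are placeholders, not defaults), a workspace stored in a bucket named my-bucket under the folder cloudbeaver/workspace would be configured as:

```
CLOUDBEAVER_WORKSPACE_LOCATION=s3:///my-bucket/cloudbeaver/workspace
```

Note the three slashes after s3: — with only two, the first path segment would be interpreted as a service host rather than the bucket name.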

Configure workspace storage using S3-compatible object storage

CloudBeaver supports storing its workspace in S3-compatible object storage. To enable this, update your docker-compose.yml and configure the required environment variables.

For details on configuring your specific S3-compatible storage system, see the documentation provided by your storage vendor.

Update docker compose

Make sure your CloudBeaver service includes the required environment variables.

services:
  cloudbeaver:
    environment:
      - s3fs.access.key=${S3_ACCESS_KEY}
      - s3fs.secret.key=${S3_SECRET_KEY}
      - s3fs.region=${S3_REGION}
      - s3fs.protocol=${S3_PROTOCOL}
      - s3fs.force.path.style=${S3_FORCE_PATH_STYLE}
      - CLOUDBEAVER_WORKSPACE_LOCATION=${CLOUDBEAVER_WORKSPACE_LOCATION}

Configure environment variables

Define these variables in your .env file:

S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=your-region
S3_PROTOCOL=https
S3_FORCE_PATH_STYLE=true
CLOUDBEAVER_WORKSPACE_LOCATION=s3://{service_host}/{bucket_name}/{subfolders}

Important:

  • CLOUDBEAVER_WORKSPACE_LOCATION must use the format s3://{service_host}/{bucket_name}/{subfolders}.
  • {service_host} is the hostname (and optional port) of the S3-compatible service.
  • {bucket_name} is the first path segment after the host and is treated as the bucket name.
  • {subfolders} defines the workspace location inside the bucket.
  • Set S3_FORCE_PATH_STYLE=true if the storage does not support virtual-hosted-style URLs (https://{bucket_name}.{host}/).
  • Set S3_PROTOCOL=http to use an unencrypted connection; the default is https.
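For example, for a MinIO server reachable at minio.example.com:9000 (the host, bucket, and folder names below are placeholders for your own values), the variables might look like:

```
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
S3_PROTOCOL=https
S3_FORCE_PATH_STYLE=true
CLOUDBEAVER_WORKSPACE_LOCATION=s3://minio.example.com:9000/my-bucket/cloudbeaver/workspace
```

Here path-style addressing is forced because many self-hosted S3-compatible services do not serve a separate DNS name per bucket.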

Limitations of object storage workspaces

  1. No embedded databases

    • CloudBeaver cannot use embedded databases (such as H2) with an external S3-based workspace.
    • Storing an embedded database in S3 would cause severe performance issues.
  2. Separate database node required

    • To use an S3 workspace, you must configure an external database such as PostgreSQL, MySQL, or another supported database.
    • Make sure the database is properly defined in docker-compose.yml and in the CLOUDBEAVER_DB_* environment variables.
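A minimal sketch of such a setup, assuming a PostgreSQL container and the CLOUDBEAVER_DB_* variables supported by your CloudBeaver edition (service names, image tag, and credentials below are placeholders; check the Server Database page for the exact variable names your version expects):

```yaml
services:
  cloudbeaver:
    environment:
      # Hypothetical values -- point CloudBeaver at the external database node
      - CLOUDBEAVER_DB_DRIVER=postgres-jdbc
      - CLOUDBEAVER_DB_URL=jdbc:postgresql://postgres:5432/cloudbeaver
      - CLOUDBEAVER_DB_USER=${DB_USER}
      - CLOUDBEAVER_DB_PASSWORD=${DB_PASSWORD}
    depends_on:
      - postgres
  postgres:
    # Separate database node; its data volume stays on local disk, not in S3
    image: postgres:16
    environment:
      - POSTGRES_DB=cloudbeaver
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
```

The key point is that the server database lives on a dedicated node with local storage, while only the workspace files go to S3.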

For more information on CloudBeaver's database, see Server Database.
