Conversation

@datnguyennnx
Contributor

@datnguyennnx datnguyennnx commented Sep 19, 2025

Feature: BasePath Support for Subpath Deployments

Problem

Self-hosted users cannot deploy HyperDX under custom subpaths with nginx/traefik reverse proxies (e.g., domain.com/hyperdx).

Current User Pain: Users who want subpath deployment have to:

  • Fork the repository and maintain custom builds
  • Manually modify source code for every update
  • Miss out on official Docker image updates
  • Manage their own build pipeline and maintenance

Solution: Docker Environment Variable Approach

Added three environment variables to enable subpath deployment without requiring source code modifications:

  • HYPERDX_BASE_PATH - Frontend Next.js basePath (e.g., /hyperdx)
  • HYPERDX_API_BASE_PATH - Backend API basePath (e.g., /hyperdx/api)
  • HYPERDX_OTEL_BASE_PATH - OTEL collector basePath (e.g., /hyperdx/otel)
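For illustration, the three variables could be read through one small centralized helper. This is a sketch only: the function names and normalization rules here are assumptions, not the actual HyperDX implementation (the `basePath.ts` utility discussed later in this thread plays this role):

```typescript
// Hypothetical helper module (sketch, not the actual HyperDX code):
// read each base path from the environment and normalize it so that
// '' means "served at root" and '/hyperdx/' becomes '/hyperdx'.
export function normalizeBasePath(raw: string | undefined): string {
  if (!raw || raw === '/') return ''; // unset or root -> no prefix
  const withLeading = raw.startsWith('/') ? raw : `/${raw}`;
  // Strip a trailing slash so paths can be joined as `${base}${path}`
  return withLeading.endsWith('/') ? withLeading.slice(0, -1) : withLeading;
}

export const getAppBasePath = (): string =>
  normalizeBasePath(process.env.HYPERDX_BASE_PATH);
export const getApiBasePath = (): string =>
  normalizeBasePath(process.env.HYPERDX_API_BASE_PATH);
export const getOtelBasePath = (): string =>
  normalizeBasePath(process.env.HYPERDX_OTEL_BASE_PATH);
```

With a default of `''`, a root deployment is unchanged, which is what makes this kind of approach backward compatible.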

Key Innovation: Use Official Docker Images + Environment Variables

Before (Complex):

# Users had to maintain custom builds
git clone hyperdx
# Modify source files manually
# Build custom Docker image  
# Miss official updates
# Maintain fork forever

After (Simple):

# Users just pull official images and set env vars
docker pull hyperdx/hyperdx:latest
# Set environment variables in docker-compose.yml or .env
# Deploy with basePath - that's it!
# Get updates automatically with official images

Benefits

For Users:

  • Easy Updates: Pull latest HyperDX images without losing basePath configuration
  • No Source Maintenance: No need to fork repository or maintain custom builds
  • Production Ready: Uses official Docker images with runtime configuration
  • Zero Setup: Just set environment variables and deploy

For HyperDX Project:

  • Wider Adoption: Enables enterprise deployment scenarios without support burden
  • Lower Maintenance: Users don't create forks or custom builds
  • Standard Pattern: Follows Docker environment variable best practices
  • Backward Compatible: Existing deployments work exactly the same

Usage Example

Docker Compose Deployment

# docker-compose.yml
services:
  app:
    image: hyperdx/hyperdx:latest
    environment:
      HYPERDX_BASE_PATH: /hyperdx
      HYPERDX_API_BASE_PATH: /hyperdx/api
      HYPERDX_OTEL_BASE_PATH: /hyperdx/otel

Environment Variables

# Set in .env file
HYPERDX_BASE_PATH=/hyperdx
HYPERDX_API_BASE_PATH=/hyperdx/api
HYPERDX_OTEL_BASE_PATH=/hyperdx/otel

nginx Reverse Proxy

# nginx.conf
location /hyperdx/ {
    proxy_pass http://hyperdx-container:8080/hyperdx/;
    proxy_set_header Host $host;
}

Deployment Patterns Enabled

  • https://domain.com/hyperdx → HyperDX frontend
  • https://domain.com/hyperdx/api → HyperDX API
  • https://domain.com/hyperdx/otel → OTEL collector

Update Workflow Improvement

Traditional Approach (Maintenance Burden):

# User maintains fork
git pull upstream/main
# Resolve merge conflicts in modified files
# Rebuild custom Docker image
# Test everything again

Our Approach (Effortless Updates):

# User gets latest official image
docker pull hyperdx/hyperdx:latest
docker-compose up -d
# Same environment variables, new features automatically

Testing

  • Verified working at /hyperdx subpath
  • All API endpoints respond correctly (/hyperdx/api/config, /hyperdx/api/health)
  • Docker production build successful
  • Frontend assets load with correct basePath
  • Backward compatibility maintained

Files Modified

  • packages/app/next.config.js
  • packages/app/src/api.ts
  • packages/app/pages/api/[...all].ts
  • packages/api/src/api-app.ts
  • packages/api/src/opamp/app.ts
  • docker-compose.yml
  • docker/hyperdx/Dockerfile
  • Makefile

Impact

This addresses a common deployment need for self-hosted users who want to run HyperDX under reverse proxies with custom paths without the maintenance burden of custom builds.

Result: HyperDX goes from "clone sources and modify code" to "pull the official image and set environment variables", enabling enterprise deployment flexibility while preserving the official update path.

@changeset-bot

changeset-bot bot commented Sep 19, 2025

🦋 Changeset detected

Latest commit: 70fa0a7

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages:

  • @hyperdx/app: Minor
  • @hyperdx/api: Minor
  • @hyperdx/common-utils: Minor


@vercel

vercel bot commented Sep 19, 2025

@datnguyennnx is attempting to deploy a commit to the HyperDX Team on Vercel.

A member of the Team first needs to authorize it.

@wrn14897
Member

I don’t think this change is necessary. The /api route is already proxied on the Next.js server side. For production deployments, we recommend using the HyperDX Helm charts, which leverage a fullstack build.

Additionally, the OpAMP endpoint should not be exposed (it runs on the same API server but on a different port), so I don’t see a reason to introduce a reverse proxy in this case. If you want to prefix a subpath, I’d suggest configuring that at your reverse proxy layer.

@datnguyennnx
Contributor Author

datnguyennnx commented Sep 21, 2025

In my experience, setting up subpaths using only Traefik/nginx is tough and often breaks URLs.
An example from our Traefik config:

middlewares:
  hyperdx-strip-prefix:
    stripPrefix:
      prefixes:
        - "/hyperdx"
routers:
  hyperdx-app-router:
    rule: 'PathPrefix(`/hyperdx`) || PathPrefix(`/api`)  || PathPrefix(`/_next`) || PathPrefix(`/__ENV.js`)'
    middlewares:
      - 'hyperdx-strip-prefix'
services:
   hyperdx-app-service:
     loadBalancer:
       servers:
       - url: 'http://hyperdx-app:8050'

Even with this setup, many frontend links break: they ignore /hyperdx and redirect to root paths, causing broken navigation.

This shows how hard it is to handle subpaths entirely at the proxy layer. Having basePath config inside the app itself is much simpler and more reliable.
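To make the broken-link failure mode concrete, here is an illustrative sketch (not HyperDX code): a client that emits root-relative URLs escapes the subpath once the proxy strips the prefix, while a basePath-aware client stays under it.

```typescript
// Illustrative only: two ways a frontend might build a link to its own API.

// Root-relative: the browser resolves this against the domain root, so with
// the app mounted at domain.com/hyperdx the request goes to /api/health and
// never matches the /hyperdx prefix the proxy is routing on.
function rootRelativeLink(path: string): string {
  return path;
}

// basePath-aware: the configured prefix is joined in, so the request stays
// under the subpath and the proxy can route it correctly.
function basePathAwareLink(basePath: string, path: string): string {
  return `${basePath}${path}`;
}

console.log(rootRelativeLink('/api/health'));              // /api/health
console.log(basePathAwareLink('/hyperdx', '/api/health')); // /hyperdx/api/health
```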

The OpAMP endpoint is fine as is. If you accept this approach, I’ll remove the OpAMP base path so that only the API and app endpoints are exposed.

@wrn14897
Member

Thanks for the effort here — I see the motivation to make updating the URL subpath easier, and that’s definitely valuable. That said, I have a few concerns with the current approach:

  • Backward compatibility – This change risks breaking existing setups.
  • Environment variable in next.config.js or _document.tsx – It doesn’t work dynamically, and I suspect it still requires an image rebuild.
  • Code duplication – The repeated logic makes the code harder to read and maintain.
  • Subpath switching – It’s still unclear how well this works in local mode.

For context: this proxy issue was problematic in HyperDX v1, but with the flexibility of Helm deployments and the internal API proxy (removing the need to deploy both services), things should already be simpler.

I agree that configuring subpaths is a bit annoying. However, I think we should take a step back before spreading environment variables across the codebase in ways that risk breaking existing behavior. There might be a cleaner approach.

Suggestion:
If you’re open to it, I’d propose creating a new proxy/ directory with Traefik or Nginx configs. I’d be happy to help review and make sure that works smoothly.

@datnguyennnx
Contributor Author

datnguyennnx commented Sep 27, 2025

  • Proxy-Centric: New proxy/ dir with Nginx/Traefik configs, no OpAMP exposure.
  • Backward Compat: Defaults to '/' (root unchanged); E2E verified no regressions.
  • Dynamic/No Rebuild: Runtime process.env via centralized utils (basePath.ts)
  • No Duplication: All env reads in utils; refactors in 5 files use get*BasePath()

All problems solved, please re-review and test. Ready to merge!

@datnguyennnx datnguyennnx force-pushed the feat/add-basepath-support branch from 3e5b0d1 to 007bf4f on September 27, 2025 at 09:19
Makefile (outdated), comment on lines 113 to 115:
--build-arg HYPERDX_BASE_PATH="${HYPERDX_BASE_PATH}" \
--build-arg HYPERDX_API_BASE_PATH="${HYPERDX_API_BASE_PATH}" \
--build-arg HYPERDX_OTEL_BASE_PATH="${HYPERDX_OTEL_BASE_PATH}" \
Member

@wrn14897 wrn14897 Sep 29, 2025


does this mean that devs need to rebuild the image whenever the env var changes?

Contributor Author


@wrn14897 No rebuild needed! Those build-args were a holdover from an earlier iteration - I've removed them entirely from the Makefile.

The base paths are now read at runtime via process.env in centralized utils (basePath.ts), so you can switch subpaths dynamically with docker run -e HYPERDX_BASE_PATH=/hyperdx or a compose restart, with no image rebuilds required. Verified: build once (make build-app), switch from root to a subpath via -e, and curl/UI/API requests all prefix correctly (e.g., /hyperdx/api/health returns 200 OK).

This aligns with your dynamic config concern. Please re-review—tests/E2E confirm no regressions!

@wrn14897
Member

wrn14897 commented Sep 29, 2025

  • Proxy-Centric: New proxy/ dir with Nginx/Traefik configs, no OpAMP exposure.
  • Backward Compat: Defaults to '/' (root unchanged); E2E verified no regressions.
  • Dynamic/No Rebuild: Runtime process.env via centralized utils (basePath.ts)
  • No Duplication: All env reads in utils; refactors in 5 files use get*BasePath()

All problems solved, please re-review and test. Ready to merge!

Thanks for following up, @datnguyennnx. I may not have explained it clearly earlier. The /api route should be proxied by the Next.js server, and in most cases people won’t really care what the API base path is (since the API and app are effectively bundled together in a single build).

So for example, if the subpath is foo, the API route should resolve to foo/api. My preference is to handle this primarily at the proxy layer, minimizing changes to the application code.
I’ve created a branch with a proof of concept here: link. In theory, the only required changes are injecting HYPERDX_BASE_PATH into next.config.js and updating FRONTEND_URL. Once the Next.js server restarts and picks up the new config, the subpath should work as expected.
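For illustration, the injection described above might look roughly like this (a sketch under the assumption that HYPERDX_BASE_PATH is present when the Next.js server starts; the actual proof-of-concept branch may differ):

```typescript
// Sketch of a next.config.ts that picks up HYPERDX_BASE_PATH at startup.
// Next.js expects basePath to start with '/' and not end with one;
// '' (empty string) serves the app at the root, as before.
const raw = process.env.HYPERDX_BASE_PATH ?? '';
const basePath = raw === '/' ? '' : raw;

const nextConfig = {
  basePath,
  // Optionally expose the value to client-side code as well
  env: { NEXT_PUBLIC_HYPERDX_BASE_PATH: basePath },
};

export default nextConfig;
```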

@wrn14897
Member

wrn14897 commented Oct 8, 2025

Replaced with PR #1236

@wrn14897 wrn14897 closed this Oct 8, 2025
@simadimonyan

I get a 404. What did I do wrong?

compose.yaml

  hyperdx:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    container_name: hyperdx-ui
    depends_on:
      - clickhouse
      - otel-collector
    environment:
      - HYPERDX_API_URL=${HYPERDX_API_URL}
      - HYPERDX_API_PORT=${HYPERDX_API_PORT}
      - HYPERDX_APP_URL=${HYPERDX_APP_URL}
      - HYPERDX_APP_PORT=${HYPERDX_APP_PORT}
      - FRONTEND_URL=${FRONTEND_URL}
      - HYPERDX_BASE_PATH=${HYPERDX_BASE_PATH}
      - NEXT_PUBLIC_HYPERDX_BASE_PATH=${NEXT_PUBLIC_HYPERDX_BASE_PATH}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/hyperdx/data:/data/db
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - schedule_net
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1

.env

  HYPERDX_API_URL=https://schedule-imsit.ru
  HYPERDX_API_PORT=8000
  HYPERDX_APP_URL=https://schedule-imsit.ru
  HYPERDX_APP_PORT=8081
  FRONTEND_URL=https://schedule-imsit.ru/hyperdx
  HYPERDX_BASE_PATH=/hyperdx
  NEXT_PUBLIC_HYPERDX_BASE_PATH=/hyperdx

nginx.conf

  location /hyperdx/ {
              limit_req zone=req_limit_per_ip burst=10 nodelay;
  
              proxy_pass http://hyperdx-ui:8081/;
  
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
              proxy_set_header X-Forwarded-Host $host;
              proxy_set_header Authorization $http_authorization;
  
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
  
              proxy_buffering off;
   }

@datnguyennnx
Contributor Author

@simadimonyan

Full nginx.conf.template (adapted from nginx template, with your rate limiting and other headers merged in):

upstream app {
    server hyperdx-ui:8081;
}

# Rate limiting from your config. Note: limit_req_zone must be defined at
# the http level (which is where conf.d files are included), so it sits
# outside the server {} block.
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=1r/s;

server {
    listen 80;  # Or 443 ssl if handling HTTPS here

    set $base_path "${HYPERDX_BASE_PATH}";
    if ($base_path = "/") {
        set $base_path "";
    }
    # Common proxy headers (merged from your config and template)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Authorization $http_authorization;
    proxy_buffering off;

    # Redirect root to base path, if a base path is set
    location = / {
        limit_req zone=req_limit_per_ip burst=10 nodelay;
        if ($base_path != "") {
            return 301 $base_path;
        }
        # If no base path, just proxy to the app
        proxy_pass http://app;
    }

    # This handles assets and api calls made to the root and rewrites them to include the base path
    location ~ ^(/api/|/_next/|/__ENV\.js$|/Icon32\.png$) {
        limit_req zone=req_limit_per_ip burst=10 nodelay;
        # Note: $request_uri includes the original full path including query string
        proxy_pass http://app$base_path$request_uri;
    }

    # Proxy requests that are already prefixed with the base path to the app
    location ${HYPERDX_BASE_PATH} {
        limit_req zone=req_limit_per_ip burst=10 nodelay;
        # The full request URI (e.g., /hyperdx/settings) is passed to the upstream without stripping
        proxy_pass http://app;
    }
}

Mount the template in Docker Compose: In your compose.yaml, update the nginx service to mount nginx.conf.template (adjust paths as needed):

services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf.template:/etc/nginx/conf.d/default.conf.template:ro
    environment:
      - HYPERDX_BASE_PATH=/hyperdx  # Pass the env var here
    command: /bin/sh -c "envsubst '$$HYPERDX_BASE_PATH' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"  # restrict envsubst to this one var so nginx variables like $host survive
    # ... other configs like ports, depends_on, etc.

Restart the stack and rebuild the images: in its current form, this value must be set at build time, not at runtime.

@simadimonyan

simadimonyan commented Dec 25, 2025

@datnguyennnx

How can I adapt this template to my extended configuration, where one $host serves multiple services? I would like to add a landing page on the / path and move the hyperdx service from / to the /hyperdx/ subpath.

nginx.conf

events {
    worker_connections 1024;
}

stream {

    limit_conn_zone $binary_remote_addr zone=stream_conn_limit_per_ip:10m;

    server {
        listen 6379; 
        proxy_pass redis:6379;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;

        limit_conn stream_conn_limit_per_ip 2;
    }
    
}

http {

    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=10r/s;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name schedule-imsit.ru; 
        location /.well-known/acme-challenge/ { root /var/www/certbot; }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name schedule-imsit.ru; # Replace with your domain

        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        location /schedule/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            proxy_cache_valid 200 10m;

            proxy_pass http://app:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
        }

        location /console/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            rewrite ^/console/(.*)$ /$1 break;
            proxy_pass http://minio:9001;

            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
    
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
        }

        location /pgadmin/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;
            
            rewrite ^/pgadmin/(.*)$ /$1 break;
            proxy_pass http://pgadmin:80;

            proxy_set_header Host $host;
            proxy_set_header X-Script-Name /pgadmin;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /grafana/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            proxy_pass http://grafana:3000;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Server $host;
        }

        location /hyperdx/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            rewrite ^/hyperdx/(.*)$ /$1 break;
            proxy_pass http://hyperdx-ui:8081;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header Authorization $http_authorization;

            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_buffering off;
        }

    }

}

compose.yaml

services:
  nginx:
    image: nginx:stable
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "6379:6379"
    volumes:
      - ./configs/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./volumes/nginx/ssl:/etc/nginx/ssl
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=Europe/Moscow
    depends_on:
      - app
      - minio
      - pgadmin
      - hyperdx
      - grafana
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  app:
    container_name: app
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes: 
      - ./:/service/schedule-parse-service
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - app_db
      - otel-collector
    environment:
      - TZ=Europe/Moscow
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  minio:
    restart: always
    image: minio/minio:latest
    container_name: minio
    hostname: minio.local
    environment:
      - MINIO_NOTIFY_WEBHOOK_AUTH_TOKEN_1=${MINIO_WEBHOOK_AUTH_TOKEN}
      - MINIO_NOTIFY_WEBHOOK_QUEUE_DIR=./volumes/minio/events
    
      - MINIO_ROOT_USER=${MINIO_ROOT_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
      - MINIO_SKIP_CLIENT=yes

      - MINIO_SCHEME=http

      - MINIO_CONSOLE_PORT_NUMBER=9001
      - MINIO_LOG_LEVEL=debug

      - MINIO_SERVER_URL=${MINIO_SERVER_URL}
      - MINIO_BROWSER_REDIRECT_URL=${MINIO_BROWSER_REDIRECT_URL}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/minio/data:/data
      - ./volumes/minio/certs:/certs
      - ./minio-manual-init-webhook.sh:/minio-manual-init-webhook.sh
      - ./minio-auto-init-webhook.sh:/minio-auto-init-webhook.sh
      - ./minio-generate-keys.sh:/minio-generate-keys.sh
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: ["server", "/data", "--console-address", ":9001"]
    healthcheck:
      test: [ "CMD", "curl", "-k", "http://minio:9000/minio/health/live" ]
      interval: 30s
      timeout: 20s
      retries: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=True
      - PGADMIN_CONFIG_WTF_CSRF_ENABLED=True

      - TZ=Europe/Moscow

    depends_on:
      - app_db
    volumes:
      - ./volumes/pgadmin-data:/var/lib/pgadmin
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  app_db:
    image: postgres:17.6
    container_name: app_db
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/app_db/postgres:/var/lib/postgresql/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  redis:
    image: redis:8.2.1
    container_name: redis
    environment:
      - REDIS_USER_PASSWORD=${REDIS_USER_PASSWORD}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/redis:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  clickhouse:
    image: clickhouse/clickhouse-server:25.9
    container_name: clickhouse
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    environment:
      - CLICKHOUSE_USER=${CLICKHOUSE_USER}
      - CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/clickhouse/data:/var/lib/clickhouse
      - ./volumes/clickhouse/logs:/var/log/clickhouse-server
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    container_name: open-telemetry
    restart: always
    depends_on:
      - clickhouse
    environment:
      - TZ=Europe/Moscow
    volumes:
      - ./configs/opentelemetry/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net
    
  hyperdx:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    container_name: hyperdx-ui
    depends_on:
      - clickhouse
      - otel-collector
    environment:
      - HYPERDX_API_URL=${HYPERDX_API_URL}
      - HYPERDX_API_PORT=${HYPERDX_API_PORT}
      - HYPERDX_APP_URL=${HYPERDX_APP_URL}
      - HYPERDX_APP_PORT=${HYPERDX_APP_PORT}
      - FRONTEND_URL=${FRONTEND_URL}  # for production
      - HYPERDX_BASE_PATH=${HYPERDX_BASE_PATH}  # for production
      - NEXT_PUBLIC_HYPERDX_BASE_PATH=${NEXT_PUBLIC_HYPERDX_BASE_PATH}  # for production

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/hyperdx/data:/data/db
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: always
    depends_on:
      - clickhouse
      - otel-collector
    environment:
      - GF_SECURITY_ADMIN_USER=${GF_SECURITY_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}

      - TZ=Europe/Moscow
    volumes:
      - ./volumes/grafana:/var/lib/grafana
      - ./configs/grafana/grafana.ini:/etc/grafana/grafana.ini
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

networks:
  schedule_net:
    driver: bridge

@datnguyennnx
Contributor Author

datnguyennnx commented Dec 25, 2025

events {
    worker_connections 1024;
}

stream {

    limit_conn_zone $binary_remote_addr zone=stream_conn_limit_per_ip:10m;

    server {
        listen 6379; 
        proxy_pass redis:6379;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;

        limit_conn stream_conn_limit_per_ip 2;
    }
    
}

http {

    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=10r/s;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name schedule-imsit.ru; 
        location /.well-known/acme-challenge/ { root /var/www/certbot; }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name schedule-imsit.ru; # Replace with your domain

        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        location /schedule/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            proxy_cache_valid 200 10m;

            proxy_pass http://app:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
        }

        location /console/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            rewrite ^/console/(.*)$ /$1 break;
            proxy_pass http://minio:9001;

            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
    
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
        }

        location /pgadmin/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;
            
            rewrite ^/pgadmin/(.*)$ /$1 break;
            proxy_pass http://pgadmin:80;

            proxy_set_header Host $host;
            proxy_set_header X-Script-Name /pgadmin;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /grafana/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            proxy_pass http://grafana:3000;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Server $host;
        }

        location /hyperdx/ {

            limit_req zone=req_limit_per_ip burst=10 nodelay;

            proxy_pass http://hyperdx-ui:8081/hyperdx/;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header Authorization $http_authorization;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_buffering off;
        }

    }

}

Try this config, @simadimonyan.

@simadimonyan

@datnguyennnx

I get a 404 and a redirect from https://schedule-imsit.ru/hyperdx/ to https://schedule-imsit.ru/hyperdx/hyperdx.


Browser console logs
schedule-imsit.ru-1766684644060.log
