feat: add basepath config #1188
Conversation
🦋 Changeset detected. Latest commit: 70fa0a7. The changes in this PR will be included in the next version bump. This PR includes changesets to release 3 packages.

@datnguyennnx is attempting to deploy a commit to the HyperDX Team on Vercel. A member of the team first needs to authorize it.
I don’t think this change is necessary. Additionally, the OpAMP endpoint should not be exposed (it runs on the same API server but on a different port), so I don’t see a reason to introduce a reverse proxy in this case. If you want to prefix a subpath, I’d suggest configuring that at your reverse-proxy layer.
From my experience, setting up subpaths using only Traefik/nginx is tough and often breaks URLs. Even with that setup, many frontend links ignore /hyperdx and redirect to root paths, causing broken navigation; this shows how hard it is to handle subpaths fully at the proxy layer. Having a basePath config inside the app itself is much simpler and more reliable. The OpAMP endpoint is fine (if you accept this, I’ll remove the OpAMP endpoint so we only expose the API and app endpoints).
Thanks for the effort here — I see the motivation to make updating the URL subpath easier, and that’s definitely valuable. That said, I have a few concerns with the current approach:
For context: this proxy issue was problematic in HyperDX v1, but with the flexibility of Helm deployments and the internal API proxy (removing the need to deploy both services), things should already be simpler. I agree that configuring subpaths is a bit annoying. However, I think we should take a step back before spreading environment variables across the codebase in ways that risk breaking existing behavior. There might be a cleaner approach. Suggestion:
All problems solved, please re-review and test. Ready to merge!
Force-pushed from 3e5b0d1 to 007bf4f.

Review comment on Makefile (outdated):
```makefile
	--build-arg HYPERDX_BASE_PATH="${HYPERDX_BASE_PATH}" \
	--build-arg HYPERDX_API_BASE_PATH="${HYPERDX_API_BASE_PATH}" \
	--build-arg HYPERDX_OTEL_BASE_PATH="${HYPERDX_OTEL_BASE_PATH}" \
```
does this mean that devs need to rebuild the image whenever the env var changes?
@wrn14897 No rebuild needed! Those build-args were a holdover from an earlier iteration; I've removed them entirely from the Makefile.
The base paths are now read at runtime via process.env in a centralized util (basePath.ts), so you can switch subpaths dynamically with docker run -e HYPERDX_BASE_PATH=/hyperdx or a compose restart; no image rebuilds required. Verified: build once (make build-app), run root → subpath via -e, and curls/UI/API prefix correctly (e.g., /hyperdx/api/health returns 200 OK).
This aligns with your dynamic-config concern. Please re-review; tests/E2E confirm no regressions!
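A rough sketch of what such a centralized runtime helper could look like (the file and function names here are illustrative, not necessarily the PR's actual basePath.ts):

```typescript
// Hypothetical sketch of a runtime base-path helper: the env var is read
// per call via process.env, so no rebuild is needed when it changes.

/** Trim trailing slashes and ensure a single leading slash; '' means "no prefix". */
export function normalizeBasePath(raw: string | undefined): string {
  if (!raw) return '';
  const trimmed = raw.trim().replace(/\/+$/, '');
  if (trimmed === '' || trimmed === '/') return '';
  return trimmed.startsWith('/') ? trimmed : `/${trimmed}`;
}

/** Read the base path from the environment at request time, not build time. */
export function getBasePath(): string {
  return normalizeBasePath(process.env.HYPERDX_BASE_PATH);
}

/** Prefix an app-relative URL with the configured base path. */
export function withBasePath(path: string): string {
  return `${getBasePath()}${path.startsWith('/') ? path : `/${path}`}`;
}
```

With this shape, `docker run -e HYPERDX_BASE_PATH=/hyperdx` changes every generated URL on the next request without touching the image.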
Thanks for following up, @datnguyennnx. I may not have explained it clearly earlier. The /api should be proxied by the Next.js server, and in most cases people won’t really care what the API base path is (since the API and app are effectively bundled together in a single build). So, for example, if the subpath is foo, the API route should resolve to foo/api. My preference is to handle this primarily at the proxy layer, minimizing changes to the application code.
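The single-basePath approach described above could be sketched roughly like this (the API port and the rewrite rule are assumptions for illustration, not the repo's actual config):

```javascript
// Hypothetical next.config.js sketch: one basePath drives both the app
// and its /api proxy, so a subpath of `foo` automatically yields `foo/api`.
const basePath = process.env.HYPERDX_BASE_PATH || '';

const nextConfig = {
  basePath, // Next.js prefixes pages, assets, and <Link> hrefs with this
  async rewrites() {
    return [
      {
        // Next.js automatically applies basePath to rewrite sources,
        // so this matches `${basePath}/api/*` at request time.
        source: '/api/:path*',
        destination: 'http://localhost:8000/api/:path*', // API port is an assumption
      },
    ];
  },
};

module.exports = nextConfig;
```

This keeps the API path derived from the app path, so users only ever configure one value.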
Replaced with PR #1236
I get a 404. What did I do wrong?

compose.yaml:

```yaml
hyperdx:
  image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
  container_name: hyperdx-ui
  depends_on:
    - clickhouse
    - otel-collector
  environment:
    - HYPERDX_API_URL=${HYPERDX_API_URL}
    - HYPERDX_API_PORT=${HYPERDX_API_PORT}
    - HYPERDX_APP_URL=${HYPERDX_APP_URL}
    - HYPERDX_APP_PORT=${HYPERDX_APP_PORT}
    - FRONTEND_URL=${FRONTEND_URL}
    - HYPERDX_BASE_PATH=${HYPERDX_BASE_PATH}
    - NEXT_PUBLIC_HYPERDX_BASE_PATH=${NEXT_PUBLIC_HYPERDX_BASE_PATH}
    - TZ=Europe/Moscow
  volumes:
    - ./volumes/hyperdx/data:/data/db
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  networks:
    - schedule_net
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: 1
```

.env:

```
HYPERDX_API_URL=https://schedule-imsit.ru
HYPERDX_API_PORT=8000
HYPERDX_APP_URL=https://schedule-imsit.ru
HYPERDX_APP_PORT=8081
FRONTEND_URL=https://schedule-imsit.ru/hyperdx
HYPERDX_BASE_PATH=/hyperdx
NEXT_PUBLIC_HYPERDX_BASE_PATH=/hyperdx
```

nginx.conf
Mount the template in Docker Compose, then restart the stack and build the images again; in its current form this value must be set at build time, not runtime.
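To illustrate the build-time constraint mentioned above (the build-arg name is hypothetical; it mirrors the NEXT_PUBLIC_ variable used in this thread):

```shell
# NEXT_PUBLIC_* values are inlined into the Next.js client bundle during
# `next build`, so changing the subpath requires rebuilding the image,
# not just restarting with a new env value.
docker build \
  --build-arg NEXT_PUBLIC_HYPERDX_BASE_PATH=/hyperdx \
  -t hyperdx-app:subpath .
docker compose up -d --force-recreate
```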
How can I adapt this template to an extended configuration where $host serves multiple services? I would like to add a landing page on the / path and move the HyperDX service from / to the /hyperdx/ subpath.

nginx.conf

compose.yaml:

```yaml
services:
  nginx:
    image: nginx:stable
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "6379:6379"
    volumes:
      - ./configs/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./volumes/nginx/ssl:/etc/nginx/ssl
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=Europe/Moscow
    depends_on:
      - app
      - minio
      - pgadmin
      - hyperdx
      - grafana
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  app:
    container_name: app
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./:/service/schedule-parse-service
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - app_db
      - otel-collector
    environment:
      - TZ=Europe/Moscow
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  minio:
    restart: always
    image: minio/minio:latest
    container_name: minio
    hostname: minio.local
    environment:
      - MINIO_NOTIFY_WEBHOOK_AUTH_TOKEN_1=${MINIO_WEBHOOK_AUTH_TOKEN}
      - MINIO_NOTIFY_WEBHOOK_QUEUE_DIR=./volumes/minio/events
      - MINIO_ROOT_USER=${MINIO_ROOT_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
      - MINIO_SKIP_CLIENT=yes
      - MINIO_SCHEME=http
      - MINIO_CONSOLE_PORT_NUMBER=9001
      - MINIO_LOG_LEVEL=debug
      - MINIO_SERVER_URL=${MINIO_SERVER_URL}
      - MINIO_BROWSER_REDIRECT_URL=${MINIO_BROWSER_REDIRECT_URL}
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/minio/data:/data
      - ./volumes/minio/certs:/certs
      - ./minio-manual-init-webhook.sh:/minio-manual-init-webhook.sh
      - ./minio-auto-init-webhook.sh:/minio-auto-init-webhook.sh
      - ./minio-generate-keys.sh:/minio-generate-keys.sh
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: ["server", "/data", "--console-address", ":9001"]
    healthcheck:
      test: ["CMD", "curl", "-k", "http://minio:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=True
      - PGADMIN_CONFIG_WTF_CSRF_ENABLED=True
      - TZ=Europe/Moscow
    depends_on:
      - app_db
    volumes:
      - ./volumes/pgadmin-data:/var/lib/pgadmin
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  app_db:
    image: postgres:17.6
    container_name: app_db
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/app_db/postgres:/var/lib/postgresql/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  redis:
    image: redis:8.2.1
    container_name: redis
    environment:
      - REDIS_USER_PASSWORD=${REDIS_USER_PASSWORD}
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/redis:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  clickhouse:
    image: clickhouse/clickhouse-server:25.9
    container_name: clickhouse
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    environment:
      - CLICKHOUSE_USER=${CLICKHOUSE_USER}
      - CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/clickhouse/data:/var/lib/clickhouse
      - ./volumes/clickhouse/logs:/var/log/clickhouse-server
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    container_name: open-telemetry
    restart: always
    depends_on:
      - clickhouse
    environment:
      - TZ=Europe/Moscow
    volumes:
      - ./configs/opentelemetry/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  hyperdx:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    container_name: hyperdx-ui
    depends_on:
      - clickhouse
      - otel-collector
    environment:
      - HYPERDX_API_URL=${HYPERDX_API_URL}
      - HYPERDX_API_PORT=${HYPERDX_API_PORT}
      - HYPERDX_APP_URL=${HYPERDX_APP_URL}
      - HYPERDX_APP_PORT=${HYPERDX_APP_PORT}
      - FRONTEND_URL=${FRONTEND_URL} # for production
      - HYPERDX_BASE_PATH=${HYPERDX_BASE_PATH} # for production
      - NEXT_PUBLIC_HYPERDX_BASE_PATH=${NEXT_PUBLIC_HYPERDX_BASE_PATH} # for production
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/hyperdx/data:/data/db
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: always
    depends_on:
      - clickhouse
      - otel-collector
    environment:
      - GF_SECURITY_ADMIN_USER=${GF_SECURITY_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
      - TZ=Europe/Moscow
    volumes:
      - ./volumes/grafana:/var/lib/grafana
      - ./configs/grafana/grafana.ini:/etc/grafana/grafana.ini
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: 1
    networks:
      - schedule_net

networks:
  schedule_net:
    driver: bridge
```
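The layout being asked about could be sketched in nginx roughly like this (a sketch only: the upstream service name and app port are assumptions taken from the compose file and .env in this thread, not a tested config):

```nginx
server {
    listen 443 ssl;
    server_name schedule-imsit.ru;

    # Landing page stays on /; longest-prefix matching sends
    # everything else here unless a more specific location matches.
    location / {
        root /var/www/landing;
        index index.html;
    }

    # HyperDX moves to /hyperdx/; the prefix is kept so the app's
    # HYPERDX_BASE_PATH=/hyperdx matches the incoming URLs.
    location /hyperdx/ {
        proxy_pass http://hyperdx:8081;  # no URI after the port: prefix preserved
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Because `proxy_pass` has no URI component, nginx forwards the request path unchanged, which is what an app-side basePath expects.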
Try this config, @simadimonyan.
I get a 404 and a redirect from https://schedule-imsit.ru/hyperdx/ to https://schedule-imsit.ru/hyperdx/hyperdx.
Browser console logs:


Feature: BasePath Support for Subpath Deployments
Problem
Self-hosted users cannot deploy HyperDX under custom subpaths with nginx/Traefik reverse proxies (e.g., domain.com/hyperdx).
Current User Pain: Users who want subpath deployment have to:
Solution: Docker Environment Variable Approach
Added three environment variables to enable subpath deployment without requiring source-code modifications:
- HYPERDX_BASE_PATH: frontend Next.js basePath (e.g., /hyperdx)
- HYPERDX_API_BASE_PATH: backend API basePath (e.g., /hyperdx/api)
- HYPERDX_OTEL_BASE_PATH: OTEL collector basePath (e.g., /hyperdx/otel)
Key Innovation: Use Official Docker Images + Environment Variables
Before (Complex):
After (Simple):
Benefits
For Users:
For HyperDX Project:
Usage Example
Docker Compose Deployment
Environment Variables
```
# Set in .env file
HYPERDX_BASE_PATH=/hyperdx
HYPERDX_API_BASE_PATH=/hyperdx/api
HYPERDX_OTEL_BASE_PATH=/hyperdx/otel
```
nginx Reverse Proxy
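An nginx fragment matching the three prefixes above might look like this (the upstream name and ports are assumptions, not the PR's shipped template):

```nginx
# Longest-prefix match wins, so /hyperdx/api and /hyperdx/otel
# are routed before the catch-all /hyperdx location.
location /hyperdx/api/ {
    proxy_pass http://hyperdx:8000;  # API port assumed
    proxy_set_header Host $host;
}
location /hyperdx/otel/ {
    proxy_pass http://hyperdx:4318;  # OTLP/HTTP port assumed
    proxy_set_header Host $host;
}
location /hyperdx/ {
    proxy_pass http://hyperdx:8080;  # app port assumed
    proxy_set_header Host $host;
}
```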
Deployment Patterns Enabled
- https://domain.com/hyperdx → HyperDX frontend
- https://domain.com/hyperdx/api → HyperDX API
- https://domain.com/hyperdx/otel → OTEL collector
Update Workflow Improvement
Traditional Approach (Maintenance Burden):
Our Approach (Effortless Updates):
Testing
- Verified under the /hyperdx subpath (/hyperdx/api/config, /hyperdx/api/health)
Files Modified
- packages/app/next.config.js
- packages/app/src/api.ts
- packages/app/pages/api/[...all].ts
- packages/api/src/api-app.ts
- packages/api/src/opamp/app.ts
- docker-compose.yml
- docker/hyperdx/Dockerfile
- Makefile
Impact
This addresses a common deployment need for self-hosted users who want to run HyperDX under reverse proxies with custom paths without the maintenance burden of custom builds.
Result: transforms HyperDX from "clone sources and modify code" to "pull official image and set environment variables", enabling enterprise deployment flexibility while maintaining the official update path.