You've arrived at the repo for the backend of the Host Based Inventory (HBI). If you're looking for API, integration or user documentation for HBI please see the Insights Inventory Documentation.
- Getting started
- Running the webserver locally
- Legacy Support
- Identity
- Payload Tracker integration
- Database migrations
- Container builds
- Metrics
- Documentation
- Release process
- Rollback process
- Updating the System Profile
- Logging System Profile fields
- Running ad hoc jobs using a different image
- Debugging local code with services deployed into Kubernetes namespaces
- Contributing
- Claude Code Integration
Before starting, ensure you have the following installed on your system:
- Podman: For running containers and services.
- Python 3.12.x: The recommended version for this project.
- pipenv: For managing Python dependencies.
Local development also requires `pg_config`, which is installed with the PostgreSQL developer library.
To install it, use the command appropriate for your system:
```bash
# Fedora/RHEL
sudo dnf install libpq-devel postgresql python3.12-devel

# Debian/Ubuntu
sudo apt-get install libpq-dev postgresql python3.12-dev

# macOS
brew install postgresql@16
```

Create a `.env` file in your project root with the following content. Replace placeholders with appropriate values for your environment.
```bash
cat > ${PWD}/.env <<EOF
# RUNNING HBI Locally
PROMETHEUS_MULTIPROC_DIR=/tmp
BYPASS_RBAC="true"
BYPASS_UNLEASH="true"
BYPASS_KESSEL="true"
# Optional legacy prefix configuration
# PATH_PREFIX="/r/insights/platform"
APP_NAME="inventory"
INVENTORY_DB_USER="insights"
INVENTORY_DB_PASS="insights"
INVENTORY_DB_HOST="localhost"
INVENTORY_DB_NAME="insights"
INVENTORY_DB_POOL_TIMEOUT="5"
INVENTORY_DB_POOL_SIZE="5"
INVENTORY_DB_SSL_MODE=""
INVENTORY_DB_SSL_CERT=""
UNLEASH_TOKEN='*:*.dbffffc83b1f92eeaf133a7eb878d4c58231acc159b5e1478ce53cfc'
UNLEASH_CACHE_DIR=./.unleash
UNLEASH_URL="http://localhost:4242/api"
# Kafka Export Service Configuration
KAFKA_EXPORT_SERVICE_TOPIC="platform.export.requests"
EOF
```

After creating the file, source it to set the environment variables:
```bash
source .env
```

- Install dependencies:

```bash
pipenv install --dev
```

- Activate the virtual environment:

```bash
pipenv shell
```

Provide a local directory for database persistence:

```bash
mkdir ~/.pg_data
```

If using a different directory, update the volumes section in dev.yml.
All dependent services are managed by Podman Compose and are listed in the dev.yml file. This includes the web server, MQ server, database, Kafka, and other infrastructure services.
Note: This repository uses git submodules (e.g., librdkafka). If you haven't already, clone the repository with submodules:
```bash
git clone --recurse-submodules https://github.com/RedHatInsights/insights-host-inventory.git
```

Or, if you've already cloned without submodules, initialize them with:

```bash
git submodule update --init --recursive
```

Start the services with the following command:
```bash
podman compose -f dev.yml up -d
```

The web and MQ servers will automatically start when this command is run. By default, the database container uses a bit of local storage so that data you enter persists across multiple starts of the container. If you want to destroy that data, do the following:

```bash
podman compose -f dev.yml down
rm -r ~/.pg_data  # or another directory you defined in volumes
```

Run the database migrations:

```bash
make upgrade_db
```

- Note: You may need to add a host entry for Kafka:

```bash
echo "127.0.0.1 kafka" | sudo tee -a /etc/hosts
```

To create hosts, run the following command:
```bash
make run_inv_mq_service_test_producer NUM_HOSTS=800
```

- By default, it creates one host if `NUM_HOSTS` is not specified.
- Optionally, you may need to pass `INVENTORY_HOST_ACCOUNT=5894300` to the command above to override the default `org_id` (321).
- Optionally, you may want to create different types of hosts by passing `HOST_TYPE=[sap|rhsm|qpc]`. By default, it will create standard hosts with basic system profile data.
The new Kafka producer supports creating different types of hosts with various configurations:
```bash
# Create default hosts
python utils/kafka_producer.py --num-hosts 10

# Create RHSM hosts
python utils/kafka_producer.py --host-type rhsm --num-hosts 5

# Create QPC hosts
python utils/kafka_producer.py --host-type qpc --num-hosts 3

# Create SAP hosts
python utils/kafka_producer.py --host-type sap --num-hosts 2

# List available host types
python utils/kafka_producer.py --list-types

# Use custom Kafka settings
python utils/kafka_producer.py --host-type sap --num-hosts 5 \
    --bootstrap-servers localhost:29092 \
    --topic platform.inventory.host-ingress
```

Available Host Types:
- `default`: Standard hosts with basic system profile data
- `rhsm`: Red Hat Subscription Manager hosts with RHSM-specific facts and metadata
- `qpc`: Quipucords Product Catalog hosts with discovery-specific data
- `sap`: SAP hosts with SAP workloads data in the dynamic system profile
Environment Variables:
- `NUM_HOSTS`: Number of hosts to create (default: 1)
- `HOST_TYPE`: Default host type (default: "default")
- `KAFKA_BOOTSTRAP_SERVERS`: Kafka bootstrap servers (default: "localhost:29092")
- `KAFKA_HOST_INGRESS_TOPIC`: Kafka topic for host ingress (default: "platform.inventory.host-ingress")
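The precedence of these variables and their defaults can be sketched in Python. This is an illustration only; `resolve_producer_config` is a hypothetical helper, not part of the repo:

```python
import os


def resolve_producer_config(environ=None):
    """Resolve producer settings from env vars, falling back to the
    documented defaults. Hypothetical sketch, not the repo's code."""
    env = environ if environ is not None else os.environ
    return {
        "num_hosts": int(env.get("NUM_HOSTS", "1")),
        "host_type": env.get("HOST_TYPE", "default"),
        "bootstrap_servers": env.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:29092"),
        "topic": env.get("KAFKA_HOST_INGRESS_TOPIC", "platform.inventory.host-ingress"),
    }
```

Passing an empty mapping yields the documented defaults; any variable present in the environment overrides its default.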
```bash
pipenv shell
make run_inv_export_service
```

In another terminal, generate events for the export service with:

```bash
make sample-request-create-export
```

By default, it will send a JSON-format request. To modify the data format, use:

```bash
make sample-request-create-export format=[json|csv]
```

Want to learn more? See the Export Service documentation for details on how the export service works.
You can run the tests using pytest:
```bash
pytest --cov=.
```

Or run individual tests:

```bash
# To run all tests in a specific file:
pytest tests/test_api_auth.py

# To run a specific test
pytest tests/test_api_auth.py::test_validate_valid_identity
```

- Note: Ensure DB-related environment variables are set before running tests.
The repository now includes the IQE (Insights QE) test suite in the iqe-host-inventory-plugin/ directory. These are comprehensive integration tests that cover:
- REST API endpoints (backend tests)
- UI tests (frontend tests)
- Database tests
- Resilience tests
- RBAC tests
- Notifications tests
Running IQE Tests Locally:
The IQE tests require special dependencies and configuration. For detailed instructions on running IQE tests locally, see the IQE README.
PR Checks:
The IQE smoke tests are automatically run as part of the PR check pipeline. When IQE_INSTALL_LOCAL_PLUGIN=true (default), the pr_check.sh script:
- Builds the PR commit image
- Runs unit tests
- Deploys to an ephemeral environment
- Deploys a CJI (ClowdJobInvocation) pod with the `--debug-pod` option
- Copies the local IQE plugin from `iqe-host-inventory-plugin/` to the CJI pod
- Installs the plugin in editable mode inside the pod
- Runs IQE smoke tests (tests marked with `backend` and `smoke`) with your local changes
- Collects test artifacts
This ensures that every PR is tested with the exact IQE test code in the repository, not the version from Nexus. The local IQE plugin deployment is controlled by the IQE_INSTALL_LOCAL_PLUGIN environment variable set in pr_check_common.sh.
How it works:
- `deploy_ephemeral_env.sh`: Creates the ephemeral namespace and deploys HBI
- `run_cji_with_local_plugin.sh`: Deploys the CJI pod, copies the local plugin, installs it, and runs tests
- `post_test_results.sh`: Collects and publishes test results
When running the web server locally for development, the Prometheus configuration is done automatically. You can run the web server directly using this command:
```bash
python3 run_gunicorn.py
```

Note: If you started services with `podman compose -f dev.yml up -d`, the web server is already running in the `hbi-web` container.
Some apps still need to use the legacy API path, which by default is /r/insights/platform/inventory/v1/.
In case legacy apps require this prefix to be changed, it can be modified using this environment variable:
```bash
export INVENTORY_LEGACY_API_URL="/r/insights/platform/inventory/api/v1"
```

When testing the API, set the identity header in curl:

```
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiJ0ZXN0IiwidHlwZSI6IlVzZXIiLCJhdXRoX3R5cGUiOiJiYXNpYy1hdXRoIiwidXNlciI6eyJ1c2VybmFtZSI6InR1c2VyQHJlZGhhdC5jb20iLCJlbWFpbCI6InR1c2VyQHJlZGhhdC5jb20iLCJmaXJzdF9uYW1lIjoidGVzdCIsImxhc3RfbmFtZSI6InVzZXIiLCJpc19hY3RpdmUiOnRydWUsImlzX29yZ19hZG1pbiI6ZmFsc2UsImlzX2ludGVybmFsIjp0cnVlLCJsb2NhbGUiOiJlbl9VUyJ9fX0=
```
This is the Base64 encoding of:
```json
{
  "identity": {
    "org_id": "test",
    "type": "User",
    "auth_type": "basic-auth",
    "user": {
      "username": "tuser@redhat.com",
      "email": "tuser@redhat.com",
      "first_name": "test",
      "last_name": "user",
      "is_active": true,
      "is_org_admin": false,
      "is_internal": true,
      "locale": "en_US"
    }
  }
}
```

The above header has the "User" identity type, but it's possible to use a "System" type header as well.
```
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiAidGVzdCIsICJhdXRoX3R5cGUiOiAiY2VydC1hdXRoIiwgInN5c3RlbSI6IHsiY2VydF90eXBlIjogInN5c3RlbSIsICJjbiI6ICJwbHhpMTN5MS05OXV0LTNyZGYtYmMxMC04NG9wZjkwNGxmYWQifSwidHlwZSI6ICJTeXN0ZW0ifX0=
```
This is the Base64 encoding of:
```json
{
  "identity": {
    "org_id": "test",
    "auth_type": "cert-auth",
    "system": {
      "cert_type": "system",
      "cn": "plxi13y1-99ut-3rdf-bc10-84opf904lfad"
    },
    "type": "System"
  }
}
```

If you want to encode other JSON documents, you can use the following command:
```bash
echo -n '{"identity": {"org_id": "0000001", "type": "System"}}' | base64 -w0
```

The Swagger UI provides an interactive interface for testing the API endpoints locally.
Prerequisites:
- Ensure the web service is running (via `podman compose -f dev.yml up -d`)
- The service should be accessible at `http://localhost:8080/api/inventory/v1/ui/`
Access Swagger UI:
- Open your browser and navigate to: `http://localhost:8080/api/inventory/v1/ui/`
- The Swagger UI will display all available API endpoints with their documentation.
Testing API Requests:
By default, RBAC is bypassed in local development (via BYPASS_RBAC="true" in
.env), so you can make requests without authentication. However, you still
need to provide a valid identity header for some endpoints.
Option 1: Using the Swagger UI Interface
- Add the identity header:
  - Click on "Authorize" (at the top right of the page)
  - Header name: `x-rh-identity`
  - Header value: `eyJpZGVudGl0eSI6eyJvcmdfaWQiOiJ0ZXN0IiwidHlwZSI6IlVzZXIiLCJhdXRoX3R5cGUiOiJiYXNpYy1hdXRoIiwidXNlciI6eyJ1c2VybmFtZSI6InR1c2VyQHJlZGhhdC5jb20iLCJlbWFpbCI6InR1c2VyQHJlZGhhdC5jb20iLCJmaXJzdF9uYW1lIjoidGVzdCIsImxhc3RfbmFtZSI6InVzZXIiLCJpc19hY3RpdmUiOnRydWUsImlzX29yZ19hZG1pbiI6ZmFsc2UsImlzX2ludGVybmFsIjp0cnVlLCJsb2NhbGUiOiJlbl9VUyJ9fX0=`
- Click on an endpoint (e.g., `GET /hosts`)
- Click the "Try it out" button
- Fill in any required parameters
- Click "Execute"
Option 2: Using curl (from Swagger UI)
After configuring your request in the Swagger UI, you can copy the generated curl command and run it directly from your terminal.
Common Test Identity Headers:
User identity (for general API testing):
```
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiJ0ZXN0IiwidHlwZSI6IlVzZXIiLCJhdXRoX3R5cGUiOiJiYXNpYy1hdXRoIiwidXNlciI6eyJ1c2VybmFtZSI6InR1c2VyQHJlZGhhdC5jb20iLCJlbWFpbCI6InR1c2VyQHJlZGhhdC5jb20iLCJmaXJzdF9uYW1lIjoidGVzdCIsImxhc3RfbmFtZSI6InVzZXIiLCJpc19hY3RpdmUiOnRydWUsImlzX29yZ19hZG1pbiI6ZmFsc2UsImlzX2ludGVybmFsIjp0cnVlLCJsb2NhbGUiOiJlbl9VUyJ9fX0=
```
System identity (for system-level operations):
```
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiAidGVzdCIsICJhdXRoX3R5cGUiOiAiY2VydC1hdXRoIiwgInN5c3RlbSI6IHsiY2VydF90eXBlIjogInN5c3RlbSIsICJjbiI6ICJwbHhpMTN5MS05OXV0LTNyZGYtYmMxMC04NG9wZjkwNGxmYWQifSwidHlwZSI6ICJTeXN0ZW0ifX0=
```
Example: Listing Hosts
- Add the `x-rh-identity` header (see above)
- Navigate to the `GET /hosts` endpoint in Swagger UI
- Click "Try it out"
- Set query parameters as needed (e.g., `per_page: 10`)
- Click "Execute"
- View the response in the "Responses" section
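The same request can be made from Python. This is a minimal sketch: the header-building helper is illustrative (not a repo utility), and the server is assumed to be running locally at `localhost:8080`:

```python
import base64
import json
import urllib.request


def identity_header(identity: dict) -> str:
    """Base64-encode an identity document for the x-rh-identity header."""
    return base64.b64encode(json.dumps({"identity": identity}).encode()).decode()


ident = {"org_id": "test", "type": "User", "auth_type": "basic-auth"}
req = urllib.request.Request(
    "http://localhost:8080/api/inventory/v1/hosts?per_page=10",
    headers={"x-rh-identity": identity_header(ident)},
)
# Uncomment to send the request against a running local server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["results"])
```

This mirrors the curl command Swagger UI generates, which is often more convenient to script in tests.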
For Kafka messages, the Identity must be set in the platform_metadata.b64_identity field.
Want to learn more? For comprehensive documentation on message formats, validation, and the host insertion and update flow, see Host Insertion in the HBI documentation.
The Identity provided limits access to specific hosts.
For API requests, the user can only access Hosts which have the same Org ID as the provided Identity.
For Host updates via Kafka messages, a Host can only be updated if the Org ID matches
and the `Host.system_profile.owner_id` matches the provided `identity.system.cn` value.
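The Kafka-update rule above can be sketched as a predicate. This is a simplified illustration of the check, not the service's actual implementation:

```python
def can_update_via_kafka(host: dict, identity: dict) -> bool:
    """A host may be updated via Kafka only if the Org ID matches AND
    system_profile.owner_id equals identity.system.cn. Illustrative only."""
    same_org = host.get("org_id") == identity.get("org_id")
    owner_id = host.get("system_profile", {}).get("owner_id")
    cn = identity.get("system", {}).get("cn")
    return same_org and owner_id is not None and owner_id == cn
```

Note that a missing `owner_id` fails the check; both conditions must hold.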
The inventory service integrates with the Payload Tracker service. Configure it using these environment variables:
```bash
KAFKA_BOOTSTRAP_SERVERS=localhost:29092
PAYLOAD_TRACKER_KAFKA_TOPIC=platform.payload-status
PAYLOAD_TRACKER_SERVICE_NAME=inventory
PAYLOAD_TRACKER_ENABLED=true
```

- Enabled: Set `PAYLOAD_TRACKER_ENABLED=false` to disable the tracker.
- Usage: The tracker logs success or errors for each payload operation. For example, if a payload contains multiple hosts and one fails, it's logged as a "processing_error" but doesn't mark the entire payload as failed.
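The per-host error semantics can be illustrated with a toy sketch, where `process_host` stands in for the real per-host handler and the counters stand in for the tracker's status messages:

```python
def process_payload(hosts, process_host):
    """Process each host independently; one failure is recorded as a
    processing_error without failing the whole payload. Illustrative only."""
    results = {"success": 0, "processing_error": 0}
    for host in hosts:
        try:
            process_host(host)
            results["success"] += 1
        except Exception:
            # The tracker would log a "processing_error" for this host only.
            results["processing_error"] += 1
    return results
```

A payload of three hosts with one bad host would thus report two successes and one processing error, and the payload as a whole still completes.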
Generate new migration scripts with:
```bash
make migrate_db message="Description of your changes"
```

In managed environments, the database migrations are run by the run-db-migrations job.
This job runs once per release, as its name contains the image tag (`run-db-migrations-<IMAGE_TAG>`).
Build local development containers with:
```bash
podman build . -f dev.dockerfile -t inventory:dev
```

- Note: Some packages require a subscription. Ensure your host has access to valid RHEL content.
Prometheus integration provides monitoring endpoints:
- `/health`: Liveness probe endpoint.
- `/metrics`: Prometheus metrics endpoint.
- `/version`: Returns build version info.
Cron jobs (reaper, sp-validator) push metrics to
a Prometheus Pushgateway at PROMETHEUS_PUSHGATEWAY (default:
localhost:9091).
The source of the HBI documentation is located in the docs/ folder. Any change to service behavior must also be reflected in the documentation to keep it up to date.
New files added to the folder will be accessible on the InScope HBI page, but renaming existing files can break direct links.
This section describes the process of getting a code change from a pull request all the way to production.
It all starts with a pull request. When a new pull request is opened, some jobs are run automatically. These jobs are defined in app-interface here.
- host-inventory pr-checker runs the following:
  - database migrations
  - code style checks
  - unit tests
  - smoke (the most important) IQE tests
- `ci.ext.devshift.net PR build - All tests` runs all the IQE tests (with disabled RBAC) on the PR's code.
- `ci.ext.devshift.net PR build - RBAC tests` runs RBAC integration IQE tests on the PR's code.
- host-inventory build-master builds the container image and pushes it to Quay, where it is scanned for vulnerabilities.
Should any of these fail, this is indicated directly on the pull request.
When all of these checks pass and a reviewer approves the changes, the pull request can be merged by someone from the @RedHatInsights/host-based-inventory-committers team.
When a pull request is merged to master, a new container image is built and tagged as insights-inventory:latest. This image is then automatically deployed to the Stage environment.
Once the image lands in the Stage environment, the QE testing can begin. People in @team-inventory-dev run the full IQE test suite against Stage, and then report the results in the #team-insights-inventory channel. Everything that needs to be done before we can do a Prod release is mentioned in the "Promoting deployments to Prod" section of the iqe-host-inventory-plugin README
In order to promote a new image to the production environment, it is necessary to update
the deploy-clowder.yml
file.
The ref parameter on the prod-host-inventory-prod namespace needs to be updated to the SHA of the validated image.
Also, if there are new commits in cyndi-operator or
xjoin-kafka-connect repositories,
those should be deployed together with HBI as well.
If xjoin-kafka-connect doesn't have new commits that need to be deployed (it's a very stable repo, so deploying a new version of it to Prod is very rare), and we don't need to make any app-interface config changes, it is preferred to use app-interface-bot to create the release MR. This bot automatically scans the repositories and creates a release MR with the latest commits. It also adds all released PRs and linked Jira cards to the description of the MR.
To run the app-interface-bot, go to
pipelines and click
"New pipeline" in the top right corner of the page. Select "host-inventory" as HMS_SERVICE,
and put "master" (to release the latest commit) into the HOST_INVENTORY_BACKEND and CYNDI_OPERATOR
variables. If the CI is failing in GitHub on the latest commit for an irrelevant reason, and you are
sure that it is OK, also set the FORCE variable to "--force". Now you can click "New pipeline"
and the bot will create the release MR for you in a few seconds. When it's done, it will send a
Slack message to the #insights-experiences-release channel with a link to the MR
(example).
The bot is configured to automatically create these release MRs on Mondays. Every time
it does so, carefully check whether the release needs any config change. For example, if the
release includes a DB migration, there is a high chance that we want to reduce the number of
MQ replicas during this migration. In that case, feel free to close the MR either manually or by
creating a new pipeline and
adding the MR ID to the CLOSE_MR variable. Then you can create a new release MR manually with
everything that's needed, and you can copy the description from the bot's release MR.
For the CI pipeline to run tests on your fork, you'll need to add @devtools-bot as a Maintainer. See this guide on how to do that.
After the MR has been opened, carefully check all PRs that are going to be released and if everything is
OK and well tested (all Jira cards that are being released are in "Release pending" state, there is
no "On QA" Jira), then ask someone else from the Inventory team to also check the release MR and
approve it by adding a /lgtm comment.
Afterward, the MR will be merged automatically and the changes will be deployed to the production environment.
The engineer who approved the MR is then responsible for monitoring the rollout of the new image.
Once that happens, contact @team-inventory-dev and request the image to be re-tested in the production environment. The new image will also be tested automatically when the Full Prod Check pipeline is run (twice daily).
It is essential to monitor the health of the service during and after the production deployment. A non-exhaustive list of things to watch includes:
- Monitor deployment in:
  - OpenShift namespace: host-inventory-prod
    - primarily ensure that the new pods are spun up properly
  - Slack channel: Inventory Slack Channel
    - for any inventory-related Prometheus alerts
  - Grafana dashboard: Inventory Dashboard
    - for any anomalies such as error rate or consumer lag
  - Kibana logs here
    - for any error-level log entries
Should unexpected problems occur during the deployment,
it is possible to do a rollback.
This is done by updating the ref parameter in deploy-clowder.yml to point to the previous commit SHA,
or by reverting the MR that triggered the production deployment.
When contributing a new field to the system_profile schema, please ensure you complete the following steps:
- Add the new field
- Annotate the field
  - Add an example of the value(s) you expect to receive using the `example` keyword. For string fields, provide at least 2 unique examples.
  - Add a description of the field. If the field should support `range` or `wildcard` operations when queried against, note that here.
- Add filtering flags
  - If the field should support wildcard operations in filtering, add `x-wildcard: true`. Defaults to `false` otherwise.
- Validate the field
  - The field should have the strictest possible validation rules applied to it.
- Add positive and negative test examples
  - Add examples of valid/invalid values in `tests/utils/valids.py` and `tests/utils/invalids.py` respectively.
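Putting the steps above together, a new field entry might look like the following. This is a hypothetical sketch (the field name, length limit, and examples are invented), not an excerpt from the actual schema file:

```yaml
example_field:
  type: string
  maxLength: 100          # apply the strictest validation possible
  x-wildcard: true        # only if the field should support wildcard filtering
  description: >-
    Short description of the field. Note here if it supports range or
    wildcard operations when queried against.
  example: "value-one"    # for strings, provide at least 2 unique examples
```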
Before committing, make sure these apps are not going to be negatively affected by the changes as they interact with the system profile.
- RHSM
- Yupana
- Puptoo
- Satellite
- Discovery
- Patch
- Content
- Vulnerability
- Malware
- Remediations
- Compliance
- Image builder
- Digital roadmap
- BU
Use the environment variable SP_FIELDS_TO_LOG to log the System Profile fields of a host. These fields are logged when adding, updating or deleting a host from inventory.
```bash
SP_FIELDS_TO_LOG="cpu_model,disk_devices"
```

This logging helps with debugging hosts in Kibana.
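The variable is a comma-separated list of field names; the filtering it implies can be sketched like this (an illustration only, `sp_fields_to_log` is not a repo helper):

```python
import os


def sp_fields_to_log(system_profile: dict, environ=None) -> dict:
    """Pick only the configured system-profile fields for logging.
    Illustrative sketch of the SP_FIELDS_TO_LOG behavior."""
    env = environ if environ is not None else os.environ
    raw = env.get("SP_FIELDS_TO_LOG", "")
    wanted = {f.strip() for f in raw.split(",") if f.strip()}
    return {k: v for k, v in system_profile.items() if k in wanted}
```

With `SP_FIELDS_TO_LOG="cpu_model,disk_devices"`, only those two fields of a host's system profile would appear in the log entries.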
There may be a job (ClowdJobInvocation) which requires using a special image that is different
from the one used by the parent application, i.e. host-inventory.
Clowder does not allow this out of the box.
Running a Special Job describes how to accomplish it.
Making local code work with the services running in Kubernetes requires some actions provided in Debugging Local Code with Services Deployed into Kubernetes Namespaces.
The repository uses pre-commit to enforce code style. Install pre-commit hooks:
```bash
pre-commit install
```

If inside the Red Hat network, also ensure rh-pre-commit is installed as per the instructions here.
This project includes Claude Code hooks, slash commands, and make targets that follow the install-and-maintain pattern for AI-assisted development.
```bash
# Deterministic setup (hooks run automatically)
make hbi-cldi

# Agentic setup (interactive, runs /hbi-install slash command)
make hbi-cldii

# Agentic maintenance (runs /hbi-maintenance slash command)
make hbi-cldmm
```

Or start Claude Code directly in the repo; the SessionStart hook will automatically load your .env variables into the session.
Hooks are defined in .claude/settings.json and run automatically at specific lifecycle points.
| Hook | Trigger | Script | Purpose |
|---|---|---|---|
| SessionStart | Every Claude Code session | `.claude/hooks/session_start.py` | Loads `.env` variables into the Claude session via `CLAUDE_ENV_FILE` |
| Setup (init) | `claude --init` | `.claude/hooks/setup_init.py` | Full deterministic setup: prereqs, dirs, deps, Podman, DB migrations, health check |
| Setup (maintenance) | `claude --maintenance` | `.claude/hooks/setup_maintenance.py` | Update deps, pull images, restart services, run migrations and style checks |
All hooks output structured JSON using the hookSpecificOutput format:
```json
{
  "hookSpecificOutput": {
    "hookEventName": "Setup",
    "additionalContext": "Summary of what happened..."
  }
}
```

Hook logs are written to `.claude/hooks/` (e.g., `setup.init.log`, `setup.maintenance.log`, `session_start.log`).
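A hook script can emit this shape with a few lines of Python. This is a minimal sketch of the serialization only, not one of the repo's actual hook scripts:

```python
import json


def hook_output(event_name: str, context: str) -> str:
    """Serialize a hook result in the hookSpecificOutput format."""
    return json.dumps(
        {
            "hookSpecificOutput": {
                "hookEventName": event_name,
                "additionalContext": context,
            }
        },
        indent=2,
    )


# A hook would print this to stdout for Claude Code to consume.
print(hook_output("Setup", "Summary of what happened..."))
```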
Slash commands are markdown files in .claude/commands/ that instruct Claude Code to perform multi-step tasks.
| Command | File | Purpose |
|---|---|---|
| `/hbi-install` | `.claude/commands/hbi-install.md` | Runs `/hbi-prime` for orientation, then the setup init script, reports results |
| `/hbi-install-hil` | `.claude/commands/hbi-install-hil.md` | Interactive setup: asks preferences for database, deps, and Podman services |
| `/hbi-maintenance` | `.claude/commands/hbi-maintenance.md` | Runs `/hbi-prime` for orientation, then the maintenance script, reports results |
| `/hbi-doctor` | `.claude/commands/hbi-doctor.md` | Health checks all services: Podman, PostgreSQL, Kafka, HBI web, DB migrations, Python env |
| `/hbi-prime` | `.claude/commands/hbi-prime.md` | Quick orientation: reads key files (CLAUDE.md, README.md, mk/private.mk, dev.yml), reports project summary |
| `/hbi-api-hosts` | `.claude/commands/hbi-api-hosts.md` | Query and manage hosts (list, get by ID, filter, system profile, tags, update, delete) |
| `/hbi-api-groups` | `.claude/commands/hbi-api-groups.md` | Query and manage groups (list, create, get hosts in group, add/remove hosts, delete) |
| `/hbi-api-tags` | `.claude/commands/hbi-api-tags.md` | Query tags (list active tags, search, per-host tags and counts) |
| `/hbi-api-system-profile` | `.claude/commands/hbi-api-system-profile.md` | Query system profiles (per-host, OS distribution, SAP system data and SIDs) |
| `/hbi-api-staleness` | `.claude/commands/hbi-api-staleness.md` | Query and manage staleness configuration (get, set, reset to defaults) |
| `/hbi-vuln-triage` | `.claude/commands/hbi-vuln-triage.md` | Triage container vulnerability reports: dedup, cross-ref Pipfile.lock/Dockerfile, produce Jira-formatted output |
Custom make targets for development workflows and Claude Code invocations are defined in mk/private.mk. List them with:
```bash
make hbi-help
```

Development workflows:

| Target | Description |
|---|---|
| `make hbi-up` | Start all Podman Compose services |
| `make hbi-down` | Stop all Podman Compose services |
| `make hbi-logs SERVICE=<name>` | View service logs (optionally for a specific service) |
| `make hbi-migrate` | Run database migrations |
| `make hbi-test ARGS="<extra args>"` | Run tests with coverage |
| `make hbi-style` | Run code style checks |
| `make hbi-deps` | Install Python dependencies |
| `make hbi-health` | Health check the web service |
| `make hbi-ps` | Check Podman container status |
| `make hbi-reset` | Reset development environment (stop services, remove db data) |
Claude Code invocations:

| Target | Description |
|---|---|
| `make hbi-cldi` | Deterministic codebase setup (`claude --init`) |
| `make hbi-cldm` | Deterministic codebase maintenance (`claude --maintenance`) |
| `make hbi-cldii` | Agentic setup via `/hbi-install` slash command |
| `make hbi-cldit` | Agentic interactive setup via `/hbi-install-hil` |
| `make hbi-cldmm` | Agentic maintenance via `/hbi-maintenance` slash command |
Send SQL queries to a GABI endpoint either interactively (REPL) or non-interactively (file/stdin). Results can be displayed as a formatted table, raw JSON, or both.
- Logged into your OpenShift cluster via `oc`, if using default authentication and URL detection.
- Interactive REPL (default environment: prod):
  ```bash
  ./utils/run_gabi_interactive.py --interactive
  ```
- Interactive on stage:
  ```bash
  ./utils/run_gabi_interactive.py --env stage --interactive
  ```
- Explicit URL override (interactive):
  ```bash
  ./utils/run_gabi_interactive.py --url https://gabi-host-inventory-stage.apps.<cluster>/query --interactive
  ```
- Non-interactive from file:
  ```bash
  ./utils/run_gabi_interactive.py --env stage --file query.sql
  ```
- Non-interactive from stdin:
  ```bash
  cat query.sql | ./utils/run_gabi_interactive.py --env stage
  ```
- Output format control:
  ```bash
  # JSON only
  ./utils/run_gabi_interactive.py --file query.sql --format json

  # Both table and JSON
  ./utils/run_gabi_interactive.py --file query.sql --format both
  ```
- `--env {prod,stage}`: Chooses the target environment (defaults to prod). Determines the app used for URL derivation.
- `--url <API_ENDPOINT_URL>`: Explicit GABI endpoint. Overrides the derived URL.
- `--auth <AUTH_TOKEN>`: Bearer token. Defaults to the token from `oc whoami -t`.
- `--file <SQL_QUERY_FILE>`: Run a single query from a file (non-interactive).
- `--interactive`: Run in REPL mode and enter multiple queries.
- `--format {table,json,both}`: Output as pretty table, raw JSON, or both (default: table).
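Under the hood, the script POSTs the SQL to the GABI endpoint with a Bearer token. A stripped-down sketch of building such a request follows; it assumes GABI accepts a JSON body of the form `{"query": ...}`, and `build_gabi_request` is an illustrative helper, not part of the script:

```python
import json
import urllib.request


def build_gabi_request(url: str, token: str, sql: str) -> urllib.request.Request:
    """Build the authenticated POST request sent to a GABI endpoint.
    Assumes the {"query": ...} body shape; illustrative only."""
    body = json.dumps({"query": sql}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example (requires network access and a valid token to actually send):
# req = build_gabi_request(gabi_url, token, "SELECT COUNT(*) FROM hbi.hosts;")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```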
```bash
./utils/run_gabi_interactive.py --env stage --interactive
```

```
Connected to https://gabi-host-inventory-stage.apps.example.com/query
Type 'exit' or 'quit' to end the session.
Enter your SQL query (press Enter on an empty line to submit):
SELECT COUNT(*) FROM hbi.hosts;
```

```bash
./utils/run_gabi_interactive.py --env stage --file query.sql --format both
```