
Developing Trustification

This document describes how to run all of the Trustification processes for local development. You can skip any processes you don't need for your work.

Dependencies

Requires docker-compose to run dependent services.

cd deploy/compose
docker-compose -f compose.yaml up

This will start MinIO and Kafka in containers and initialize them accordingly so that you don't need to configure anything. Default arguments of Vexination components will work with this setup.
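Before starting any Trustification processes, you can sanity-check that both services are reachable. A sketch using bash's built-in /dev/tcp redirection (9000 and 9092 are the usual MinIO and Kafka defaults; adjust if your compose file maps different ports):

```shell
# Check that something is listening on the expected ports.
# 9000 (MinIO) and 9092 (Kafka) are the usual defaults; adjust
# if your compose file maps them differently.
check_port() {
  # bash-only: open a TCP connection via /dev/tcp in a subshell;
  # the exit status tells us whether the connection succeeded.
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

check_port localhost 9000 && echo "MinIO reachable" || echo "MinIO not reachable"
check_port localhost 9092 && echo "Kafka reachable" || echo "Kafka not reachable"
```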

Integration tests

Trustification comes with a set of integration tests that you can run assuming dependent services are launched with the above compose configuration:

RUST_LOG=info cargo test -p integration-tests -- --nocapture

APIs

To run the API processes, you can use cargo:

RUST_LOG=info cargo run -p trust -- vexination api --devmode -p 8081 &
RUST_LOG=info cargo run -p trust -- bombastic api --devmode -p 8082 &
RUST_LOG=info cargo run -p trust -- spog api -p 8083 &
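The commands above background each process with &. If you want to stop them all later from the same shell, one sketch is to record the PIDs as you launch them (the start_bg/stop_all helper names are made up, not part of Trustification):

```shell
# Start each command in the background and remember its PID so that
# all backgrounded processes can be stopped with one call later.
pids=()

start_bg() {
  "$@" &
  pids+=($!)
}

# Example:
#   start_bg cargo run -p trust -- vexination api --devmode -p 8081
#   start_bg cargo run -p trust -- bombastic api --devmode -p 8082
#   start_bg cargo run -p trust -- spog api -p 8083

stop_all() {
  if [ "${#pids[@]}" -gt 0 ]; then
    kill "${pids[@]}" 2>/dev/null || true
  fi
  pids=()
}
```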

Indexing

To run the indexer processes, you can use cargo:

RUST_LOG=info cargo run -p trust -- vexination indexer --devmode &
RUST_LOG=info cargo run -p trust -- bombastic indexer --devmode &

Ingesting VEX

At this point, you can POST and GET VEX documents through the API, addressed by advisory ID. To ingest a VEX document:

curl -X POST --json @vexination/testdata/rhsa-2023_1441.json http://localhost:8081/api/v1/vex

To retrieve the document, use a direct lookup by advisory ID:

curl -X GET "http://localhost:8081/api/v1/vex?advisory=RHSA-2023:1441"

You can also crawl Red Hat security data using the walker, which will feed the S3 storage with data:

RUST_LOG=info cargo run -p trust -- vexination walker --devmode --source https://www.redhat.com/.well-known/csaf/provider-metadata.json -3

If you have a local copy of the data, you can also run:

RUST_LOG=info cargo run -p trust -- vexination walker --devmode -3 --source file:///path/to/copy

Ingesting SBOMs

At this point, you can POST and GET SBOMs through the API, using a unique identifier as the id. To ingest a small-ish SBOM:

curl --json @bombastic/testdata/my-sbom.json http://localhost:8082/api/v1/sbom?id=my-sbom

For large SBOMs, you may use a "chunked" Transfer-Encoding:

curl -H "transfer-encoding: chunked" --json @bombastic/testdata/ubi9-sbom.json http://localhost:8082/api/v1/sbom?id=ubi9

You can also post compressed SBOMs using the Content-Encoding header, though the Content-Type header should always be application/json (as is implied by the --json option above).

Both zstd and bzip2 encodings are supported:

curl -H "transfer-encoding: chunked" \
     -H "content-encoding: bzip2" \
     -H "content-type: application/json" \
     -T openshift-4.13.json.bz2 \
     http://localhost:8082/api/v1/sbom?id=openshift-4.13
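A zstd variant of the same upload might look like the following. This is a sketch, assuming the zstd CLI is installed; the inline sample file stands in for a real SBOM:

```shell
# A tiny stand-in SBOM so the example is self-contained; substitute
# your real SBOM file here.
printf '{"spdxVersion":"SPDX-2.3","name":"example"}' > my-sbom.json

# Compress with zstd (-k keeps the original, -f overwrites old output).
zstd -q -f -k my-sbom.json -o my-sbom.json.zst

# Upload; the server decompresses based on the content-encoding header.
curl --max-time 5 \
     -H "transfer-encoding: chunked" \
     -H "content-encoding: zstd" \
     -H "content-type: application/json" \
     -T my-sbom.json.zst \
     "http://localhost:8082/api/v1/sbom?id=my-sbom" \
  || echo "upload failed: is the bombastic API running?"
```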

You can also crawl Red Hat security data using the walker, which will push data through bombastic:

bombastic/walker/walker.sh http://localhost:8082

Example of importing an SBOM generated by syft, uploading it with HTTPie's http command:

REGISTRY=registry.k8s.io/coredns
IMAGE=coredns
TAG=v1.9.3

podman pull $REGISTRY/$IMAGE:$TAG
DIGEST=$(podman images $REGISTRY/$IMAGE:$TAG --digests '--format={{.Id}}')
PURL=pkg:oci/$IMAGE@sha256:$DIGEST
syft -q -o spdx-json --name $IMAGE $REGISTRY/$IMAGE:$TAG | http --json POST http://localhost:8082/api/v1/sbom purl==$PURL sha256==$DIGEST

Or when pulling by digest:

REGISTRY=docker.io/bitnami
IMAGE=postgresql
DIGEST=e6d322cf36ff6b5e2bb13d71c816dc60f1565ff093cc220064dba08c4b057275

PURL=pkg:oci/$IMAGE@sha256:$DIGEST
syft -q -o spdx-json --name $IMAGE $REGISTRY/$IMAGE@sha256:$DIGEST | http --json POST http://localhost:8082/api/v1/sbom purl==$PURL sha256==$DIGEST

To retrieve the data, use a direct lookup by id; note that an id containing special characters (such as a purl) must be URL-encoded:

curl "http://localhost:8082/api/v1/sbom?id=pkg%3Amaven%2Fio.seedwing%2Fseedwing-java-example%401.0.0-SNAPSHOT%3Ftype%3Djar"
curl -o ubi9-sbom.json "http://localhost:8082/api/v1/sbom?id=ubi9"
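The id in the first lookup above is a percent-encoded purl. If you need to build such an id yourself, here is a small illustrative helper in plain bash (the urlencode name is made up; tools like jq -rR @uri do the same job):

```shell
# Percent-encode a string for use as a URL query value. Handles
# single-byte (ASCII) characters only; illustrative, not exhaustive.
urlencode() {
  local s=$1 out= c i
  for ((i = 0; i < ${#s}; i++)); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;           # unreserved: copy as-is
      *) out+=$(printf '%%%02X' "'$c") ;;   # everything else: %XX
    esac
  done
  printf '%s' "$out"
}

PURL="pkg:oci/coredns@sha256:abc123"
echo "id=$(urlencode "$PURL")"
# -> id=pkg%3Aoci%2Fcoredns%40sha256%3Aabc123
```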

The indexer will automatically sync the index to the S3 bucket, while the API will periodically retrieve the index from S3. Therefore, there may be a delay between storing the entry and it being indexed.
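Given that delay, a small polling helper can confirm when an entry becomes searchable. A sketch using the search endpoint shown below (the 5-second interval and 12 tries are arbitrary choices):

```shell
# Poll the bombastic search endpoint until a query returns a hit,
# since the API only picks up index updates periodically.
wait_indexed() {
  local query=$1 tries=${2:-12}
  while [ "$tries" -gt 0 ]; do
    # grep -q succeeds as soon as the term shows up in the response
    if curl -s "http://localhost:8082/api/v1/sbom/search?q=$query" | grep -q "$query"; then
      echo "indexed: $query"
      return 0
    fi
    sleep 5
    tries=$((tries - 1))
  done
  echo "timed out waiting for: $query" >&2
  return 1
}

# Example: wait_indexed ubi9
```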

Searching

You can search all the data using the bombastic-api or vexination-api endpoints:

curl "http://localhost:8082/api/v1/sbom/search?q=openssl"
curl "http://localhost:8081/api/v1/vex/search?q=openssl"

Working with local images

If you need to build an image locally, you can do so by running:

docker build -f trust/Containerfile -t trust:latest .

Then, you can use it like so:

TRUST_IMAGE=trust:latest docker-compose -f compose.yaml -f compose-trustification.yaml up --force-recreate