chore: add AWS deployment release workflow #441

Conversation
Walkthrough

Adds a composite GitHub Action for AWS OIDC role assume-chaining; updates a prepare action to install libsodium, configure PNPM and Node.js, and build a plugin example; and updates Dockerfile.production to copy config assets and add a conditional launchtube inclusion flow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor GH as GitHub Actions Job
    participant OIDC as configure-aws-credentials (OIDC step)
    participant STS as AWS STS
    participant CHAIN as configure-aws-credentials (Target step)
    GH->>OIDC: assume `role-for-oidc` (region=us-east-1, duration=900)
    OIDC->>STS: AssumeRoleWithWebIdentity
    STS-->>OIDC: temporary creds (AK/SK/Token)
    OIDC-->>GH: export temp creds to env
    GH->>CHAIN: assume `role-to-assume` (role-chaining=true, region/duration inputs) using temp creds
    CHAIN->>STS: AssumeRole (chained)
    STS-->>CHAIN: target role creds
    CHAIN-->>GH: configure AWS env for downstream steps
```
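For orientation, a minimal sketch of the two-step chain the diagram describes. This is an assumption-laden illustration, not the PR's actual file: the input names mirror the diagram, and the unpinned `@v4` refs are shorthand for the commit-pinned versions the workflow uses.

```yaml
# Hypothetical composite action: OIDC bootstrap role, then chained target role.
name: aws-oidc-role-chain
description: Assume an OIDC role, then chain into the deployment target role
inputs:
  role-for-oidc:
    description: Role assumed via AssumeRoleWithWebIdentity
    required: true
  role-to-assume:
    description: Target role assumed via chained AssumeRole
    required: true
  aws-region:
    description: Region exported for downstream steps
    required: false
    default: us-east-1
runs:
  using: composite
  steps:
    - name: assume oidc role
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-region: ${{ inputs.aws-region }}
        role-to-assume: ${{ inputs.role-for-oidc }}
        role-duration-seconds: 900
    - name: assume target role
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-region: ${{ inputs.aws-region }}
        role-chaining: true
        role-to-assume: ${{ inputs.role-to-assume }}
```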
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Warning: Review ran into problems.
🔥 Problems: Git: Failed to clone repository.
📜 Recent review details — Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
Actionable comments posted: 6
🧹 Nitpick comments (7)
.github/actions/prepare/action.yml (4)
21-23: Harden apt install (non-interactive, minimal, clean apt lists). Prevents hangs and reduces runner image bloat.
```diff
-      - name: Install libsodium
-        run: sudo apt-get update && sudo apt-get install -y libsodium-dev
+      - name: Install libsodium
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends libsodium-dev
+          sudo rm -rf /var/lib/apt/lists/*
         shell: bash
```
29-33: PNPM setup looks fine. Consider enabling Corepack instead of pinning PNPM via the action. If you prefer Corepack, enable it and pin via `packageManager` in package.json for provenance.
```diff
-      - uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
-        with:
-          run_install: false
-          version: 10
+      - name: Enable Corepack
+        run: corepack enable
+        shell: bash
```
39-45: Make directory hopping robust; fail fast. Use pushd/popd and strict bash flags to avoid remaining in the wrong directory on failure.
```diff
-      - name: Build launchtube plugin example
-        run: |
-          cd examples/launchtube-plugin-example/launchtube
-          pnpm install
-          pnpm run build
-          cd ../..
+      - name: Build launchtube plugin example
+        run: |
+          set -euo pipefail
+          pushd examples/launchtube-plugin-example/launchtube
+          pnpm install
+          pnpm run build
+          popd
         shell: bash
```
2-3: Action naming vs scope. The action is named “Cache” but performs broader preparation (Rust toolchain, libs, Node/PNPM, and a build). Consider renaming it to better reflect its purpose, aiding discoverability.
.github/workflows/ecs-deployment.yml (3)
15-20: Tighten job permissions (least privilege).
`packages: write`, `attestations: write`, and `security-events: write` don’t appear to be used in build-and-push.

```diff
 permissions:
   id-token: write
   contents: read
-  packages: write
-  attestations: write
-  security-events: write
```
118-122: Consider blocking unexpected egress in deploy. If feasible, switch harden-runner to `egress-policy: block` with an explicit allowlist for AWS endpoints to reduce supply-chain risk; a sketch follows.
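For reference, a hedged sketch of what a blocked-egress configuration could look like with step-security/harden-runner. The endpoint list is illustrative and would need to be derived from an audit-mode run; the unpinned `@v2` ref is shorthand.

```yaml
      - name: Harden runner (block egress)
        uses: step-security/harden-runner@v2
        with:
          egress-policy: block
          # Illustrative allowlist; derive the real set from an audit-mode run.
          allowed-endpoints: >
            github.com:443
            sts.us-east-1.amazonaws.com:443
            api.ecr.us-east-1.amazonaws.com:443
            ecs.us-east-1.amazonaws.com:443
```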
79-81: Pin Buildx action version comment (minor). You’ve pinned to a commit, which is good. Optionally add a version comment for clarity, consistent with other steps.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- `.github/actions/oidc/action.yaml` (1 hunks)
- `.github/actions/prepare/action.yml` (1 hunks)
- `.github/workflows/ecs-deployment.yml` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/ecs-deployment.yml
45-45: input "token" is not defined in action "Cache" defined at "./.github/actions/prepare". available inputs are "components"
(action)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: Analyze (rust)
- GitHub Check: semgrep/ci
🔇 Additional comments (2)
.github/actions/prepare/action.yml (1)
34-37: Pin rust-toolchain action via commit and toolchain input. The tag `1.86.0` doesn’t exist in dtolnay/rust-toolchain, so you must pin to a commit (or `@stable`) and pass `toolchain: 1.86.0`. Update `.github/actions/prepare/action.yml` accordingly:

```diff
-      - name: setup rust tool chain
-        uses: dtolnay/rust-toolchain@1.86.0 # v1.86.0
+      - name: setup rust toolchain
+        uses: dtolnay/rust-toolchain@<commit-sha-or-stable>
         with:
+          toolchain: 1.86.0
           components: ${{ (inputs.components != '') && format('{0}, rustfmt, clippy', inputs.components) || 'rustfmt, clippy' }}
```

Likely an incorrect or invalid review comment.
.github/workflows/ecs-deployment.yml (1)
109-114: Cross-account flow: verify the ECS task can pull from the ECR account. Ensure appropriate ECR repository permissions and ECS task execution role permissions for cross-account image pulls; a sketch of the repository policy follows.
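For illustration, a cross-account pull typically requires an ECR repository policy along these lines. The account ID and role name are placeholders, not values from this PR:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:role/ecsTaskExecutionRole"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
```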
```yaml
          aws-region: us-east-1
          role-to-assume: ${{ inputs.role-for-oidc }}
          role-session-name: ${{ inputs.role-session-name }}
          role-duration-seconds: 900
```
Don’t hardcode region; use the input across both steps.
Using us-east-1 here while allowing a region input elsewhere causes inconsistent AWS_REGION in jobs.
```diff
-          aws-region: us-east-1
+          aws-region: ${{ inputs.aws-region }}
```

📝 Committable suggestion
```diff
-          aws-region: us-east-1
+          aws-region: ${{ inputs.aws-region }}
           role-to-assume: ${{ inputs.role-for-oidc }}
           role-session-name: ${{ inputs.role-session-name }}
           role-duration-seconds: 900
```
🤖 Prompt for AI Agents
.github/actions/oidc/action.yaml around lines 30 to 33: the action hardcodes
aws-region as "us-east-1" causing inconsistent AWS_REGION; change the aws-region
field to use the action input (e.g. aws-region: ${{ inputs.aws-region }}) so the
provided region input is used here as well, and verify the corresponding input
is defined in the action metadata and consumed consistently in any other steps
that expect the same region.
```yaml
        uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
        id: assume-target-role
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
          aws-region: ${{ inputs.aws-region }}
          role-chaining: true
          role-to-assume: ${{ inputs.role-to-assume }}
          role-session-name: ${{ inputs.role-session-name }}
          role-duration-seconds: ${{ inputs.role-duration-seconds }}
```
🛠️ Refactor suggestion
Mask account IDs and align duration defaults.
Add masking to reduce info disclosure in logs; optionally make the first-step duration configurable for consistency.
```diff
   - name: assume oidc role
     uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
     with:
-      aws-region: us-east-1
+      aws-region: ${{ inputs.aws-region }}
       role-to-assume: ${{ inputs.role-for-oidc }}
       role-session-name: ${{ inputs.role-session-name }}
-      role-duration-seconds: 900
+      role-duration-seconds: 900
+      mask-aws-account-id: true
   - name: assume target role
     uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
     id: assume-target-role
     with:
       aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
       aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
       aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
       aws-region: ${{ inputs.aws-region }}
       role-chaining: true
       role-to-assume: ${{ inputs.role-to-assume }}
       role-session-name: ${{ inputs.role-session-name }}
       role-duration-seconds: ${{ inputs.role-duration-seconds }}
+      mask-aws-account-id: true
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
.github/actions/oidc/action.yaml around lines 35 to 45: add masking of sensitive
account/role identifiers prior to calling aws-actions/configure-aws-credentials
by invoking the GitHub Actions mask command (echo ::add-mask::${{
inputs.ACCOUNT_ID }} and any other account/role inputs you expose) so those
values never appear in logs, and expose a configurable input for the first-step
duration (e.g., first-step-duration-seconds) with a sensible default and wire
that input into the initial role-duration-seconds parameter (keeping
role-chaining behavior intact) so the initial assume duration can be
aligned/configured.
.github/workflows/ecs-deployment.yml (outdated)
```yaml
      - name: Create launchtube fund
        run: |
          cargo run --example create_key -- \
            --password "$KEYSTORE_PASSPHRASE_FUND" \
            --output-dir config/keys \
            --filename launchtube-fund.json
        env:
          KEYSTORE_PASSPHRASE_FUND: ${{ secrets.KEYSTORE_PASSPHRASE_FUND }}

      - name: Create launchtube sequence account 001
        run: |
          cargo run --example create_key -- \
            --password "$KEYSTORE_PASSPHRASE_SEQ_001" \
            --output-dir config/keys \
            --filename launchtube-seq-001.json
        env:
          KEYSTORE_PASSPHRASE_SEQ_001: ${{ secrets.KEYSTORE_PASSPHRASE_SEQ_001 }}

      - name: Create launchtube sequence account 002
        run: |
          cargo run --example create_key -- \
            --password "$KEYSTORE_PASSPHRASE_SEQ_002" \
            --output-dir config/keys \
            --filename launchtube-seq-002.json
        env:
          KEYSTORE_PASSPHRASE_SEQ_002: ${{ secrets.KEYSTORE_PASSPHRASE_SEQ_002 }}
```
💡 Verification agent
🧩 Analysis chain
Secret handling: ensure generated keys don’t end up in the Docker image or artifacts.
Confirm .dockerignore and Dockerfile exclude config/keys/**. Avoid passing passwords via CLI if the example supports env/stdin.
If needed, add to .dockerignore:

```diff
+config/keys/
```

Or adjust the Dockerfile to not COPY that path.
🏁 Script executed:

```bash
#!/usr/bin/env bash
echo "=== Checking .dockerignore for config/keys/ ==="
if [[ -f .dockerignore ]]; then
  grep -n 'config/keys/' .dockerignore || echo "No 'config/keys/' entry in .dockerignore"
else
  echo ".dockerignore not found"
fi
echo
echo "=== Locating Dockerfile(s) ==="
mapfile -t dockerfiles < <(find . -type f -name Dockerfile)
if [[ ${#dockerfiles[@]} -eq 0 ]]; then
  echo "No Dockerfile found"
  exit 0
fi
printf "%s\n" "${dockerfiles[@]}"
echo
echo "=== Inspecting Dockerfile(s) for config/keys references and context COPY ==="
for df in "${dockerfiles[@]}"; do
  echo "-> $df"
  grep -n 'config/keys/' "$df" || echo "  No direct 'config/keys/' reference"
  grep -n '^COPY \.' "$df" || echo "  No 'COPY .' command (build context may include config/keys/)"
  echo
done
```

Length of output: 586
🏁 Script executed:

```bash
#!/usr/bin/env bash
echo "=== Finding create_key example files ==="
mapfile -t files < <(find . -type f -iname '*create_key*.rs')
if [[ ${#files[@]} -eq 0 ]]; then
  echo "No create_key example files found"
else
  for f in "${files[@]}"; do
    echo "-> $f"
    sed -n '1,100p' "$f"
    echo
  done
fi
echo "=== Searching for STDIN usage in example files ==="
grep -R -n 'std::io::stdin' . || echo "No stdin usage found"
echo "=== Searching for Arg::new + .env usage in example files ==="
grep -R -n "Arg::new" . | grep -i "env" || echo "No Arg::env usage found"
```

Length of output: 772
Add missing .dockerignore entry and exclude secrets from Docker builds

- `.dockerignore`: add `config/keys/` (currently absent)
- Confirm all Dockerfile(s) and build contexts do not copy `config/keys/**` into images
- Where possible, load passwords from environment variables or stdin rather than passing them as CLI arguments (see the sketch after this list)
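A sketch of both options, under two assumptions: (1) the `create_key` example were extended with a hypothetical `--password-stdin` flag (today it only takes `--password`), and (2) images are built with BuildKit so `--secret` is available.

```bash
# Pipe the passphrase via stdin instead of argv (hypothetical flag):
printf '%s' "$KEYSTORE_PASSPHRASE_FUND" | \
  cargo run --example create_key -- \
    --password-stdin \
    --output-dir config/keys \
    --filename launchtube-fund.json

# Expose a passphrase to a single build step without baking it into a layer:
docker buildx build \
  --secret id=keystore_passphrase,env=KEYSTORE_PASSPHRASE_FUND \
  -f Dockerfile.production .
```

Inside the Dockerfile, such a secret would be consumed with `RUN --mount=type=secret,id=keystore_passphrase ...`, so it never persists in an image layer.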
🤖 Prompt for AI Agents
In .github/workflows/ecs-deployment.yml around lines 47–73, the workflow creates
key files under config/keys and the repo lacks a .dockerignore entry to exclude
that directory; add "config/keys/" to .dockerignore, audit all Dockerfiles/build
contexts to remove any COPY or ADD of config/keys/** (or change paths so build
context excludes config/keys), and stop exposing secrets on CLI by switching the
cargo invocation to read passwords from environment variables or STDIN (e.g.,
use a password-file or pipe the secret into the command) or use docker/buildkit
secrets when building images so the passphrases are never written into images or
checked into source.
.github/workflows/ecs-deployment.yml (outdated)
```yaml
      - name: Set up AWS credentials via OIDC and role chaining
        uses: ./.github/actions/oidc
        with:
          role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
          role-to-assume: ${{ env.ROLE_TO_ASSUME }}
```
🛠️ Refactor suggestion
Also pass region in deploy job to avoid AWS_REGION being overridden.
The composite sets AWS_REGION; be explicit to keep it aligned with job env.
```diff
       - name: Set up AWS credentials via OIDC and role chaining
         uses: ./.github/actions/oidc
         with:
           role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
           role-to-assume: ${{ env.ROLE_TO_ASSUME }}
+          aws-region: ${{ env.AWS_REGION }}
```

📝 Committable suggestion
```diff
       - name: Set up AWS credentials via OIDC and role chaining
         uses: ./.github/actions/oidc
         with:
           role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
           role-to-assume: ${{ env.ROLE_TO_ASSUME }}
+          aws-region: ${{ env.AWS_REGION }}
```
🤖 Prompt for AI Agents
.github/workflows/ecs-deployment.yml around lines 126 to 131: the composite OIDC
action sets AWS_REGION which can override the job-level AWS_REGION; update the
step inputs to explicitly pass the region (e.g. add an input like aws-region or
region set to ${{ env.AWS_REGION }} depending on the composite’s input name) so
the deploy job keeps the job-level region value instead of being overridden by
the composite.
Actionable comments posted: 4
🧹 Nitpick comments (3)
Dockerfile.production (3)
7-10: Remove unnecessary `apk update`. It creates a bigger layer and is redundant with `--no-cache`.

```diff
-RUN apk update && apk --no-cache add \
+RUN apk add --no-cache \
     openssl-dev \
     perl \
     libsodium-dev
```
59-64: Leverage pnpm cache layers for faster builds. Copy lockfile/manifests first, prefetch, then copy sources and install offline to maximize cache hits.

```diff
-# Copy plugins and install
-COPY --chown=nonroot:nonroot ./plugins /app/plugins
-USER nonroot
-WORKDIR /app/plugins
-RUN pnpm install --frozen-lockfile
+# Prime store from lockfile (adjust paths if workspace root holds pnpm-lock.yaml)
+COPY --chown=nonroot:nonroot ./plugins/pnpm-lock.yaml /app/plugins/pnpm-lock.yaml
+COPY --chown=nonroot:nonroot ./plugins/package.json /app/plugins/package.json
+USER nonroot
+WORKDIR /app/plugins
+RUN pnpm fetch --frozen-lockfile
+# Now add sources and install offline
+COPY --chown=nonroot:nonroot ./plugins /app/plugins
+RUN pnpm install --frozen-lockfile --offline
```
42-42: Pin final base image by digest for supply-chain integrity. The builder is pinned; do the same for `wolfi-base` to avoid drifting runtimes.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
`Dockerfile.production` (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: Analyze (actions)
- GitHub Check: Analyze (rust)
- GitHub Check: semgrep/ci
🔇 Additional comments (1)
Dockerfile.production (1)
71-76: Non-root runtime and explicit ENTRYPOINT look good. Good hardening; keep it.
Dockerfile.production (outdated)
```dockerfile
COPY --from=builder /out/libssl.so.3 /usr/lib/libssl.so.3
COPY --from=builder /out/libcrypto.so.3 /usr/lib/libcrypto.so.3
```
🛠️ Refactor suggestion
Don’t copy shared libs from builder; install runtime packages in final image.
Copying .so files is brittle and can miss deps. Install the runtime packages instead and remove the COPYs.
```diff
-# If present, include these shared libs (harmless if not needed)
-COPY --from=builder /out/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=builder /out/libcrypto.so.3 /usr/lib/libcrypto.so.3
+# Runtime libs for TLS/crypto (Wolfi names)
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
```

And keep the Node install:

```diff
-RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
+RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
     npm install -g pnpm ts-node typescript
```

Also applies to: 56-57
🤖 Prompt for AI Agents
In Dockerfile.production around lines 49-50 (and similarly lines 56-57), do not
COPY libssl.so.3 and libcrypto.so.3 from the builder image; instead remove those
COPY lines and install the appropriate runtime packages (e.g., the
distribution's openssl or libssl runtime packages) into the final image via the
package manager so all dependencies are resolved correctly; ensure you retain
the Node installation steps already present in the Dockerfile and replace the
manual .so copies with the package install commands in the stage that builds the
final runtime image.
Dockerfile.production (outdated)
```dockerfile
RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
    npm install -g pnpm ts-node typescript
```
💡 Verification agent
🧩 Analysis chain
Verify Node version constraint is resolvable in Wolfi.
apk add "nodejs~${NODE_VERSION}" may not match Wolfi’s versioning. Confirm availability or relax the constraint (e.g., nodejs-20).
🌐 Web query:
What nodejs package names/versions are currently available in Chainguard/Wolfi, and does it support installing Node 20.19 via 'apk add nodejs~20.19'?
💡 Result:
Short answer: Wolfi provides per-major Node packages (nodejs-16, nodejs-18, nodejs-19, nodejs-20, etc.) and there are Wolfi APK builds for many specific 20.x releases — but I couldn't find evidence that a Wolfi package for Node 20.19 exists in the public indexes. So running apk add nodejs~20.19 will fail unless you have a repository that contains a nodejs 20.19 APK.
Evidence / pointers:
- Chainguard/Wolfi docs explain using apk search inside the wolfi-base image to see available packages. Use that to confirm in your environment. (edu.chainguard.dev)
- Public vulnerability/package indexes show Wolfi APKs for nodejs versions such as 16.20.x, 18.19.x, 19.9.x and various 20.x builds (examples: 20.3.1, 20.11.0, 20.12.2, 20.13/20.15 appear in different entries) but do not show a nodejs 20.19 Wolfi package in the indexes I checked. (examples: Vulners/OSV listings). (vulners.com, osv.dev)
How to verify locally (one command): run an interactive wolfi-base container, update the repo, and search:

```bash
docker run -it --rm cgr.dev/chainguard/wolfi-base sh -c "apk update && apk search nodejs"
```

This will show the exact package names/versions available to your environment. (docs reference). (edu.chainguard.dev)
If you need Node 20.19 and Wolfi doesn't publish it:
- Build a Wolfi APK with melange or pin a nearby Wolfi nodejs-20 package and rebuild with your upstream Node 20.19 sources (Chainguard docs cover building Wolfi packages). (edu.chainguard.dev)
If you want, I can run a targeted search of the Wolfi package index for "nodejs 20.19" and return exact APK names/paths.
Citations:
- 1: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-node/?utm_source=openai
- 2: https://vulners.com/wolfi/WOLFI%3ACVE-2024-24806?utm_source=openai
- 3: https://osv.dev/vulnerability/CGA-rr3h-9vqx-rqmp?utm_source=openai
- 4: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-node/?utm_source=openai
- 5: https://edu.chainguard.dev/open-source/wolfi/building-a-wolfi-package?utm_source=openai
Use per-major Wolfi Node.js package or supply a 20.19 APK
apk add "nodejs~${NODE_VERSION}" won’t match Wolfi’s naming (it publishes nodejs-16, nodejs-18, nodejs-19, nodejs-20, etc.). Replace with:
```dockerfile
RUN apk add --no-cache nodejs-20 npm && \
    npm install -g pnpm ts-node typescript
```

Or set `${NODE_VERSION}` to `20`. To list exactly what's available, run:

```bash
docker run -it --rm cgr.dev/chainguard/wolfi-base \
  sh -c "apk update && apk search nodejs"
```

If you must pin to 20.19, vendor or build a Wolfi APK for that release via melange.
🤖 Prompt for AI Agents
In Dockerfile.production around lines 56-57, the apk add line uses
"nodejs~${NODE_VERSION}" which doesn't match Wolfi/Wolfi-based package names
(they use nodejs-<major>), so replace that tokenized package with a per-major
package name (for example use nodejs-20 and keep npm) or set NODE_VERSION to
"20"; if you need exact 20.19 you must vendor/build a Wolfi APK for that release
via melange and install it instead; keep the subsequent global npm installs
unchanged.
Actionable comments posted: 0
♻️ Duplicate comments (4)
Dockerfile.production (4)
6-6: Good fix: build under TARGETPLATFORM. Resolves cross-arch OpenSSL/libsodium link issues noted earlier.
19-28: Good fix: correct BuildKit cache mount placement. RUN flags are now on the instruction; cache works as intended.
40-41: Don’t COPY shared libs; install runtime packages in the final image. Copying .so files is brittle and can miss deps. Install runtimes instead.

```diff
-COPY --from=builder /out/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=builder /out/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
```
45-47: Wolfi apk selector will likely fail for “nodejs~20.19”; use a per‑major package. Wolfi publishes `nodejs-<major>` pkgs (e.g., nodejs-20). Pinning “~20.19” usually won’t resolve.

```diff
-ARG NODE_VERSION=20.19
-RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
+ARG NODE_MAJOR=20
+RUN apk add --no-cache nodejs-${NODE_MAJOR} npm && \
     npm install -g pnpm ts-node typescript
```

If you must pin 20.19 exactly, vendor/provide the APK in a custom repo.
🧹 Nitpick comments (5)
Dockerfile.production (5)
49-54: Move plugin install to a separate “plugins-builder” stage; copy built artifacts. Reduces final image size/attack surface and avoids shipping dev tooling.
Example pattern:
```dockerfile
FROM cgr.dev/chainguard/wolfi-base AS plugins-builder
WORKDIR /plugins
RUN apk add --no-cache nodejs-20 npm && npm i -g pnpm
COPY ./plugins .
RUN pnpm i --frozen-lockfile && pnpm -r build
# later in final stage:
# COPY --from=plugins-builder /plugins/dist /app/plugins/dist
```
53-53: Cache pnpm store for faster rebuilds.

```diff
-RUN pnpm install --frozen-lockfile
+RUN --mount=type=cache,target=/home/nonroot/.pnpm-store \
+    pnpm config set store-dir /home/nonroot/.pnpm-store && \
+    pnpm install --frozen-lockfile
```
9-12: Nit: avoid separate apk update; use `--no-cache` add only. Smaller layers and deterministic installs.

```diff
-RUN apk update && apk --no-cache add \
+RUN apk add --no-cache \
     openssl-dev \
     perl \
     libsodium-dev
```
61-63: Nit: chmod likely unnecessary; COPY preserves the exec bit from the builder. Drop to reduce layer churn.

```diff
-USER root
-RUN chmod +x /app/openzeppelin-relayer
-USER nonroot
+# binary should already be executable
```
33-33: Pin final base image by digest for reproducibility.

```diff
-FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base@sha256:<digest>
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
`Dockerfile.production` (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: Analyze (rust)
- GitHub Check: semgrep/ci
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Dockerfile.production (1)
2-16: Build for TARGETPLATFORM and produce a release binary; fix cache mount path.
- Using BUILDPLATFORM for the Rust builder will break when building multi-arch (OpenSSL/libsodium mismatch). Build on TARGETPLATFORM.
- `cargo install` with `--debug` ships an unoptimized binary; use a release build.
- Cache mount points should match the actual workspace path (`/usr/app/target`).

Apply:

```diff
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
@@
-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/app/target \
+    cargo build --release --locked
+RUN install -Dm755 target/release/openzeppelin-relayer /usr/app/bin/openzeppelin-relayer
```
♻️ Duplicate comments (1)
Dockerfile.production (1)
23-24: Don’t copy shared libs from the builder; install runtime packages in the final image. Copying `.so` files is brittle and may miss transitive deps. Install the runtime packages instead.

```diff
-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
```
🧹 Nitpick comments (4)
Dockerfile.production (4)
21-21: Remove stale commented COPY or document the runtime volume approach. Leaving this commented line is confusing given the bind/copy block below. Clean it up or add a brief comment explaining that config is supplied via runtime volumes.
49-50: Unused build arg. `TARGETARCH` is declared but never used in this stage. Remove it or wire it into logic if intended.
62-68: Permission/ownership check for plugin install. You `COPY --chown=nonroot:nonroot` and then run `pnpm install` as nonroot (good). Consider `--frozen-lockfile --prefer-offline` and adding a cache mount to speed builds.

```diff
-RUN pnpm install --frozen-lockfile
+RUN --mount=type=cache,target=/home/nonroot/.pnpm-store \
+    pnpm install --frozen-lockfile --prefer-offline
```
75-79: Non-root entrypoint confirmed; consider explicit USER. The image remains as `nonroot`. For clarity and defense-in-depth, set `USER nonroot` near the end (before ENTRYPOINT) even if already set earlier.

```diff
 WORKDIR /app
+USER nonroot
 ENTRYPOINT ["/app/openzeppelin-relayer"]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
`Dockerfile.production` (1 hunks)
🧰 Additional context used
🪛 Hadolint (2.12.0)
Dockerfile.production
[error] 30-30: unexpected "required=fal"
expecting "from=", "readonly", "readonly=", "readwrite", "readwrite=", "ro", "ro=", "rw", "rw=", source=, or target=
(DL1000)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: Analyze (rust)
- GitHub Check: semgrep/ci
🔇 Additional comments (1)
Dockerfile.production (1)
50-57: Ignore the suggested `nodejs-22` package rename; Alpine (and Wolfi) ship a single `nodejs` package whose versions you select via fuzzy matching. To use Node 22, bump your ARG to `22` and install with:

```dockerfile
ARG NODE_VERSION=22
RUN apk add --no-cache nodejs=~${NODE_VERSION} npm
```

This aligns with CI's 22.x runtime without inventing a non-existent `nodejs-22` package.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
🧹 Nitpick comments (1)
Dockerfile.production (1)
21-21: Remove dead commented COPY or replace it with a proper, documented config-ingestion mechanism. The commented-out COPY is noise. Either delete it or implement the intended behavior (e.g., use a BuildKit bind mount in a RUN step to copy configs into /app/config during build).
Example (if you choose to implement):
```diff
-# COPY --chown=nonroot:nonroot ./config /app/config
+# See BAKE_CONFIGS block below for config ingestion via RUN.
```
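If implementing, a minimal sketch of the bind-mount approach; the `config` source path assumes this repo's layout, and the step is presumed to run while the stage is still root:

```dockerfile
# Bake configs at build time from a read-only bind mount instead of a COPY;
# nothing from the mount persists in the layer except what cp writes.
RUN --mount=type=bind,source=config,target=/mnt/config,ro \
    mkdir -p /app/config && \
    cp -a /mnt/config/. /app/config/ && \
    chown -R nonroot:nonroot /app/config
```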
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
Dockerfile.production(2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Analyze (rust)
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: semgrep/ci
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
Actionable comments posted: 1
♻️ Duplicate comments (3)
Dockerfile.production (3)
2-2: Use TARGETPLATFORM for both stages; BUILDPLATFORM breaks multi-arch builds. Builder and final stages pinned to BUILDPLATFORM will produce wrong-arch artifacts when using buildx. Build each stage for the target architecture.

```diff
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
 ...
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base
```

Also applies to: 18-18
24-25: Don’t copy OpenSSL shared libs from the builder; install runtime packages in the final image. Copying `.so`'s is brittle and can miss deps (e.g., libsodium). Install runtime packages instead.

```diff
-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
```
32-34: Wolfi Node package pin likely invalid; use per-major `nodejs-20` (or verify availability). `nodejs=~20.19` rarely resolves in Wolfi; prefer `nodejs-20`. Keep npm if separate.

```diff
-ARG NODE_VERSION=20.19
-RUN apk add --no-cache nodejs=~${NODE_VERSION} npm && \
+ARG NODE_VERSION=20
+RUN apk add --no-cache nodejs-${NODE_VERSION} npm && \
     npm install -g pnpm ts-node typescript
```

If you must pin 20.19 exactly, provide a Wolfi APK in your repos and adjust `apk add` accordingly.
🧹 Nitpick comments (3)
Dockerfile.production (3)
27-29: Drop unused /tmp/src copy; BAKE uses a bind mount. You copy /usr/app to /tmp/src, but bake reads from /mnt/src via the mount; /tmp/src is never used except for cleanup.

```diff
-# Bring source tree from base so we can optionally bake files
-COPY --from=base /usr/app /tmp/src
 ...
-# Clean up build-only sources
-USER root
-RUN rm -rf /tmp/src && apk del apk-tools wolfi-base || true
-USER nonroot
+# (No /tmp/src copy needed; retain USER switch only if you still need to run root steps later)
```

Also applies to: 64-67
39-55: Minor: wrap the bake script with `sh -euxc` for stricter error handling. Your `set -eux; \` is fine; `sh -euxc` is slightly cleaner and avoids line-continuation pitfalls.

```diff
-RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
-    set -eux; \
+RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
+    sh -euxc '\
     if [ "${BAKE_CONFIGS}" = "true" ]; then \
       if [ -d /mnt/src/examples/launchtube-plugin-example/config ]; then \
         mkdir -p /app/config; \
         cp -a /mnt/src/examples/launchtube-plugin-example/config/. /app/config/; \
       fi; \
       if [ -d /mnt/src/config/networks ]; then \
         mkdir -p /app/config/networks; \
         cp -a /mnt/src/config/networks/. /app/config/networks/; \
       fi; \
       if [ -d /mnt/src/examples/launchtube-plugin-example/launchtube ]; then \
         mkdir -p /app/plugins/launchtube; \
         cp -a /mnt/src/examples/launchtube-plugin-example/launchtube/. /app/plugins/launchtube/; \
       fi; \
       chown -R nonroot:nonroot /app/config /app/plugins || true; \
-    fi
+    fi \
+    '
```
66-66: Avoid removing the meta ‘wolfi-base’ package; just drop apk-tools. `apk del wolfi-base` is odd (a meta/uninstall may be a no-op). Keep the surface minimal by removing apk-tools only.

```diff
-RUN rm -rf /tmp/src && apk del apk-tools wolfi-base || true
+RUN apk del apk-tools || true
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
`Dockerfile.production` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Analyze (rust)
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: semgrep/ci
🔇 Additional comments (1)
Dockerfile.production (1)
57-62: Verify overlay semantics: COPY ./plugins after baking may overwrite baked launchtube. Ensure ./plugins doesn’t clobber /app/plugins/launchtube from the bake step or duplicate dependencies. If needed, reorder to COPY before bake (and only bake missing dirs) or COPY with include/exclude patterns; see the sketch below.
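One possible ordering, sketched under the assumption that the bake step can be guarded on directory absence (paths mirror the Dockerfile under review):

```dockerfile
# Copy context-provided plugins first...
COPY --chown=nonroot:nonroot ./plugins /app/plugins
# ...then bake the example plugin only if the context did not already provide it.
RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
    if [ "${BAKE_CONFIGS}" = "true" ] && [ ! -d /app/plugins/launchtube ]; then \
      mkdir -p /app/plugins/launchtube && \
      cp -a /mnt/src/examples/launchtube-plugin-example/launchtube/. /app/plugins/launchtube/; \
    fi
```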
```dockerfile
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo install --root /usr/app --path . --debug --locked
```
🛠️ Refactor suggestion
Don’t ship a debug build; produce an optimized release binary and align cache paths.
Release should be default for production; also cache the cargo git DB and keep target under WORKDIR for better cache hits.
```diff
-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/local/cargo/git \
+    --mount=type=cache,target=/usr/app/target \
+    cargo install --root /usr/app --path . --locked
```

📝 Committable suggestion
```diff
-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/local/cargo/git \
+    --mount=type=cache,target=/usr/app/target \
+    cargo install --root /usr/app --path . --locked
```
🤖 Prompt for AI Agents
In Dockerfile.production around lines 12-14, change the build to produce an
optimized release binary and align cache mounts: remove the --debug flag and use
--release for cargo install, add a cache mount for the cargo git DB (target
/usr/local/cargo/git), and ensure the target cache path is located under the
image WORKDIR (e.g., /app/target) so cache hits are effective.
Actionable comments posted: 0
♻️ Duplicate comments (5)
Dockerfile.production (5)
2-2: Build on TARGETPLATFORM to avoid cross-arch breakage (Rust + native Node deps). Using BUILDPLATFORM will yield wrong-arch binaries/modules on multi-arch builds. Build each stage for TARGETPLATFORM.
Apply:
```diff
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base
```

Also applies to: 18-18
12-16: Ship release binary and align Cargo caches. Remove the debug build and add a git cache plus the correct target path for a better cache hit rate.
Apply:
```diff
-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/local/cargo/git \
+    --mount=type=cache,target=/usr/app/target \
+    sh -euxc 'CARGO_TARGET_DIR=/usr/app/target cargo install --root /usr/app --path . --locked'
```
39-47: Wolfi Node pin is likely invalid; use a per-major package. Replace the apk constraint with Wolfi’s `nodejs-<major>` packages; verify availability.
Apply:
```diff
-ARG NODE_VERSION=20.19
+ARG NODE_VERSION=20
@@
-RUN apk add --no-cache nodejs=~${NODE_VERSION} npm
+RUN apk add --no-cache nodejs-${NODE_VERSION} npm
```

Verify what’s available:

```bash
docker run --rm cgr.dev/chainguard/wolfi-base sh -lc 'apk update && apk search -x "nodejs*"'
```
25-26: Don’t copy .so’s from the builder; install runtime libs in the final image. Copying shared libs is brittle; use packages so transitive deps and security updates apply.
Apply:
```diff
-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
```
28-36: Invalid RUN bind mount option ‘optional’; wrap in a strict shell. BuildKit doesn’t support ‘optional’ for type=bind; this will fail parse/solve. Make the mount required (ensuring the source exists) and harden the script.
Apply:
```diff
-ARG INCLUDE_LAUNCHTUBE=false
+ARG INCLUDE_LAUNCHTUBE=false
@@
-RUN --mount=type=bind,source=./examples/launchtube-plugin-example/launchtube,target=/tmp/launchtube,optional \
-    if [ "${INCLUDE_LAUNCHTUBE}" = "true" ] && [ -d /tmp/launchtube ]; then \
-      mkdir -p /app/plugins/launchtube && \
-      cp -a /tmp/launchtube/. /app/plugins/launchtube/; \
-    else \
-      echo "Skipping launchtube copy"; \
-    fi
+RUN --mount=type=bind,source=examples/launchtube-plugin-example/launchtube,target=/tmp/launchtube,ro \
+    sh -euxc '\
+    if [ "${INCLUDE_LAUNCHTUBE}" = "true" ] && [ -d /tmp/launchtube ]; then \
+      mkdir -p /app/plugins/launchtube; \
+      cp -a /tmp/launchtube/. /app/plugins/launchtube/; \
+    else \
+      echo "Skipping launchtube copy"; \
+    fi \
+    '
```

If you truly need optional sources, switch to a named build context and pass it only when present (it cannot be optional at parse time).
🧹 Nitpick comments (2)
Dockerfile.production (2)
21-23: Redundant COPY of networks directory. Line 23 is subsumed by line 21; drop the duplicate.
Apply:
```diff
 COPY --chown=nonroot:nonroot ./config /app/config
-# COPY --chown=nonroot:nonroot ./examples/launchtube-plugin-example/launchtube /app/plugins/launchtube
-COPY --chown=nonroot:nonroot ./config/networks /app/config/networks
+# COPY --chown=nonroot:nonroot ./examples/launchtube-plugin-example/launchtube /app/plugins/launchtube
```
45-45: Unnecessary PATH override. Node/npm binaries are already on PATH in Wolfi; drop it to reduce noise.
Apply:
```diff
-ENV PATH="/usr/local/bin:$PATH"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`Dockerfile.production` (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: Analyze (rust)
- GitHub Check: semgrep/ci
🔇 Additional comments (1)
Dockerfile.production (1)
53-58: Potential native module arch mismatch during pnpm install. If BUILDPLATFORM ≠ TARGETPLATFORM, native Node addons compiled here will be wrong-arch. This is resolved by switching this stage to TARGETPLATFORM (see the earlier comment).
After changing the stage platform, confirm no prebuilt binaries targeting the build host remain in node_modules.