Conversation

@collins-w (Contributor) commented Aug 29, 2025

Summary

  • Adds AWS build and deployment release workflow

Testing Process

Checklist

  • Add a reference to related issues in the PR description.
  • Add unit tests if applicable.

Summary by CodeRabbit

  • Chores
    • Added a reusable CI action to obtain AWS OIDC credentials and perform chained role assumption for deployments.
    • Updated CI preparation to install libsodium, set up PNPM v10 (install disabled), configure Node.js 22.18.x, and build the Launchtube plugin example.
    • Modified production image build to preload configuration and network assets and add an optional build-time flow to include the Launchtube plugin.

@collins-w collins-w requested review from a team as code owners August 29, 2025 15:39

coderabbitai bot commented Aug 29, 2025

Walkthrough

Adds a composite GitHub Action for AWS OIDC role assume-chaining, updates a prepare action to install libsodium, configure PNPM and Node.js, and build a plugin example, and updates Dockerfile.production to copy config assets and add a conditional launchtube inclusion flow.

Changes

Cohort / File(s) Summary
GitHub Actions: OIDC role chaining
.github/actions/oidc/action.yaml
New composite action "AWS OIDC Credentials via Role Assume Chaining" with inputs role-for-oidc, role-to-assume, role-session-name (default github-actions), role-duration-seconds (default 3600), aws-region (default us-east-1). Runs two sequential aws-actions/configure-aws-credentials steps: 1) assume OIDC role (region=us-east-1, duration=900) to obtain temporary creds; 2) assume target role with role-chaining=true using those temp creds and configurable region/duration. Version-pinned action reference.
GitHub Actions: Prepare environment
.github/actions/prepare/action.yml
Adds installation of libsodium-dev, PNPM setup via pnpm/action-setup (version 10, run_install: false), Node.js setup via actions/setup-node for Node 22.18.0, and a build step that changes into the launchtube plugin example directory, runs pnpm install and pnpm run build, then returns to the repository root.
Docker: config, networks, and conditional launchtube
Dockerfile.production
Adds COPY --chown=nonroot:nonroot ./config /app/config and COPY --chown=nonroot:nonroot ./config/networks /app/config/networks. Introduces ARG INCLUDE_LAUNCHTUBE=false and a RUN block that bind-mounts ./examples/launchtube-plugin-example/launchtube to /mnt/launchtube and conditionally copies it into /app/plugins/launchtube when INCLUDE_LAUNCHTUBE="true" and path exists (prints "✅ Launchtube copied"); otherwise prints a skip message. Commented bake/copy variants remain. Existing runtime copies unchanged.
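
The conditional inclusion described above can be sketched roughly as follows (a reconstruction from this summary, not the exact file contents; the bind-mount path and messages are taken from the description and may differ in the real Dockerfile.production):

```dockerfile
# Sketch of the conditional launchtube inclusion (reconstructed from the
# walkthrough summary; details may differ from the actual file).
ARG INCLUDE_LAUNCHTUBE=false
RUN --mount=type=bind,source=./examples/launchtube-plugin-example/launchtube,target=/mnt/launchtube \
    if [ "$INCLUDE_LAUNCHTUBE" = "true" ] && [ -d /mnt/launchtube ]; then \
        cp -r /mnt/launchtube /app/plugins/launchtube && \
        echo "✅ Launchtube copied"; \
    else \
        echo "Skipping launchtube inclusion"; \
    fi
```

Because the mount is read-only and scoped to the RUN instruction, the example sources never persist in a layer unless explicitly copied.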

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor GH as GitHub Actions Job
  participant OIDC as configure-aws-credentials (OIDC step)
  participant STS as AWS STS
  participant CHAIN as configure-aws-credentials (Target step)

  GH->>OIDC: assume `role-for-oidc` (region=us-east-1, duration=900)
  OIDC->>STS: AssumeRoleWithWebIdentity
  STS-->>OIDC: temporary creds (AK/SK/Token)
  OIDC-->>GH: export temp creds to env

  GH->>CHAIN: assume `role-to-assume` (role-chaining=true, region/duration inputs) using temp creds
  CHAIN->>STS: AssumeRole (chained)
  STS-->>CHAIN: target role creds
  CHAIN-->>GH: configure AWS env for downstream steps
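
The two steps in the diagram map onto a composite action along these lines (a hedged sketch assembled from the walkthrough; the real action pins configure-aws-credentials to a commit SHA rather than a tag):

```yaml
name: AWS OIDC Credentials via Role Assume Chaining
inputs:
  role-for-oidc:         { required: true }
  role-to-assume:        { required: true }
  role-session-name:     { default: github-actions }
  role-duration-seconds: { default: "3600" }
  aws-region:            { default: us-east-1 }
runs:
  using: composite
  steps:
    - name: assume oidc role
      uses: aws-actions/configure-aws-credentials@v4.1.0  # pinned by SHA in the real file
      with:
        aws-region: us-east-1        # hardcoded in the described version
        role-to-assume: ${{ inputs.role-for-oidc }}
        role-session-name: ${{ inputs.role-session-name }}
        role-duration-seconds: 900
    - name: assume target role
      uses: aws-actions/configure-aws-credentials@v4.1.0
      id: assume-target-role
      with:
        aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
        aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
        aws-region: ${{ inputs.aws-region }}
        role-chaining: true
        role-to-assume: ${{ inputs.role-to-assume }}
        role-session-name: ${{ inputs.role-session-name }}
        role-duration-seconds: ${{ inputs.role-duration-seconds }}
```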

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

I hop through CI to fetch a key,
I chain two roles to set creds free.
I nibble libsodium, PNPM in sight,
I tuck configs into the image at night.
A floppy-eared build — hoppity delight! 🥕🐇

Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.


📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 03d882f and 6fb4dfb.

📒 Files selected for processing (1)
  • Dockerfile.production (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • Dockerfile.production
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - openzeppelin-relayer
  • GitHub Check: Header rules - openzeppelin-relayer
  • GitHub Check: Pages changed - openzeppelin-relayer
  • GitHub Check: Analyze (rust)
✨ Finishing Touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch aws-deployment

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (7)
.github/actions/prepare/action.yml (4)

21-23: Harden apt install (non-interactive, minimal, clean apt lists).

Prevents hangs and reduces runner image bloat.

-    - name: Install libsodium
-      run: sudo apt-get update && sudo apt-get install -y libsodium-dev
+    - name: Install libsodium
+      run: |
+        sudo apt-get update
+        sudo apt-get install -y --no-install-recommends libsodium-dev
+        sudo rm -rf /var/lib/apt/lists/*
       shell: bash

29-33: PNPM setup looks fine. Consider enabling Corepack instead of pinning PNPM via action.

If you prefer Corepack, enable it and pin via packageManager in package.json for provenance.

-    - uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
-      with:
-        run_install: false
-        version: 10
+    - name: Enable Corepack
+      run: corepack enable
+      shell: bash

39-45: Make directory hopping robust; fail fast.

Use pushd/popd and strict bash flags to avoid remaining in the wrong directory on failure.

-    - name: Build launchtube plugin example
-      run: |
-        cd examples/launchtube-plugin-example/launchtube
-        pnpm install
-        pnpm run build
-        cd ../..
+    - name: Build launchtube plugin example
+      run: |
+        set -euo pipefail
+        pushd examples/launchtube-plugin-example/launchtube
+        pnpm install
+        pnpm run build
+        popd
       shell: bash

2-3: Action naming vs scope.

The action is named “Cache” but performs broader preparation (Rust toolchain, libs, Node/PNPM, and a build). Consider renaming to better reflect purpose, aiding discoverability.

.github/workflows/ecs-deployment.yml (3)

15-20: Tighten job permissions (least privilege).

packages: write, attestations: write, and security-events: write don’t appear to be used in build-and-push.

     permissions:
       id-token: write
       contents: read
-      packages: write
-      attestations: write
-      security-events: write

118-122: Consider blocking unexpected egress in deploy.

If feasible, switch harden-runner to egress-policy: block with explicit allowlist for AWS endpoints to reduce supply-chain risk.
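
A blocked-egress configuration might look roughly like this (the endpoint allowlist is purely illustrative; the actual hosts depend on the region and the AWS APIs the job calls):

```yaml
- name: Harden Runner
  uses: step-security/harden-runner@v2
  with:
    egress-policy: block
    allowed-endpoints: >
      sts.us-east-1.amazonaws.com:443
      api.ecr.us-east-1.amazonaws.com:443
      ecs.us-east-1.amazonaws.com:443
```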


79-81: Pin Buildx action version comment (minor).

You’ve pinned to a commit—good. Optionally add a version comment for clarity, consistent with other steps.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 118d386 and b219cfa.

📒 Files selected for processing (3)
  • .github/actions/oidc/action.yaml (1 hunks)
  • .github/actions/prepare/action.yml (1 hunks)
  • .github/workflows/ecs-deployment.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/ecs-deployment.yml

45-45: input "token" is not defined in action "Cache" defined at "./.github/actions/prepare". available inputs are "components"

(action)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
  • GitHub Check: Redirect rules - openzeppelin-relayer
  • GitHub Check: Header rules - openzeppelin-relayer
  • GitHub Check: Pages changed - openzeppelin-relayer
  • GitHub Check: Analyze (rust)
  • GitHub Check: semgrep/ci
🔇 Additional comments (2)
.github/actions/prepare/action.yml (1)

34-37: Pin rust-toolchain action via commit and toolchain input

The tag 1.86.0 doesn’t exist in dtolnay/rust-toolchain, so you must pin to a commit (or @stable) and pass toolchain: 1.86.0. Update in .github/actions/prepare/action.yml accordingly:

-    - name: setup rust tool chain
-      uses: dtolnay/rust-toolchain@1.86.0  # v1.86.0
+    - name: setup rust toolchain
+      uses: dtolnay/rust-toolchain@<commit-sha-or-stable>
       with:
+        toolchain: 1.86.0
         components: ${{ (inputs.components != '') && format('{0}, rustfmt, clippy', inputs.components) || 'rustfmt, clippy' }}

Likely an incorrect or invalid review comment.

.github/workflows/ecs-deployment.yml (1)

109-114: Cross-account flow: verify ECS task can pull from the ECR account.

Ensure appropriate ECR repository permissions and ECS task execution role permissions for cross-account image pulls.
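
For cross-account pulls, the ECR repository in the image account typically needs a repository policy granting the workload account the read actions; a hedged sketch, where `<workload-account-id>` is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<workload-account-id>:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

The ECS task execution role in the workload account also needs ecr:GetAuthorizationToken plus the same read actions on the repository ARN.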

Comment on lines +30 to +33
aws-region: us-east-1
role-to-assume: ${{ inputs.role-for-oidc }}
role-session-name: ${{ inputs.role-session-name }}
role-duration-seconds: 900

⚠️ Potential issue

Don’t hardcode region; use the input across both steps.

Using us-east-1 here while allowing a region input elsewhere causes inconsistent AWS_REGION in jobs.

-        aws-region: us-east-1
+        aws-region: ${{ inputs.aws-region }}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
aws-region: us-east-1
role-to-assume: ${{ inputs.role-for-oidc }}
role-session-name: ${{ inputs.role-session-name }}
role-duration-seconds: 900
aws-region: ${{ inputs.aws-region }}
role-to-assume: ${{ inputs.role-for-oidc }}
role-session-name: ${{ inputs.role-session-name }}
role-duration-seconds: 900
🤖 Prompt for AI Agents
.github/actions/oidc/action.yaml around lines 30 to 33: the action hardcodes
aws-region as "us-east-1" causing inconsistent AWS_REGION; change the aws-region
field to use the action input (e.g. aws-region: ${{ inputs.aws-region }}) so the
provided region input is used here as well, and verify the corresponding input
is defined in the action metadata and consumed consistently in any other steps
that expect the same region.

Comment on lines +35 to +45
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
id: assume-target-role
with:
aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
aws-region: ${{ inputs.aws-region }}
role-chaining: true
role-to-assume: ${{ inputs.role-to-assume }}
role-session-name: ${{ inputs.role-session-name }}
role-duration-seconds: ${{ inputs.role-duration-seconds }}

🛠️ Refactor suggestion

Mask account IDs and align duration defaults.

Add masking to reduce info disclosure in logs; optionally make the first-step duration configurable for consistency.

     - name: assume oidc role
       uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722  # v4.1.0
       with:
-        aws-region: us-east-1
+        aws-region: ${{ inputs.aws-region }}
         role-to-assume: ${{ inputs.role-for-oidc }}
         role-session-name: ${{ inputs.role-session-name }}
-        role-duration-seconds: 900
+        role-duration-seconds: 900
+        mask-aws-account-id: true
     - name: assume target role
       uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722  # v4.1.0
       id: assume-target-role
       with:
         aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
         aws-session-token: ${{ env.AWS_SESSION_TOKEN }}
         aws-region: ${{ inputs.aws-region }}
         role-chaining: true
         role-to-assume: ${{ inputs.role-to-assume }}
         role-session-name: ${{ inputs.role-session-name }}
         role-duration-seconds: ${{ inputs.role-duration-seconds }}
+        mask-aws-account-id: true

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
.github/actions/oidc/action.yaml around lines 35 to 45: add masking of sensitive
account/role identifiers prior to calling aws-actions/configure-aws-credentials
by invoking the GitHub Actions mask command (echo ::add-mask::${{
inputs.ACCOUNT_ID }} and any other account/role inputs you expose) so those
values never appear in logs, and expose a configurable input for the first-step
duration (e.g., first-step-duration-seconds) with a sensible default and wire
that input into the initial role-duration-seconds parameter (keeping
role-chaining behavior intact) so the initial assume duration can be
aligned/configured.

Comment on lines 47 to 73
- name: Create launchtube fund
run: |
cargo run --example create_key -- \
--password "$KEYSTORE_PASSPHRASE_FUND" \
--output-dir config/keys \
--filename launchtube-fund.json
env:
KEYSTORE_PASSPHRASE_FUND: ${{ secrets.KEYSTORE_PASSPHRASE_FUND }}

- name: Create launchtube sequence account 001
run: |
cargo run --example create_key -- \
--password "$KEYSTORE_PASSPHRASE_SEQ_001" \
--output-dir config/keys \
--filename launchtube-seq-001.json
env:
KEYSTORE_PASSPHRASE_SEQ_001: ${{ secrets.KEYSTORE_PASSPHRASE_SEQ_001 }}

- name: Create launchtube sequence account 002
run: |
cargo run --example create_key -- \
--password "$KEYSTORE_PASSPHRASE_SEQ_002" \
--output-dir config/keys \
--filename launchtube-seq-002.json
env:
KEYSTORE_PASSPHRASE_SEQ_002: ${{ secrets.KEYSTORE_PASSPHRASE_SEQ_002 }}


💡 Verification agent

🧩 Analysis chain

Secret handling: ensure generated keys don’t end up in the Docker image or artifacts.

Confirm .dockerignore and Dockerfile exclude config/keys/**. Avoid passing passwords via CLI if the example supports env/stdin.

If needed, add .dockerignore:

+config/keys/

Or adjust Dockerfile to not COPY that path.


🏁 Script executed:

#!/usr/bin/env bash

echo "=== Checking .dockerignore for config/keys/ ==="
if [[ -f .dockerignore ]]; then
  grep -n 'config/keys/' .dockerignore || echo "No 'config/keys/' entry in .dockerignore"
else
  echo ".dockerignore not found"
fi

echo
echo "=== Locating Dockerfile(s) ==="
mapfile -t dockerfiles < <(find . -type f -name Dockerfile)
if [[ ${#dockerfiles[@]} -eq 0 ]]; then
  echo "No Dockerfile found"
  exit 0
fi
printf "%s\n" "${dockerfiles[@]}"

echo
echo "=== Inspecting Dockerfile(s) for config/keys references and context COPY ==="
for df in "${dockerfiles[@]}"; do
  echo "-> $df"
  grep -n 'config/keys/' "$df" || echo "   No direct 'config/keys/' reference"
  grep -n '^COPY \.' "$df" || echo "   No 'COPY .' command (build context may include config/keys/)"
  echo
done

Length of output: 586


🏁 Script executed:

#!/usr/bin/env bash

echo "=== Finding create_key example files ==="
mapfile -t files < <(find . -type f -iname '*create_key*.rs')
if [[ ${#files[@]} -eq 0 ]]; then
  echo "No create_key example files found"
else
  for f in "${files[@]}"; do
    echo "-> $f"
    sed -n '1,100p' "$f"
    echo
  done
fi

echo "=== Searching for STDIN usage in example files ==="
grep -R -n 'std::io::stdin' . || echo "No stdin usage found"

echo "=== Searching for Arg::new + .env usage in example files ==="
grep -R -n "Arg::new" . | grep -i "env" || echo "No Arg::env usage found"

Length of output: 772


Add missing .dockerignore entry and exclude secrets from Docker builds

  • .dockerignore: add config/keys/ (currently absent)
  • Confirm all Dockerfile(s) or build contexts do not copy config/keys/** into images
  • Where possible, load passwords from environment variables or stdin rather than passing them as CLI arguments
🤖 Prompt for AI Agents
In .github/workflows/ecs-deployment.yml around lines 47–73, the workflow creates
key files under config/keys and the repo lacks a .dockerignore entry to exclude
that directory; add "config/keys/" to .dockerignore, audit all Dockerfiles/build
contexts to remove any COPY or ADD of config/keys/** (or change paths so build
context excludes config/keys), and stop exposing secrets on CLI by switching the
cargo invocation to read passwords from environment variables or STDIN (e.g.,
use a password-file or pipe the secret into the command) or use docker/buildkit
secrets when building images so the passphrases are never written into images or
checked into source.
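
To illustrate the stdin approach suggested above (hypothetical: it assumes the create_key example were extended to read the passphrase from stdin, e.g. via a --password-stdin flag, which the current CLI may not support):

```shell
#!/bin/sh
# Hypothetical stand-in for a key tool that reads its passphrase from stdin,
# so the secret never appears in argv (process lists are world-readable).
read_passphrase() {
  IFS= read -r pw
  echo "received ${#pw} characters"
}

PASSPHRASE="example-passphrase"   # in CI this would come from a secret
printf '%s\n' "$PASSPHRASE" | read_passphrase
# prints "received 18 characters"
```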

Comment on lines 126 to 131
- name: Set up AWS credentials via OIDC and role chaining
uses: ./.github/actions/oidc
with:
role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
role-to-assume: ${{ env.ROLE_TO_ASSUME }}


🛠️ Refactor suggestion

Also pass region in deploy job to avoid AWS_REGION being overridden.

The composite sets AWS_REGION; be explicit to keep it aligned with job env.

       - name: Set up AWS credentials via OIDC and role chaining
         uses: ./.github/actions/oidc
         with:
           role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
           role-to-assume: ${{ env.ROLE_TO_ASSUME }}
+          aws-region: ${{ env.AWS_REGION }}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Set up AWS credentials via OIDC and role chaining
uses: ./.github/actions/oidc
with:
role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
role-to-assume: ${{ env.ROLE_TO_ASSUME }}
- name: Set up AWS credentials via OIDC and role chaining
uses: ./.github/actions/oidc
with:
role-for-oidc: ${{ env.ROLE_FOR_OIDC }}
role-to-assume: ${{ env.ROLE_TO_ASSUME }}
aws-region: ${{ env.AWS_REGION }}
🤖 Prompt for AI Agents
.github/workflows/ecs-deployment.yml around lines 126 to 131: the composite OIDC
action sets AWS_REGION which can override the job-level AWS_REGION; update the
step inputs to explicitly pass the region (e.g. add an input like aws-region or
region set to ${{ env.AWS_REGION }} depending on the composite’s input name) so
the deploy job keeps the job-level region value instead of being overridden by
the composite.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
Dockerfile.production (3)

7-10: Remove unnecessary apk update.

It creates a bigger layer and is redundant with --no-cache.

-RUN apk update && apk --no-cache add \
+RUN apk add --no-cache \
     openssl-dev \
     perl \
     libsodium-dev

59-64: Leverage pnpm cache layers for faster builds.

Copy lockfile/manifests first, prefetch, then copy sources and install offline to maximize cache hits.

-# Copy plugins and install
-COPY --chown=nonroot:nonroot ./plugins /app/plugins
-USER nonroot
-WORKDIR /app/plugins
-RUN pnpm install --frozen-lockfile
+# Prime store from lockfile (adjust paths if workspace root holds pnpm-lock.yaml)
+COPY --chown=nonroot:nonroot ./plugins/pnpm-lock.yaml /app/plugins/pnpm-lock.yaml
+COPY --chown=nonroot:nonroot ./plugins/package.json /app/plugins/package.json
+USER nonroot
+WORKDIR /app/plugins
+RUN pnpm fetch --frozen-lockfile
+# Now add sources and install offline
+COPY --chown=nonroot:nonroot ./plugins /app/plugins
+RUN pnpm install --frozen-lockfile --offline

42-42: Pin final base image by digest for supply-chain integrity.

The builder is pinned; do the same for wolfi-base to avoid drifting runtimes.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b219cfa and 6138f4f.

📒 Files selected for processing (1)
  • Dockerfile.production (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
  • GitHub Check: Redirect rules - openzeppelin-relayer
  • GitHub Check: Header rules - openzeppelin-relayer
  • GitHub Check: Pages changed - openzeppelin-relayer
  • GitHub Check: Analyze (actions)
  • GitHub Check: Analyze (rust)
  • GitHub Check: semgrep/ci
🔇 Additional comments (1)
Dockerfile.production (1)

71-76: Non-root runtime and explicit ENTRYPOINT look good.

Good hardening; keep it.

Comment on lines 49 to 50
COPY --from=builder /out/libssl.so.3 /usr/lib/libssl.so.3
COPY --from=builder /out/libcrypto.so.3 /usr/lib/libcrypto.so.3

🛠️ Refactor suggestion

Don’t copy shared libs from builder; install runtime packages in final image.

Copying .so files is brittle and can miss deps. Install the runtime packages instead and remove the COPYs.

-# If present, include these shared libs (harmless if not needed)
-COPY --from=builder /out/libssl.so.3     /usr/lib/libssl.so.3
-COPY --from=builder /out/libcrypto.so.3  /usr/lib/libcrypto.so.3
+# Runtime libs for TLS/crypto (Wolfi names)
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle

And keep the Node install:

-RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
+RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
     npm install -g pnpm ts-node typescript

Also applies to: 56-57

🤖 Prompt for AI Agents
In Dockerfile.production around lines 49-50 (and similarly lines 56-57), do not
COPY libssl.so.3 and libcrypto.so.3 from the builder image; instead remove those
COPY lines and install the appropriate runtime packages (e.g., the
distribution's openssl or libssl runtime packages) into the final image via the
package manager so all dependencies are resolved correctly; ensure you retain
the Node installation steps already present in the Dockerfile and replace the
manual .so copies with the package install commands in the stage that builds the
final runtime image.

Comment on lines 56 to 57
RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
npm install -g pnpm ts-node typescript

💡 Verification agent

🧩 Analysis chain

Verify Node version constraint is resolvable in Wolfi.

apk add "nodejs~${NODE_VERSION}" may not match Wolfi’s versioning. Confirm availability or relax the constraint (e.g., nodejs-20).


🌐 Web query:

What nodejs package names/versions are currently available in Chainguard/Wolfi, and does it support installing Node 20.19 via 'apk add nodejs~20.19'?

💡 Result:

Short answer: Wolfi provides per-major Node packages (nodejs-16, nodejs-18, nodejs-19, nodejs-20, etc.) and there are Wolfi APK builds for many specific 20.x releases — but I couldn't find evidence that a Wolfi package for Node 20.19 exists in the public indexes. So running apk add nodejs~20.19 will fail unless you have a repository that contains a nodejs 20.19 APK.

Evidence / pointers:

  • Chainguard/Wolfi docs explain using apk search inside the wolfi-base image to see available packages. Use that to confirm in your environment. (edu.chainguard.dev)
  • Public vulnerability/package indexes show Wolfi APKs for nodejs versions such as 16.20.x, 18.19.x, 19.9.x and various 20.x builds (examples: 20.3.1, 20.11.0, 20.12.2, 20.13/20.15 appear in different entries) but do not show a nodejs 20.19 Wolfi package in the indexes I checked. (examples: Vulners/OSV listings). (vulners.com, osv.dev)

How to verify locally (one command):

  • Run an interactive wolfi-base container, update the repo and search:
    • docker run -it --rm cgr.dev/chainguard/wolfi-base sh -c "apk update && apk search nodejs"
      This will show the exact package names/versions available to your environment. (docs reference). (edu.chainguard.dev)

If you need Node 20.19 and Wolfi doesn't publish it:

  • Build a Wolfi APK with melange or pin a nearby Wolfi nodejs-20 package and rebuild with your upstream Node 20.19 sources (Chainguard docs cover building Wolfi packages). (edu.chainguard.dev)

If you want, I can run a targeted search of the Wolfi package index for "nodejs 20.19" and return exact APK names/paths.



Use per-major Wolfi Node.js package or supply a 20.19 APK
apk add "nodejs~${NODE_VERSION}" won’t match Wolfi’s naming (it publishes nodejs-16, nodejs-18, nodejs-19, nodejs-20, etc.). Replace with:

RUN apk add --no-cache nodejs-20 npm && \
    npm install -g pnpm ts-node typescript

Or set ${NODE_VERSION} to 20. To list exactly what's available, run:

docker run -it --rm cgr.dev/chainguard/wolfi-base \
  sh -c "apk update && apk search nodejs"

If you must pin to 20.19, vendor or build a Wolfi APK for that release via melange.

🤖 Prompt for AI Agents
In Dockerfile.production around lines 56-57, the apk add line uses
"nodejs~${NODE_VERSION}" which doesn't match Wolfi/Wolfi-based package names
(they use nodejs-<major>), so replace that tokenized package with a per-major
package name (for example use nodejs-20 and keep npm) or set NODE_VERSION to
"20"; if you need exact 20.19 you must vendor/build a Wolfi APK for that release
via melange and install it instead; keep the subsequent global npm installs
unchanged.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (4)
Dockerfile.production (4)

6-6: Good fix: build under TARGETPLATFORM.

Resolves cross-arch OpenSSL/libsodium link issues noted earlier.


19-28: Good fix: correct BuildKit cache mount placement.

RUN flags are now on the instruction; cache works as intended.


40-41: Don’t COPY shared libs; install runtime packages in final image.

Copying .so files is brittle and can miss deps. Install runtimes instead.

-COPY --from=builder /out/libssl.so.3     /usr/lib/libssl.so.3
-COPY --from=builder /out/libcrypto.so.3  /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle

45-47: Wolfi apk selector will likely fail for “nodejs~20.19”. Use per‑major package.

Wolfi publishes per-major nodejs-&lt;major&gt; packages (e.g., nodejs-20). Pinning “~20.19” usually won’t resolve.

-ARG NODE_VERSION=20.19
-RUN apk add --no-cache "nodejs~${NODE_VERSION}" npm && \
+ARG NODE_MAJOR=20
+RUN apk add --no-cache nodejs-${NODE_MAJOR} npm && \
     npm install -g pnpm ts-node typescript

If you must pin 20.19 exactly, vendor/provide the APK in a custom repo.

🧹 Nitpick comments (5)
Dockerfile.production (5)

49-54: Move plugin install to a separate “plugins-builder” stage; copy built artifacts.

Reduces final image size/attack surface and avoids shipping dev tooling.

Example pattern:

FROM cgr.dev/chainguard/wolfi-base AS plugins-builder
WORKDIR /plugins
RUN apk add --no-cache nodejs-20 npm && npm i -g pnpm
COPY ./plugins .
RUN pnpm i --frozen-lockfile && pnpm -r build
# later in final stage:
# COPY --from=plugins-builder /plugins/dist /app/plugins/dist

53-53: Cache pnpm store for faster rebuilds.

-RUN pnpm install --frozen-lockfile
+RUN --mount=type=cache,target=/home/nonroot/.pnpm-store \
+    pnpm config set store-dir /home/nonroot/.pnpm-store && \
+    pnpm install --frozen-lockfile

9-12: Nit: avoid separate apk update; use --no-cache add only.

Smaller layers and deterministic installs.

-RUN apk update && apk --no-cache add \
+RUN apk add --no-cache \
     openssl-dev \
     perl \
     libsodium-dev

61-63: Nit: chmod likely unnecessary; COPY preserves exec bit from builder.

Drop to reduce layer churn.

-USER root
-RUN chmod +x /app/openzeppelin-relayer
-USER nonroot
+# binary should already be executable

33-33: Pin final base image by digest for reproducibility.

-FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base@sha256:<digest>
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6138f4f and 509b498.

📒 Files selected for processing (1)
  • Dockerfile.production (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
  • GitHub Check: Redirect rules - openzeppelin-relayer
  • GitHub Check: Header rules - openzeppelin-relayer
  • GitHub Check: Pages changed - openzeppelin-relayer
  • GitHub Check: Analyze (rust)
  • GitHub Check: semgrep/ci


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Dockerfile.production (1)

2-16: Build for TARGETPLATFORM and produce a release binary; fix cache mount path.

  • Using BUILDPLATFORM for the Rust builder will break when building multi-arch (OpenSSL/libsodium mismatch). Build on TARGETPLATFORM.
  • cargo install with --debug ships an unoptimized binary; use a release build.
  • Cache mount points should match the actual workspace path (/usr/app/target).

Apply:

-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
@@
-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/app/target \
+    cargo build --release --locked
+RUN install -Dm755 target/release/openzeppelin-relayer /usr/app/bin/openzeppelin-relayer
♻️ Duplicate comments (1)
Dockerfile.production (1)

23-24: Don’t copy shared libs from the builder; install runtime packages in final image.

Copying .so files is brittle and may miss transitive deps. Install the runtime packages instead.

-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle
🧹 Nitpick comments (4)
Dockerfile.production (4)

21-21: Remove stale commented COPY or document the runtime volume approach.

Leaving this commented line is confusing given the bind/copy block below. Clean it up or add a brief comment explaining that config is supplied via runtime volumes.
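If the runtime-volume route is the intended one, a brief documented example could look like the following sketch (the image tag and host path are assumptions for illustration, not taken from this repository):

```shell
# Hypothetical invocation: mount config read-only at runtime
# instead of baking it into the image at build time.
docker run --rm \
  -v "$(pwd)/config:/app/config:ro" \
  openzeppelin-relayer:production
```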


49-50: Unused build arg.

TARGETARCH is declared but never used in this stage. Remove it or wire it into logic if intended.


62-68: Permission/ownership check for plugin install.

You COPY --chown=nonroot:nonroot and then run pnpm install as nonroot (good). Consider --frozen-lockfile --prefer-offline and adding a cache mount to speed builds.

-RUN pnpm install --frozen-lockfile
+RUN --mount=type=cache,target=/home/nonroot/.pnpm-store \
+    pnpm install --frozen-lockfile --prefer-offline

75-79: Non-root entrypoint confirmed; consider explicit USER.

The image remains as nonroot. For clarity and defense-in-depth, set USER nonroot near the end (before ENTRYPOINT) even if already set earlier.

 WORKDIR /app
+USER nonroot
 ENTRYPOINT ["/app/openzeppelin-relayer"]
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5d27dce and 1c9d27a.

📒 Files selected for processing (1)
  • Dockerfile.production (1 hunks)
🧰 Additional context used
🪛 Hadolint (2.12.0)
Dockerfile.production

[error] 30-30: unexpected "required=fal"
expecting "from=", "readonly", "readonly=", "readwrite", "readwrite=", "ro", "ro=", "rw", "rw=", source=, or target=

(DL1000)

🔇 Additional comments (1)
Dockerfile.production (1)

50-57: Ignore the suggested nodejs-22 package rename—Alpine (and Wolfi) ship a single nodejs package whose versions you select via fuzzy matching.
To use Node 22, bump your ARG to 22 and install with:

ARG NODE_VERSION=22  
RUN apk add --no-cache nodejs=~${NODE_VERSION} npm

This aligns with CI’s 22.x runtime without inventing a non-existent nodejs-22 package.

Likely an incorrect or invalid review comment.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
Dockerfile.production (1)

21-21: Remove dead commented COPY or replace with a proper, documented config-ingestion mechanism.

The commented-out COPY is noise. Either delete it or implement the intended behavior (e.g., use a BuildKit bind mount in a RUN step to copy configs into /app/config during build).

Example (if you choose to implement):

-# COPY --chown=nonroot:nonroot ./config /app/config
+# See BAKE_CONFIGS block below for config ingestion via RUN.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c9d27a and 0fe824a.

📒 Files selected for processing (1)
  • Dockerfile.production (2 hunks)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
Dockerfile.production (3)

2-2: Use TARGETPLATFORM for both stages; BUILDPLATFORM breaks multi-arch builds.
Builder and final stages pinned to BUILDPLATFORM will produce wrong-arch artifacts when using buildx. Build each stage for the target architecture.

-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
...
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base

Also applies to: 18-18


24-25: Don’t copy OpenSSL shared libs from the builder; install runtime packages in final image.
Copying .so’s is brittle and can miss deps (e.g., libsodium). Install runtime packages instead.

-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle

32-34: Wolfi Node package pin likely invalid; use per-major nodejs-20 (or verify availability).
nodejs=~20.19 rarely resolves in Wolfi; prefer nodejs-20. Keep npm if separate.

-ARG NODE_VERSION=20.19
-RUN apk add --no-cache nodejs=~${NODE_VERSION} npm && \
+ARG NODE_VERSION=20
+RUN apk add --no-cache nodejs-${NODE_VERSION} npm && \
     npm install -g pnpm ts-node typescript

If you must pin 20.19 exactly, provide a Wolfi APK in your repos and adjust apk add accordingly.

🧹 Nitpick comments (3)
Dockerfile.production (3)

27-29: Drop unused /tmp/src copy; BAKE uses a bind mount.
You copy /usr/app to /tmp/src but bake reads from /mnt/src via mount; /tmp/src is never used except for cleanup.

-# Bring source tree from base so we can optionally bake files
-COPY --from=base /usr/app /tmp/src
...
-# Clean up build-only sources
-USER root
-RUN rm -rf /tmp/src && apk del apk-tools wolfi-base || true
-USER nonroot
+# (No /tmp/src copy needed; retain USER switch only if you still need to run root steps later)

Also applies to: 64-67


39-55: Minor: wrap the bake script with sh -euxc for stricter error handling.
Your set -eux; \ is fine; sh -euxc is slightly cleaner and avoids line-continuation pitfalls.

-RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
-    set -eux; \
+RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
+    sh -euxc '\
     if [ "${BAKE_CONFIGS}" = "true" ]; then \
       if [ -d /mnt/src/examples/launchtube-plugin-example/config ]; then \
         mkdir -p /app/config; \
         cp -a /mnt/src/examples/launchtube-plugin-example/config/. /app/config/; \
       fi; \
       if [ -d /mnt/src/config/networks ]; then \
         mkdir -p /app/config/networks; \
         cp -a /mnt/src/config/networks/. /app/config/networks/; \
       fi; \
       if [ -d /mnt/src/examples/launchtube-plugin-example/launchtube ]; then \
         mkdir -p /app/plugins/launchtube; \
         cp -a /mnt/src/examples/launchtube-plugin-example/launchtube/. /app/plugins/launchtube/; \
       fi; \
       chown -R nonroot:nonroot /app/config /app/plugins || true; \
-    fi
+    fi \
+    '

66-66: Avoid removing meta ‘wolfi-base’; just drop apk-tools.
apk del wolfi-base is odd (meta/uninstall may be a no-op). Keep the surface minimal by removing apk-tools only.

-RUN rm -rf /tmp/src && apk del apk-tools wolfi-base || true
+RUN apk del apk-tools || true
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fbc6dae and 24f81ef.

📒 Files selected for processing (1)
  • Dockerfile.production (1 hunks)
🔇 Additional comments (1)
Dockerfile.production (1)

57-62: Verify overlay semantics: COPY ./plugins after baking may overwrite baked launchtube.
Ensure ./plugins doesn’t clobber /app/plugins/launchtube from the bake step or duplicate dependencies.

If needed, reorder to COPY before bake (and only bake missing dirs) or COPY with include/exclude patterns.
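One possible ordering, sketched under the assumption that the bake step keeps the bind mount shown earlier in this review (paths and the BAKE_CONFIGS guard are taken from the Dockerfile under review; the "bake only when absent" check is an illustrative addition):

```dockerfile
# Sketch: copy repo plugins first, then bake launchtube only when absent,
# so the bake step cannot clobber anything supplied by ./plugins.
COPY --chown=nonroot:nonroot ./plugins /app/plugins
RUN --mount=type=bind,from=base,src=/usr/app,target=/mnt/src,ro \
    if [ "${BAKE_CONFIGS}" = "true" ] && [ ! -d /app/plugins/launchtube ]; then \
      mkdir -p /app/plugins/launchtube && \
      cp -a /mnt/src/examples/launchtube-plugin-example/launchtube/. /app/plugins/launchtube/; \
    fi
```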

Comment on lines 12 to 14
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo install --root /usr/app --path . --debug --locked

🛠️ Refactor suggestion

Don’t ship a debug build; produce an optimized release binary and align cache paths.
Release should be default for production; also cache the cargo git DB and keep target under WORKDIR for better cache hits.

-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/local/cargo/git \
+    --mount=type=cache,target=/usr/app/target \
+    cargo install --root /usr/app --path . --locked
🤖 Prompt for AI Agents
In Dockerfile.production around lines 12-14, change the build to produce an
optimized release binary and align cache mounts: remove the --debug flag and use
--release for cargo install, add a cache mount for the cargo git DB (target
/usr/local/cargo/git), and ensure the target cache path is located under the
image WORKDIR (e.g., /app/target) so cache hits are effective.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (5)
Dockerfile.production (5)

2-2: Build on TARGETPLATFORM to avoid cross-arch breakage (Rust + native Node deps).

Using BUILDPLATFORM will yield wrong-arch binaries/modules on multi-arch builds. Build each stage for TARGETPLATFORM.

Apply:

-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/rust:latest-dev@sha256:faf49718aaa95c798ed1dfdf3e4edee2cdbc3790c8994705ca6ef35972128459 AS base
-FROM --platform=${BUILDPLATFORM} cgr.dev/chainguard/wolfi-base
+FROM --platform=${TARGETPLATFORM} cgr.dev/chainguard/wolfi-base

Also applies to: 18-18


12-16: Ship release binary and align Cargo caches.

Remove debug build and add git cache + correct target path for better cache hit rate.

Apply:

-RUN --mount=type=cache,target=/usr/local/cargo/registry \
-    --mount=type=cache,target=/app/target \
-    cargo install --root /usr/app --path . --debug --locked
+RUN --mount=type=cache,target=/usr/local/cargo/registry \
+    --mount=type=cache,target=/usr/local/cargo/git \
+    --mount=type=cache,target=/usr/app/target \
+    sh -euxc 'CARGO_TARGET_DIR=/usr/app/target cargo install --root /usr/app --path . --locked'

39-47: Wolfi Node pin is likely invalid; use per-major package.

Replace apk constraint with Wolfi’s nodejs- packages; verify availability.

Apply:

-ARG NODE_VERSION=20.19
+ARG NODE_VERSION=20
@@
-RUN apk add --no-cache nodejs=~${NODE_VERSION} npm
+RUN apk add --no-cache nodejs-${NODE_VERSION} npm

Verify what’s available:

#!/usr/bin/env bash
docker run --rm cgr.dev/chainguard/wolfi-base sh -lc 'apk update && apk search -x "nodejs*"'

25-26: Don’t copy .so’s from builder; install runtime libs in final image.

Copying shared libs is brittle; use packages so transitive deps and security updates apply.

Apply:

-COPY --from=base /usr/lib/libssl.so.3 /usr/lib/libssl.so.3
-COPY --from=base /usr/lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
+RUN apk add --no-cache openssl libsodium ca-certificates-bundle

28-36: Invalid RUN bind mount option ‘optional’; wrap in strict shell.

BuildKit doesn’t support ‘optional’ for type=bind; this will fail parse/solve. Make the mount required (ensuring source exists) and harden the script.

Apply:

-ARG INCLUDE_LAUNCHTUBE=false
+ARG INCLUDE_LAUNCHTUBE=false
@@
-RUN --mount=type=bind,source=./examples/launchtube-plugin-example/launchtube,target=/tmp/launchtube,optional \
-    if [ "${INCLUDE_LAUNCHTUBE}" = "true" ] && [ -d /tmp/launchtube ]; then \
-      mkdir -p /app/plugins/launchtube && \
-      cp -a /tmp/launchtube/. /app/plugins/launchtube/; \
-    else \
-      echo "Skipping launchtube copy"; \
-    fi
+RUN --mount=type=bind,source=examples/launchtube-plugin-example/launchtube,target=/tmp/launchtube,ro \
+    sh -euxc '\
+      if [ "${INCLUDE_LAUNCHTUBE}" = "true" ] && [ -d /tmp/launchtube ]; then \
+        mkdir -p /app/plugins/launchtube; \
+        cp -a /tmp/launchtube/. /app/plugins/launchtube/; \
+      else \
+        echo "Skipping launchtube copy"; \
+      fi \
+    '

If you truly need optional sources, switch to a named build context and pass it only when present (cannot be optional at parse time).
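The guard logic itself can be exercised outside Docker before wiring it into the RUN step; a minimal sketch using a throwaway directory (all paths are stand-ins for the bind-mount targets):

```shell
# Simulate the INCLUDE_LAUNCHTUBE guard with stand-in paths.
workdir=$(mktemp -d)
mkdir -p "$workdir/launchtube"
echo "plugin" > "$workdir/launchtube/index.ts"

INCLUDE_LAUNCHTUBE=true
if [ "$INCLUDE_LAUNCHTUBE" = "true" ] && [ -d "$workdir/launchtube" ]; then
  mkdir -p "$workdir/app/plugins/launchtube"
  cp -a "$workdir/launchtube/." "$workdir/app/plugins/launchtube/"
else
  echo "Skipping launchtube copy"
fi
```

The same shape drops into the Dockerfile once the mount is made required as suggested above.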

🧹 Nitpick comments (2)
Dockerfile.production (2)

21-23: Redundant COPY of networks directory.

Line 23 is subsumed by line 21; drop the duplicate.

Apply:

 COPY --chown=nonroot:nonroot ./config /app/config
-# COPY --chown=nonroot:nonroot ./examples/launchtube-plugin-example/launchtube /app/plugins/launchtube
-COPY --chown=nonroot:nonroot ./config/networks /app/config/networks
+# COPY --chown=nonroot:nonroot ./examples/launchtube-plugin-example/launchtube /app/plugins/launchtube

45-45: Unnecessary PATH override.

Node/npm binaries are already on PATH in Wolfi; drop to reduce noise.

Apply:

-ENV PATH="/usr/local/bin:$PATH"
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 478ad7f and 7ff9d0f.

📒 Files selected for processing (1)
  • Dockerfile.production (2 hunks)
🔇 Additional comments (1)
Dockerfile.production (1)

53-58: Potential native module arch mismatch during pnpm install.

If BUILDPLATFORM ≠ TARGETPLATFORM, native Node addons compiled here will be wrong-arch. This is resolved by switching this stage to TARGETPLATFORM (see earlier comment).

After changing the stage platform, confirm no prebuilt binaries targeting the build host remain in node_modules.
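One way to spot-check, assuming the `file` utility is available in the image (the working directory is illustrative):

```shell
# List any prebuilt native addons and report their architecture;
# a mismatch with the target platform shows up here.
find node_modules -name '*.node' -exec file {} \;
```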

@collins-w collins-w closed this Sep 8, 2025
@collins-w collins-w deleted the aws-deployment branch September 8, 2025 09:09
@github-actions github-actions bot locked and limited conversation to collaborators Sep 8, 2025
