Stellar scaffold #762
base: master
Conversation
Co-authored-by: Eric Lau <[email protected]>
Co-authored-by: collins-w <[email protected]>
Co-authored-by: https://x.com/rafaelKO76 <[email protected]>
Co-authored-by: Ruslan <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ernesto García <[email protected]>
…o stellar-scaffold
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Walkthrough: This pull request introduces a new Rust-based Stellar API backend with Docker containerization, comprehensive CI/CD workflows for staging and production deployments, reorganizes AI assistant module paths, and adds extensive tests and configuration files to support local development via Docker Compose.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@ Coverage Diff @@
## master #762 +/- ##
=========================================
Coverage ? 89.36%
=========================================
Files ? 12
Lines ? 423
Branches ? 0
=========================================
Hits ? 378
Misses ? 45
Partials ? 0
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
packages/ui/api/ai/ai-assistant/function-definitions/cairo-shared.ts (1)
4-10: Remove duplicate import of `cairoAccessDescriptions`.

Line 10 redeclares `cairoAccessDescriptions`, which is already imported on line 7. This will cause a parse/compilation error. If the intent was to update the import path, modify the existing import block (lines 4-9) instead of adding a new one.

🐛 Proposed fix
 import {
   cairoCommonDescriptions,
   cairoMacrosDescriptions,
   cairoAccessDescriptions,
   cairoRoyaltyInfoDescriptions,
-} from '../../../../common/src/ai/descriptions/cairo.ts';
-import { cairoAccessDescriptions } from '../../../../../common/src/ai/descriptions/cairo.ts';
+} from '../../../../../common/src/ai/descriptions/cairo.ts';

packages/ui/api/ai/ai-assistant/types/languages.ts (1)
1-22: Update Solidity and Uniswap import paths to use 5-level relative paths.

Solidity (lines 2-3) and Uniswap Hooks (line 5) imports use 4-level paths (`../../../../core/...`), which resolve to `packages/ui/core/`. This directory does not exist and will cause module resolution failures.

All core package imports from this file location must use 5-level paths (`../../../../../core/...`) to correctly resolve to `packages/core/`, matching the Cairo, Cairo Alpha, Stellar, and Stylus imports already in the file.

Fix import paths
 // Solidity
-import type { KindedOptions as SolidityKindedOptions } from '../../../../core/solidity/dist';
-export type { CommonOptions as SolidityCommonOptions } from '../../../../core/solidity/dist/common-options';
+import type { KindedOptions as SolidityKindedOptions } from '../../../../../core/solidity/dist';
+export type { CommonOptions as SolidityCommonOptions } from '../../../../../core/solidity/dist/common-options';
 // Uniswap Hooks
-import type { KindedOptions as UniswapHooksOptions } from '../../../../core/uniswap-hooks/dist';
+import type { KindedOptions as UniswapHooksOptions } from '../../../../../core/uniswap-hooks/dist';

packages/ui/api/ai/paths/ai.ts (1)
1-17: Fix broken import paths on lines 1-7.

Lines 1-7 import from `./ai-assistant/function-definitions/...`, which resolves to `packages/ui/api/ai/paths/ai-assistant/`, a directory that does not exist. The actual files are located at `packages/ui/api/ai/ai-assistant/function-definitions/`. Change lines 1-7 to use `../ai-assistant/function-definitions/...` to match the correct path structure, consistent with line 16:

import * as solidityFunctions from '../ai-assistant/function-definitions/solidity.ts';
import * as polkadotFunctions from '../ai-assistant/function-definitions/polkadot.ts';
import * as cairoFunctions from '../ai-assistant/function-definitions/cairo.ts';
import * as cairoAlphaFunctions from '../ai-assistant/function-definitions/cairo-alpha.ts';
import * as stellarFunctions from '../ai-assistant/function-definitions/stellar.ts';
import * as stylusFunctions from '../ai-assistant/function-definitions/stylus.ts';
import * as uniswapHooksFunctions from '../ai-assistant/function-definitions/uniswap-hooks.ts';
🤖 Fix all issues with AI agents
In @.vscode/example.settings.json:
- Around line 8-15: Update the "rust-analyzer.linkedProjects" entry (the
linkedProjects array) to point to the correct Cargo.toml for the Stellar
project: replace the existing value referencing packages/ui/stellar/Cargo.toml
with packages/ui/api/stellar/Cargo.toml so rust-analyzer can locate and analyze
the project.
In `@packages/ui/api/ai/ai-assistant/function-definitions/cairo.ts`:
- Around line 1-5: The import list includes a duplicate symbol
contractExactRequiredKeys which causes a parse error; remove the duplicate so
the import only declares contractExactRequiredKeys once alongside type
AiFunctionDefinition, i.e., update the import statement that currently imports
contractExactRequiredKeys twice to import it a single time.
In `@packages/ui/api/ai/Dockerfile.dev`:
- Around line 4-5: Update the Dockerfile COPY that references the common package
so it uses the build-context relative path instead of navigating up: replace the
current COPY instruction that uses ../../common with one that copies
packages/common into /app/common (i.e., adjust the COPY source to
"packages/common") so the build context is valid and the Docker build succeeds.
In `@packages/ui/api/stellar/Dockerfile.prod`:
- Around line 34-43: Change the runtime stage base image from "FROM
rust:bookworm AS runtime" to "FROM debian:bookworm-slim AS runtime" to
dramatically reduce image size since only precompiled binaries (stellar-api,
stellar, stellar-scaffold) are run; remove the ineffective "apt-get purge -y
--auto-remove build-essential pkg-config libssl-dev" line because those packages
are not present in the slim base; keep the necessary "apt-get update && apt-get
install -y --no-install-recommends ..." install of runtime libs
(ca-certificates, libssl3, libdbus-1-3, libudev1, libusb-1.0-0) and the cleanup
of /var/lib/apt/lists/*; and delete the unnecessary Rust environment variables
(CARGO_HOME, RUSTUP_HOME, and PATH entries pointing to the toolchain) since the
runtime image does not include the Rust toolchain and only runs prebuilt
binaries.
In `@packages/ui/api/stellar/rustfmt.toml`:
- Around line 1-5: The rustfmt config uses the wrong key: replace the TOML key
name `edition = "2024"` with `style_edition = "2024"` in the rustfmt.toml so
rustfmt applies the Rust 2024 formatting style (leave any `max_width`,
`tab_spaces`, and `hard_tabs` entries unchanged).
In `@packages/ui/api/stellar/src/utils/dir.rs`:
- Around line 9-21: The create_dir_safe function has compilation issues: move
the DirBuilderExt import into the #[cfg(unix)] block and remove the redundant
standalone `use DirBuilderExt;`, then in the unix branch call DirBuilder::new(),
set recursive and mode, and map errors with the local to_io_error function; in
the non-unix branch replace `fs::create_dir_all` with `std::fs::create_dir_all`
and use `to_io_error` (not `utils::to_io_error::to_io_error`) for map_err so
both branches compile and reference the correct symbols (DirBuilderExt,
DirBuilder::new, to_io_error, std::fs::create_dir_all) in create_dir_safe.
- Line 4: The unconditional import of std::os::unix::fs::DirBuilderExt breaks
non-Unix builds; wrap that use in a cfg attribute (e.g. add #[cfg(unix)] use
std::os::unix::fs::DirBuilderExt;) and ensure any code that relies on
DirBuilderExt is also guarded by #[cfg(unix)] so Windows builds don’t reference
the Unix-only trait.
In `@packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs`:
- Around line 1-5: The unconditional import of std::os::unix::fs::PermissionsExt
will fail on non-Unix platforms; guard the import and any use of PermissionsExt
with a cfg(unix) (or move Unix-specific logic into a #[cfg(unix)] module) and
provide a Windows/portable path for the non-Unix case (e.g., skip set_mode or
use platform-neutral Permissions) so that scaffold_upgrade.rs compiles
cross-platform while keeping Unix-specific behavior only inside blocks
referencing PermissionsExt.
- Around line 148-174: The test
ai_run_scaffold_upgrade_command_when_missing_fails modifies the global PATH
without using the shared PATH_LOCK and then conditionally skips the assertion;
acquire PATH_LOCK at the start of the test (same mutex used by
run_scaffold_upgrade_command_success and
run_scaffold_upgrade_command_failure_propagates_error), set PATH while holding
the lock, call run_scaffold_upgrade_command, restore the original PATH and
release the lock in a finally-style scope so restoration always runs, and remove
the skip-on-Ok branch so the test reliably asserts that missing stellar returns
Err.
🟡 Minor comments (18)
.changeset/brown-walls-roll.md-7-7 (1)
7-7: Fix hyphenation in "non-fungible".

The term should be hyphenated as "non-fungible" per standard usage in blockchain/NFT contexts.
📝 Proposed fix
-Add tokenUri setting for stellar non fungible model
+Add tokenUri setting for stellar non-fungible model

packages/ui/api/stellar/.env.example-1-2 (1)
1-2: Add a trailing newline.

The file is missing a trailing newline at the end, which is flagged by the linter and is a common convention for text files.
Proposed fix
 RUST_LOG=debug
 WIZARD_ORIGIN=http://localhost:8080
+

packages/ui/.env.example-1-4 (1)
1-4: Remove quotes around `ENV` value for consistency.

The `ENV="dev"` uses quotes while the other variables don't. Some `.env` file parsers will include the literal quote characters in the value. For consistency and to avoid parsing issues, remove the quotes.

Proposed fix
-ENV="dev" +ENV=dev API_HOST=http://localhost:3000 FARGATE_HOST=http://localhost:8888 +packages/ui/public/stellar.html-91-107 (1)
91-107: Remove duplicate UI Builder promotional block.

Lines 91-107 are an exact duplicate of the UI Builder promotional block already present at lines 73-88. Having two identical CTAs in the navigation row appears unintentional and would confuse users.
🗑️ Suggested fix: Remove the duplicate block
   </div>
-
-  <div class="flex flex-row gap-2">
-    <div class="text-sm text-gray-600 leading-tight text-right">
-      <div>After deploying a contract, use the</div>
-      <div>new UI Builder to spin up a quick UI</div>
-    </div>
-
-    <div class="flex items-center">
-      <img src="/icons/arrow-right.svg" alt="arrow-right" class="arrow-gray">
-    </div>
-
-    <div class="flex items-center">
-      <a href="https://builder.openzeppelin.com?ecosystem=stellar" target="_blank" rel="noopener noreferrer" class="bg-[#3d3b5d] text-white px-4 py-2 rounded-full text-sm font-medium flex items-center gap-2 hover:bg-[#4a4872] transition-colors no-underline whitespace-nowrap">
-        <span>Try the UI Builder</span>
-        <img src="/icons/up-right-from-square.svg" alt="up-right-from-square" class="arrow-white">
-      </a>
-    </div>
-  </div>
 </div>

CONTRIBUTING.md-64-64 (1)
64-64: Missing `yarn` prefix in command.

The command should include the `yarn` prefix for consistency with other commands in this document.

📝 Suggested fix
- - Run `dev:api:build` to rebuild the Docker container
+ - Run `yarn dev:api:build` to rebuild the Docker container

packages/ui/api/stellar/Cargo.toml-20-22 (1)
20-22: Update `serial_test` to latest version.

The current pinned version `0.5` is from late 2020. The latest version is 3.3.1 (2024). Since the codebase only uses the `#[serial]` attribute and does not use the `file_serial` or `file_parallel` variants where breaking changes occurred in v3.0.0, updating is safe and recommended.
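For reference, a minimal sketch of the attribute as it is typically used; the test name and environment variable below are placeholders rather than code from this PR, but the `#[serial]` syntax is the same on 0.5 and 3.x, which is why the bump should be drop-in:

use serial_test::serial;

// Tests that touch shared process state (here, an environment variable)
// opt into serialized execution; all other tests still run in parallel.
#[test]
#[serial]
fn reads_app_port_from_environment() {
    std::env::set_var("APP_PORT", "9999");
    assert_eq!(std::env::var("APP_PORT").as_deref(), Ok("9999"));
    std::env::remove_var("APP_PORT");
}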
packages/ui/src/stellar/App.svelte-115-119 (1)

115-119: Use `console.error` for error logging.

`console.log` is typically reserved for informational output. For errors, `console.error` provides semantic clarity and proper stream routing in some environments.

🐛 Proposed fix
 } catch (error) {
-  console.log(error);
+  console.error('Scaffold download failed:', error);
   downloadScaffoldError = true;
   isDownloadingScaffold = false;
 }

packages/ui/api/stellar/src/environment/zip_folder.rs-69-71 (1)
69-71: Inconsistent error handling.

Line 62 uses `.map_err(to_zip_io_error)?` but line 70 calls `create_dir_safe(parent)?` without the error mapping. Since `create_dir_safe` returns `Result<(), IoError>` and the function returns `Result<(), ZipError>`, this may cause compilation issues or relies on an implicit `From` implementation.

🔧 Suggested fix
 if let Some(parent) = target_full_path.parent() {
-    create_dir_safe(parent)?;
+    create_dir_safe(parent).map_err(to_zip_io_error)?;
 }

packages/ui/api/stellar/tests/routes/upgrade_scaffold.rs-8-8 (1)
8-8: Unix-only import will fail to compile on Windows.

The `std::os::unix::fs::PermissionsExt` import and its usage at line 21 are Unix-specific. Either gate this module with `#[cfg(unix)]` or mark the tests as Unix-only:

+#[cfg(unix)]
 use std::os::unix::fs::PermissionsExt;

Or at the module level:

#![cfg(unix)]

packages/ui/api/stellar/tests/config/server.rs-12-15 (1)
12-15: Environment variable manipulation is not thread-safe in parallel tests.

Using `env::set_var` and `env::remove_var` in tests is problematic because Rust's test runner executes tests in parallel by default. Concurrent tests modifying shared environment variables can cause race conditions and flaky failures, even with save/restore logic.

Consider one of these mitigations:

- Run these tests with `cargo test -- --test-threads=1`
- Use a test serialization approach (e.g., the `serial_test` crate)
- Refactor `ServerConfig::from_environment_variables()` to accept an environment reader trait/closure for testability (see the sketch after this list)
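A rough sketch of the closure-based variant, assuming a simplified `ServerConfig` with only `host` and `port` (the real struct also has rate-limit and origin fields); the names are illustrative, not the existing API:

pub struct ServerConfig {
    pub host: String,
    pub port: u16,
}

impl ServerConfig {
    // Accept any lookup function so tests can inject values without
    // mutating the process environment.
    pub fn from_env_with<F>(get: F) -> Self
    where
        F: Fn(&str) -> Option<String>,
    {
        Self {
            host: get("HOST").unwrap_or_else(|| "0.0.0.0".to_string()),
            port: get("APP_PORT").and_then(|v| v.parse().ok()).unwrap_or(8888),
        }
    }

    // Production entry point keeps the current behavior.
    pub fn from_environment_variables() -> Self {
        Self::from_env_with(|key| std::env::var(key).ok())
    }
}

#[test]
fn reads_port_without_touching_process_env() {
    let cfg = ServerConfig::from_env_with(|key| match key {
        "APP_PORT" => Some("9001".to_string()),
        _ => None,
    });
    assert_eq!(cfg.port, 9001);
    assert_eq!(cfg.host, "0.0.0.0");
}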
packages/ui/api/stellar/src/main.rs-35-42 (1)

35-42: CORS suffix matching may be overly permissive.

The check `origin_str.ends_with("wizard.netlify.app")` would allow origins like `evil-wizard.netlify.app` or `attacker-wizard.netlify.app`. Consider using a more restrictive pattern that requires a specific prefix or subdomain structure.

Suggested fix with stricter pattern

 .allowed_origin_fn(move |origin, _req| {
     if let Ok(origin_str) = origin.to_str() {
-        origin_str.ends_with("wizard.netlify.app")
+        origin_str.ends_with(".wizard.netlify.app")
+            || origin_str == "https://wizard.netlify.app"
             || origin_str == "https://wizard.openzeppelin.com"
             || origin_str == wizard_origin
     } else {
         false
     }
 })

packages/ui/api/stellar/src/config/server.rs-20-21 (1)
20-21: Documentation mismatch: APP_PORT default.

The doc comment states `APP_PORT` defaults to `8080`, but the implementation on line 27 defaults to `"8888"`. Update the documentation to match the actual default.

Fix documentation
 /// # Defaults
 ///
 /// - `HOST` defaults to `"0.0.0.0"`.
-/// - `APP_PORT` defaults to `8080`.
+/// - `APP_PORT` defaults to `8888`.
 /// - `RATE_LIMIT_REQUESTS_PER_SECOND` defaults to `100`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-54-70 (1)
54-70: Misleading test name.

`ai_test_persists_chat_on_successful_stream_completion` actually tests rejection of oversized file entries. Consider renaming to `test_rejects_entry_exceeding_per_file_limit`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-43-52 (1)

43-52: Misleading test name.

`ai_test_builds_system_prompt_with_contract_context` should be renamed to reflect that it tests the `expected_entry_count` calculation, e.g., `test_expected_entry_count_includes_ancestors`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-72-90 (1)

72-90: Misleading test name.

`ai_test_filters_out_messages_at_or_above500_chars` tests suspicious compression ratio detection. Consider `test_rejects_suspicious_compression_ratio`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-108-114 (1)

108-114: Misleading test name.

`ai_test_returns_error_json_on_request_parsing_failure` tests invalid zip data handling. Consider `test_rejects_invalid_zip_data`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-92-106 (1)

92-106: Misleading test name.

`ai_test_uses_empty_tools_for_unsupported_language` tests unexpected zip content detection. Consider `test_rejects_mismatched_expected_files`.

packages/ui/api/stellar/tests/environment/zip_folder.rs-25-41 (1)

25-41: Test name does not match test behavior.

The test name `ai_test_returns_streaming_ndjson_response_with_headers` describes NDJSON streaming, but the test actually validates basic zip/unzip round-trip functionality. Consider renaming to something like `test_zip_unzip_roundtrip_preserves_content`.
🧹 Nitpick comments (42)
.prettierignore (1)
21-22: Verify the intentional exclusion of YAML files from Prettier.Prettier natively supports YAML formatting, so ignoring
*.ymland*.yamlfiles will prevent formatting of GitHub Actions workflows, docker-compose files, and other YAML configurations. This could lead to inconsistent formatting across the codebase unless a different YAML formatter is being used or there's a specific reason to exclude them.Please confirm whether:
- This exclusion is intentional
- An alternative YAML formatter is being used (e.g.,
yamlfmt,prettier-plugin-yamlwith custom config)- The YAML files are auto-generated and shouldn't be formatted
If there's no specific formatter or reason, consider removing these patterns to maintain consistent YAML formatting.
.github/workflows/test.yml (2)
5-5: Typo in branch names: "scafold" should be "scaffold".

Both `stellar-scafold-fargate-backend` and `stellar-scafold` contain a typo. If these are temporary feature branches that will be deleted after merge, this is fine. Otherwise, consider renaming them to `stellar-scaffold-*` for consistency.
84-85: Consider removing debug output step.This step only logs cache-hit status for debugging. It's harmless but adds noise to CI logs. Consider removing it or converting to a conditional debug step.
packages/ui/api/ai/.env.example (1)
1-2: LGTM! Consider adding EOF newline for consistency.The environment variable template is correctly structured with a reasonable default port and an empty placeholder for the API key.
The static analysis tool flagged a missing blank line at the end of the file, which you may want to add for consistency with project conventions.
packages/ui/package.json (1)
8-11: LGTM with minor consideration.

The scripts correctly set up the Docker-based API development workflow. One optional consideration: `docker-compose` (standalone) is deprecated in favor of `docker compose` (Docker CLI plugin). While both work, the plugin syntax is the recommended approach going forward.

♻️ Optional: Use Docker Compose v2 syntax
- "dev:api": "cd api && docker-compose up", - "dev:api:build": "cd api && docker-compose up --build", + "dev:api": "cd api && docker compose up", + "dev:api:build": "cd api && docker compose up --build",packages/ui/api/stellar/src/environment/scaffold_upgrade.rs (1)
7-24: Consider adding a timeout and capturing stdout for diagnostics.

The implementation is correct and handles errors well. Two considerations:

- Timeout: The `stellar scaffold upgrade` command could potentially hang indefinitely. Consider whether a timeout is appropriate for your use case (a rough sketch follows below).
- Diagnostic output: Discarding stdout may lose useful information when debugging failures. Consider capturing it alongside stderr.
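If a timeout is added, one option is the `wait_timeout` crate; the sketch below is illustrative only, the 120-second limit is an arbitrary placeholder, and output capture is dropped for brevity. This is not the handler's current behavior:

use std::io::Error;
use std::path::Path;
use std::process::{Command, Stdio};
use std::time::Duration;
use wait_timeout::ChildExt;

// Spawn the upgrade command and kill it if it runs longer than the limit.
pub fn run_scaffold_upgrade_with_timeout(project_dir: &Path) -> Result<(), Error> {
    let mut child = Command::new("stellar")
        .args(["scaffold", "upgrade", "--skip-prompt"])
        .stdin(Stdio::null())
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .current_dir(project_dir)
        .spawn()?;

    match child.wait_timeout(Duration::from_secs(120))? {
        Some(status) if status.success() => Ok(()),
        Some(status) => Err(Error::other(format!(
            "'stellar scaffold upgrade' failed with code {:?}",
            status.code()
        ))),
        None => {
            // Timed out: kill the child and reap it before reporting.
            child.kill()?;
            child.wait()?;
            Err(Error::other("'stellar scaffold upgrade' timed out"))
        }
    }
}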
♻️ Optional: Capture stdout in error message
 pub fn run_scaffold_upgrade_command(project_dir_path: &Path) -> Result<(), Error> {
     let output = Command::new("stellar")
         .args(["scaffold", "upgrade", "--skip-prompt"])
         .stdin(Stdio::null())
-        .stdout(Stdio::null())
+        .stdout(Stdio::piped())
         .stderr(Stdio::piped())
         .current_dir(project_dir_path)
         .output()?;

     if output.status.success() {
         Ok(())
     } else {
         Err(Error::other(format!(
-            "'stellar scaffold upgrade' failed with code {:?}: {}",
+            "'stellar scaffold upgrade' failed with code {:?}:\nstdout: {}\nstderr: {}",
             output.status.code(),
+            String::from_utf8_lossy(&output.stdout),
             String::from_utf8_lossy(&output.stderr)
         )))
     }
 }

packages/ui/api/stellar/src/utils/glob.rs (1)
3-7: Consider implementing `std::error::Error` for `MatchError`.

If this error type will be propagated or converted in error handling chains, implementing the `Error` trait improves composability with the `?` operator and error conversion utilities.

♻️ Suggested implementation
+use std::fmt;
+
 #[derive(Debug)]
 pub enum MatchError {
     Glob(GlobError),
     NoMatch(String),
 }
+
+impl fmt::Display for MatchError {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        match self {
+            MatchError::Glob(e) => write!(f, "glob error: {e}"),
+            MatchError::NoMatch(s) => write!(f, "no match for: {s}"),
+        }
+    }
+}
+
+impl std::error::Error for MatchError {
+    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
+        match self {
+            MatchError::Glob(e) => Some(e),
+            MatchError::NoMatch(_) => None,
+        }
+    }
+}

packages/ui/src/stellar/App.svelte (1)
260-273: Good UX for communicating build progress and errors.The conditional rendering for loading, error, and default states provides clear feedback to users. Consider whether the error message "check console for details" is actionable for end users who may not know how to access browser developer tools.
packages/ui/api/stellar/tests/utils/glob.rs (1)
3-8: Minor: Inconsistent test naming convention.
ai_test_single_pattern_matchuses anai_prefix while other tests don't. Consider using a consistent naming convention across all tests.Suggested fix
#[test] -fn ai_test_single_pattern_match() { +fn test_single_pattern_match() { let globset = build_globset(vec!["*.rs".to_string()]).unwrap(); let matcher = globset.matches("main.rs"); assert!(!matcher.is_empty()); }packages/ui/api/stellar/tests/utils/path.rs (2)
1-4: Unix-only test dependency.
std::os::unix::fs::symlinkis Unix-specific. Tests that use symlinks (ensure_no_symlinks_detects_parent_and_self_symlink) won't compile on Windows. Consider using#[cfg(unix)]to conditionally compile symlink-dependent tests.💡 Suggested conditional compilation
use std::fs::{create_dir_all, File}; -use std::os::unix::fs::symlink; use std::path::PathBuf; use tempfile::tempdir; + +#[cfg(unix)] +use std::os::unix::fs::symlink;Then wrap symlink-dependent tests:
#[cfg(unix)] #[test] fn ensure_no_symlinks_detects_parent_and_self_symlink() { // ... }
107-115: Weak assertion doesn't verify normalization.The test comment states "backslash should be normalized to forward slash" but the assertion accepts either form. If the
norm()function inpath.rsnormalizes backslashes to forward slashes, the assertion should verify this behavior specifically.💡 Stronger assertion
- // backslash should be normalized to forward slash somewhere in the output - assert!(items - .iter() - .any(|s| s.contains("dir/") || s.contains("dir\\"))); + // backslash should be normalized to forward slash + assert!( + items.iter().any(|s| s.contains("dir/")), + "expected backslash to be normalized to forward slash" + ); + assert!( + !items.iter().any(|s| s.contains("\\")), + "backslashes should not remain after normalization" + );packages/ui/api/stellar/src/environment/zip_folder.rs (3)
53-57: Redundant path extraction.
validate_entry_metadata(line 53) already extracts and validates the entry path as aString, butnormalize_entry_path(line 55) extracts it again as aPathBuf. Thenvalidate_and_register_entry(line 57) usesvalidated_entry(the String). Consider consolidating to avoid redundantenclosed_name()calls.♻️ Consolidate path extraction
Consider having
validate_entry_metadatareturn both the validated string path and the normalizedPathBuf, or restructure to callenclosed_name()only once per entry.
117-131: Consider making limits configurable.

The `ZipEntryLimits::rust_env()` hardcodes restrictive limits (100KB total, 50KB per file). While appropriate for scaffold templates, consider accepting limits as a parameter to `unzip_in_temporary_folder` or `secure_zip_extract` for flexibility if requirements change.
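A sketch of what caller-supplied limits could look like; the field names and defaults below mirror the 100KB/50KB figures above but are assumptions about the real `ZipEntryLimits` struct, and the extraction function is a stub:

// Hypothetical shape for caller-supplied extraction limits; the real struct
// in zip_folder.rs may differ.
#[derive(Clone, Copy)]
pub struct ZipEntryLimits {
    pub max_total_bytes: u64,
    pub max_file_bytes: u64,
}

impl ZipEntryLimits {
    // Current scaffold-template defaults (100 KB total, 50 KB per file).
    pub fn rust_env() -> Self {
        Self {
            max_total_bytes: 100 * 1024,
            max_file_bytes: 50 * 1024,
        }
    }
}

// Extraction entry points would take the limits explicitly; existing call
// sites keep today's behavior by passing ZipEntryLimits::rust_env().
pub fn unzip_in_temporary_folder_with_limits(
    zip_bytes: Vec<u8>,
    expected_files: &[&str],
    limits: ZipEntryLimits,
) -> std::io::Result<()> {
    // ... validate each entry against `limits` during secure extraction ...
    let _ = (zip_bytes, expected_files, limits);
    Ok(())
}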
221-224: Silent error suppression in directory walk.
filter_map(Result::ok)silently ignoresWalkDirerrors (e.g., permission denied, broken symlinks). Consider logging or propagating these errors to avoid silently producing incomplete archives.💡 Handle WalkDir errors explicitly
- for entry in WalkDir::new(zip_path).into_iter().filter_map(Result::ok) { + for entry_result in WalkDir::new(zip_path) { + let entry = entry_result.map_err(|e| to_zip_io_error(e))?; let path = entry.path();packages/ui/api/stellar/src/environment/mod.rs (1)
1-4: Consider explicit re-exports for clearer API surface.Glob re-exports (
pub use zip_folder::*) work but can make the public API harder to track. For library-like modules, consider explicit re-exports:pub use zip_folder::{unzip_in_temporary_folder, zip_directory, ZipEntryLimits, expected_entry_count}; pub use scaffold_upgrade::run_scaffold_upgrade_command;This is a minor stylistic preference; the current approach is functional.
packages/ui/api/stellar/tests/tests_paths.rs (1)
1-14: Explicit#[path]attributes are unnecessary for standard test module layout.When test modules reside in
tests/<module_name>/mod.rs, Cargo discovers them automatically without explicit path annotations. The#[path]attributes add verbosity without benefit here.Consider simplifying to:
mod config; mod controllers; mod environment; mod routes; mod utils;This works if each subdirectory contains a
mod.rs. However, if you're intentionally consolidating multiple test modules into a single integration test binary for faster compilation, the current approach is acceptable.packages/ui/api/stellar/src/lib.rs (1)
1-17: Clean module structure with helpful documentation.Minor grammatical issue in the doc comment on line 9:
📝 Suggested fix
-//! * `environment` - The logic for create, upgrading and zipping the contract environment +//! * `environment` - Logic for creating, upgrading, and zipping the contract environmentpackages/ui/api/stellar/tests/routes/routes.rs (1)
4-10: Remove the redundantassert!(true).The
assert!(true)on line 9 provides no value—if the code reaches that point without panicking, the test already passes. Consider removing it or replacing with a more meaningful assertion if one exists.♻️ Suggested simplification
#[actix_rt::test] async fn ai_routes_configure_executes() { - // Simply configure the app with our routes to exercise `routes::configure_routes`. - let _app = test::init_service(App::new().configure(|cfg| configure_routes(cfg))).await; - // If configure runs without panic we've executed the code paths in `routes/mod.rs`. - assert!(true); + // If configure runs without panic we've executed the code paths in `routes/mod.rs`. + let _app = test::init_service(App::new().configure(configure_routes)).await; }packages/ui/api/stellar/src/routes/upgrade_scaffold.rs (1)
7-11: Consider renamingdownload_scaffold_routeto better reflect its purpose.The function name suggests a download operation, but it actually transforms/upgrades a contract to a scaffold. A name like
upgrade_scaffold_routeorhandle_upgrade_scaffoldwould better match the route path and operation.♻️ Suggested rename
#[post("/upgrade-scaffold")] -async fn download_scaffold_route(req: web::Bytes) -> Result<HttpResponse, actix_web::Error> { +async fn upgrade_scaffold_route(req: web::Bytes) -> Result<HttpResponse, actix_web::Error> { if ZipArchive::new(Cursor::new(&req)).is_err() { return Ok(HttpResponse::UnsupportedMediaType().body("invalid ZIP archive")); }And update the service registration:
pub fn init(cfg: &mut web::ServiceConfig) { - cfg.service(download_scaffold_route); + cfg.service(upgrade_scaffold_route); }packages/ui/api/stellar/Dockerfile.dev (1)
1-1: Consider pinning a specific Rust version for reproducible builds.Using
rust:latestcan lead to inconsistent builds over time as the Rust version changes. For development this may be acceptable, but consider pinning to a specific version (e.g.,rust:1.84) if build reproducibility matters.packages/ui/api/docker-compose.yaml (2)
7-12: Consider if thestellar-tmpvolume is necessary.The
/tmpmount to a named volume (stellar-tmp) persists temporary files across container restarts. This could lead to stale temp files accumulating or unexpected behavior if the application expects/tmpto be ephemeral. If the intent is to preserve temp files for debugging, this is fine; otherwise, consider removing this volume to use the container's ephemeral/tmp.
1-50: Consider adding healthchecks for service orchestration.For a more robust local development experience, consider adding healthchecks to both services. This helps with service readiness detection when using
docker-compose up --waitor when other services depend on these being healthy.💡 Example healthcheck for stellar-api
healthcheck: test: ["CMD", "curl", "-f", "http://localhost:8888/health"] interval: 10s timeout: 5s retries: 3 start_period: 30spackages/ui/api/stellar/tests/config/server.rs (2)
17-20: Strengthen the default port assertion.The assertion
assert!(cfg.port > 0)is weak. Based on the implementation inserver.rs, the default port is8888. Assert the exact expected value:let cfg = ServerConfig::from_environment_variables(); assert_eq!(cfg.host, "0.0.0.0"); - // default port is parsed or fallback to 8888 in this crate - assert!(cfg.port > 0); + assert_eq!(cfg.port, 8888); + assert_eq!(cfg.rate_limit_requests_per_second, 100); + assert_eq!(cfg.wizard_origin, "https://wizard.openzeppelin.com");This provides better coverage of all default values.
34-54: Consider extracting repetitive restoration logic.The environment variable save/restore pattern is repeated for each variable. A helper function or macro would reduce duplication:
♻️ Suggested refactor using a helper
fn with_env_vars<F, T>(vars: &[(&str, Option<&str>)], f: F) -> T where F: FnOnce() -> T, { let saved: Vec<_> = vars.iter() .map(|(k, _)| (*k, env::var_os(k))) .collect(); for (k, v) in vars { match v { Some(val) => env::set_var(k, val), None => env::remove_var(k), } } let result = f(); for (k, old) in saved { match old { Some(v) => env::set_var(k, v), None => env::remove_var(k), } } result }packages/ui/api/stellar/tests/routes/upgrade_scaffold.rs (3)
15-24: Consider extracting duplicatemake_fake_stellarhelper to shared test utilities.This helper function is duplicated across multiple test files:
tests/controllers/upgrade_scaffold.rs(lines 25-38)tests/environment/scaffold_upgrade.rs(lines 18-30)- This file (lines 15-24)
Extract to a shared test utilities module to maintain DRY principles:
// tests/common/mod.rs or tests/helpers.rs #[cfg(unix)] pub fn make_fake_stellar(dir: &Path, exit_code: i32, stderr: Option<&str>) -> PathBuf { // unified implementation with optional stderr support }
70-83: PATH restoration is not panic-safe.If the test panics or fails before reaching the restoration code (lines 81-83), the PATH environment variable remains modified, potentially affecting subsequent tests.
Use an RAII guard pattern to ensure cleanup:
♻️ Suggested RAII guard
struct EnvGuard { key: &'static str, old_value: Option<OsString>, } impl EnvGuard { fn new(key: &'static str, new_value: &str) -> Self { let old_value = env::var_os(key); env::set_var(key, new_value); Self { key, old_value } } } impl Drop for EnvGuard { fn drop(&mut self) { match &self.old_value { Some(v) => env::set_var(self.key, v), None => env::remove_var(self.key), } } } // Usage: let _guard = EnvGuard::new("PATH", &newp); // PATH automatically restored when _guard goes out of scope
135-138: Consider adding more assertions for the success case.The success test only verifies status 200 and presence of
content-typeheader. Consider also validating:
- The content-type is specifically
application/zip- The response body is a valid ZIP archive
assert_eq!(resp.status().as_u16(), 200); let headers = resp.headers(); - assert!(headers.get("content-type").is_some()); + assert_eq!( + headers.get("content-type").map(|v| v.to_str().unwrap()), + Some("application/zip") + ); + // Optionally verify response body is valid ZIP + let body = test::read_body(resp).await; + assert!(body.starts_with(&[0x50, 0x4B])); // ZIP magic bytes.github/workflows/docker-prod.yaml (2)
18-20: Add concurrency control to prevent overlapping deployments.Without concurrency settings, rapid pushes to master could trigger multiple simultaneous deployments, potentially causing race conditions or deployment failures.
permissions: contents: read +concurrency: + group: prod-deploy-${{ github.ref }} + cancel-in-progress: false + jobs:Using
cancel-in-progress: falseensures deployments complete in order rather than being cancelled.
72-80: Consider adding environment protection for production deployments.The deploy job could benefit from GitHub environment protection rules for additional safeguards:
deploy: runs-on: ubuntu-latest needs: build-and-push + environment: + name: production env:This enables features like required reviewers, wait timers, and deployment branch restrictions at the repository settings level.
.github/workflows/docker-stg.yaml (3)
44-45: Pin thedocker/setup-buildx-actionversion with a commit hash.Other actions in this workflow are pinned to commit hashes for supply chain security (e.g.,
actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683). Thedocker/setup-buildx-actionshould follow the same pattern.Suggested fix
- name: Set up Docker Buildx - uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 + uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
24-29: Review permissions scope for ECR-only workflow.The
packages: writepermission is typically needed for GitHub Container Registry (ghcr.io), not Amazon ECR. Since this workflow pushes to ECR using AWS credentials, consider removing unused permissions to follow the principle of least privilege.Similarly, verify if
attestations: writeandsecurity-events: writeare actively used by subsequent steps.
99-101: Consider adding deployment verification.The deployment triggers a rollout but doesn't wait for it to complete. For better CI visibility, consider adding a wait step to confirm the service reaches steady state.
Optional: Add wait for service stability
- name: AWS ECS force new deployment run: | aws ecs update-service --cluster $ECS_CLUSTER --service $ECS_SERVICE --force-new-deployment --region $AWS_REGION + + - name: Wait for service stability + run: | + aws ecs wait services-stable --cluster $ECS_CLUSTER --services $ECS_SERVICE --region $AWS_REGIONpackages/ui/api/stellar/src/config/server.rs (1)
26-33: Consider logging warnings on parse failures.Silent fallback with
parse().unwrap_or()can mask configuration errors. Consider logging a warning when environment variables are present but fail to parse, to aid debugging.Optional: Add warning logging for parse failures
+use log::warn; + impl ServerConfig { pub fn from_environment_variables() -> Self { Self { host: env::var("HOST").unwrap_or_else(|_| "0.0.0.0".to_string()), - port: env::var("APP_PORT") - .unwrap_or_else(|_| "8888".to_string()) - .parse() - .unwrap_or(8888), + port: env::var("APP_PORT") + .unwrap_or_else(|_| "8888".to_string()) + .parse() + .unwrap_or_else(|e| { + warn!("Failed to parse APP_PORT, using default 8888: {e}"); + 8888 + }), // Similar pattern for rate_limit_requests_per_second...packages/ui/api/stellar/src/controllers/upgrade_scaffold.rs (1)
5-24: Consider avoiding the `.to_vec()` copy.

`rust_contract_zip.to_vec()` creates a full copy of the payload. If `unzip_in_temporary_folder` can accept `impl AsRef<[u8]>` or `&[u8]`, you could pass `&rust_contract_zip[..]` directly to avoid the allocation.
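A sketch of the looser signature, assuming the function currently takes an owned `Vec<u8>`; the body is a stub and the second parameter is assumed to mirror the expected-files list used elsewhere in this review:

use std::io::Cursor;

// Accept any byte source instead of an owned Vec<u8>, so the handler can
// pass `&rust_contract_zip[..]` (or the web::Bytes itself) without cloning.
pub fn unzip_in_temporary_folder(
    zip_bytes: impl AsRef<[u8]>,
    expected_files: &[&str],
) -> std::io::Result<()> {
    let reader = Cursor::new(zip_bytes.as_ref());
    // ... open the archive from `reader` and extract it securely ...
    let _ = (reader, expected_files);
    Ok(())
}

// Handler side, no .to_vec() needed:
// let dir = unzip_in_temporary_folder(&rust_contract_zip[..], &contract_zipped_files)?;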
packages/ui/api/stellar/tests/environment/zip_folder.rs (2)

116-128: Misleading test name.

`ai_test_fails_when_entry_count_mismatch` is more accurately named but still has an `ai_test_` prefix inconsistent with later tests that use an `ai_` prefix. The test logic is correct.
377-411: Symlink test has overly permissive success condition.The test accepts both rejection (
"Symlink entries are not allowed") and successful extraction as valid outcomes. This makes the test less effective at validating symlink rejection behavior. If symlink rejection is a security requirement, theOkbranch should fail or be documented as expected only when the zip library doesn't preserve symlink metadata.Consider tightening the assertion
If symlink rejection is mandatory, consider:
match res { Err(ZipError::UnsupportedArchive(msg)) => { - // Some zip writers do not preserve symlink metadata; accept the - // explicit symlink rejection or, in the absence of symlink metadata, - // allow successful extraction (the entry will be treated as a regular file). - assert!(msg == "Symlink entries are not allowed" || msg == "Unexpected zip content"); + assert_eq!(msg, "Symlink entries are not allowed"); } - Ok(extracted) => { - // If the archive didn't mark the entry as a symlink, ensure the file - // was extracted with the expected contents. - let content = - std::fs::read_to_string(extracted.path().join("link")).expect("read link"); - assert_eq!(content, "target"); - } + Ok(_) => panic!("expected symlink entry to be rejected"), Err(e) => panic!("expected symlink entry error or successful extraction, got {e:?}"), }packages/ui/api/stellar/tests/controllers/upgrade_scaffold.rs (4)
1-6: Organize imports at the top of the file.Imports are split between lines 1-6 and 16-24 with the first test function in between. This is unconventional and reduces readability.
Suggested reorganization
Move all imports to the top of the file:
use actix_web::web; use serial_test::serial; use std::env; use std::fs::write; use std::io::Cursor; use std::io::Write as IoWrite; use stellar_api::controllers::upgrade_to_scaffold; use stellar_api::environment::{ expected_entry_count, run_scaffold_upgrade_command, unzip_in_temporary_folder, }; use tempfile::tempdir; use zip::write::FileOptions; use zip::CompressionMethod; use zip::ZipArchive; use zip::ZipWriter;Also applies to: 16-24
26-39: Duplicate helper function across test files.
make_fake_stellaris duplicated inpackages/ui/api/stellar/tests/routes/upgrade_scaffold.rs(lines 14-23 per relevant snippets) andpackages/ui/api/stellar/tests/environment/scaffold_upgrade.rs(lines 19-31). Consider extracting to a shared test utilities module.
83-101: Remove debugeprintln!statements.These debug statements (lines 94-100, 110-111, 118-126, 134-136) appear to be leftover from development/debugging. They add noise to test output and should be removed for production test code, or converted to conditional debug output if needed for CI troubleshooting.
Lines to remove
- // debug: inspect zip archive entries vs expected count - let contract_zipped_files = [ - "contracts/*/src/contract.rs", - ... - ]; - let cursor = Cursor::new(bytes.clone()); - if let Ok(archive) = ZipArchive::new(cursor) { - eprintln!("archive.len() = {}", archive.len()); - } else { - eprintln!("failed to open archive for debugging"); - } - eprintln!( - "expected_entry_count = {}", - expected_entry_count(&contract_zipped_files) - ); ... - eprintln!("TEST PATH={}", env::var("PATH").unwrap_or_default()); - eprintln!("stellar present: {}", bin.join("stellar").exists()); ... - // For debugging: unzip and run the scaffold command directly to see stderr - match unzip_in_temporary_folder(good.to_vec(), &contract_zipped_files) { - Ok(dir) => { - eprintln!("extracted to: {}", dir.path().display()); - let rc = run_scaffold_upgrade_command(dir.path()); - eprintln!("run_scaffold_upgrade_command returned: {:?}", rc); - let _ = dir.close(); - } - Err(e) => { - eprintln!("unzip error: {:?}", e); - } - } ... - if let Err(e) = &res { - eprintln!("upgrade_to_scaffold error: {:?}", e); - }Also applies to: 110-111, 115-126, 134-136
41-141: Consider extracting ZIP creation to a helper function.The ZIP archive creation logic (lines 50-81 and 154-185) is duplicated between
ai_upgrade_to_scaffold_success_returns_okandai_upgrade_to_scaffold_internal_error_returns_err. A helper function would reduce duplication and maintenance burden.Possible helper
fn create_valid_scaffold_zip() -> Vec<u8> { let mut bytes = Vec::new(); let cursor = Cursor::new(&mut bytes); let mut writer = ZipWriter::new(cursor); let options = FileOptions::<()>::default().compression_method(CompressionMethod::Deflated); writer.add_directory("contracts/", options).unwrap(); writer.add_directory("contracts/proj/", options).unwrap(); writer.add_directory("contracts/proj/src/", options).unwrap(); writer.start_file("contracts/proj/src/contract.rs", options).unwrap(); writer.write_all(b"c").unwrap(); // ... rest of entries writer.finish().unwrap(); bytes }packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs (2)
17-17: Redundant synchronization withPATH_LOCKmutex.The
PATH_LOCKmutex is used alongside#[serial]attribute, which already serializes test execution. The mutex adds unnecessary complexity. Consider removing the mutex and relying solely on#[serial].Suggested simplification
-static PATH_LOCK: Mutex<()> = Mutex::new(()); ... fn run_scaffold_upgrade_command_success() { ... - let _guard = PATH_LOCK.lock().unwrap(); let old_path = env::var_os("PATH"); ... // restore PATH if let Some(v) = old_path { env::set_var("PATH", v); } - drop(_guard); ... }Also applies to: 91-109, 126-143
19-31:make_fake_stellarhelper duplicated across test files.This helper with stderr support is more capable than the versions in
tests/controllers/upgrade_scaffold.rsandtests/routes/upgrade_scaffold.rs. Consider consolidating into a shared test utilities module and using this version as the canonical implementation.
| "[rust]": { | ||
| "editor.defaultFormatter": "rust-lang.rust-analyzer" | ||
| }, | ||
| "rust-analyzer.linkedProjects": [ | ||
| "packages/ui/stellar/Cargo.toml" | ||
| ], | ||
| "rust-analyzer.check.command": "clippy", | ||
| "rust-analyzer.checkOnSave": true |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the Cargo.toml path referenced in the settings exists
if [ -f "packages/ui/stellar/Cargo.toml" ]; then
echo "✓ Cargo.toml found at packages/ui/stellar/Cargo.toml"
head -10 packages/ui/stellar/Cargo.toml
else
echo "✗ Cargo.toml NOT found at packages/ui/stellar/Cargo.toml"
echo "Searching for Cargo.toml files in the repository:"
fd -t f "Cargo.toml"
fi

🏁 Script executed:

cat -n .vscode/example.settings.json | sed -n '8,15p'
Fix the linkedProjects path: it should point to packages/ui/api/stellar/Cargo.toml, not packages/ui/stellar/Cargo.toml.
The Cargo.toml file does not exist at the configured path. The actual location includes the api/ directory level. Update line 12 to use the correct path so rust-analyzer can properly locate and analyze the Stellar project.
🤖 Prompt for AI Agents
In @.vscode/example.settings.json around lines 8 - 15, Update the
"rust-analyzer.linkedProjects" entry (the linkedProjects array) to point to the
correct Cargo.toml for the Stellar project: replace the existing value
referencing packages/ui/stellar/Cargo.toml with
packages/ui/api/stellar/Cargo.toml so rust-analyzer can locate and analyze the
project.
import {
  contractExactRequiredKeys,
  contractExactRequiredKeys,
  type AiFunctionDefinition,
} from '../types/function-definition.ts';
Remove duplicate contractExactRequiredKeys import.
contractExactRequiredKeys appears twice in the import declaration (lines 2 and 3), which causes a parse error.
🐛 Proposed fix
import {
contractExactRequiredKeys,
- contractExactRequiredKeys,
type AiFunctionDefinition,
} from '../types/function-definition.ts';
🧰 Tools
🪛 Biome (2.1.2)
[error] 3-3: Declarations inside of a import declaration may not have duplicates
a second declaration of contractExactRequiredKeys is not allowed
contractExactRequiredKeys is first declared here
(parse)
🤖 Prompt for AI Agents
In `@packages/ui/api/ai/ai-assistant/function-definitions/cairo.ts` around lines 1
- 5, The import list includes a duplicate symbol contractExactRequiredKeys which
causes a parse error; remove the duplicate so the import only declares
contractExactRequiredKeys once alongside type AiFunctionDefinition, i.e., update
the import statement that currently imports contractExactRequiredKeys twice to
import it a single time.
COPY . /app/ui/api/ai
COPY ../../common /app/common
🧩 Analysis chain
🏁 Script executed:
# Find docker-compose files
fd -t f "docker-compose" -e "yaml" -e "yml"Repository: OpenZeppelin/contracts-wizard
Length of output: 108
🏁 Script executed:
# Check the directory structure to understand the path
find packages -type d -name "common" | head -20Repository: OpenZeppelin/contracts-wizard
Length of output: 111
🏁 Script executed:
# Look at the Dockerfile.dev to understand context
cat -n packages/ui/api/ai/Dockerfile.dev
🏁 Script executed:
# Check the docker-compose configuration for build context
cat -n packages/ui/api/docker-compose.yaml
🏁 Script executed:
# Check if there's a common directory at root level or any other location
find . -maxdepth 2 -type d -name "common"
🏁 Script executed:
# Verify the actual paths by checking what exists
ls -la packages/common 2>/dev/null | head -5 && echo "--- packages/common exists ---"
ls -la common 2>/dev/null | head -5 && echo "--- common exists at root ---"
🏁 Script executed:
# Verify exactly what exists at repository root level
ls -ld common 2>&1
# And verify packages/common
ls -ld packages/common 2>&1
Fix the COPY path for packages/common.
The COPY ../../common /app/common uses a path relative to the build context (repository root), not the Dockerfile location. From the repository root, ../../common reaches outside the repository and will fail the Docker build. The correct path is COPY packages/common /app/common.
Current problematic code
COPY ../../common /app/common
🤖 Prompt for AI Agents
In `@packages/ui/api/ai/Dockerfile.dev` around lines 4 - 5, Update the Dockerfile
COPY that references the common package so it uses the build-context relative
path instead of navigating up: replace the current COPY instruction that uses
../../common with one that copies packages/common into /app/common (i.e., adjust
the COPY source to "packages/common") so the build context is valid and the
Docker build succeeds.
FROM rust:bookworm AS runtime
ARG WIZARD_ORIGIN
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /usr/local/bin

RUN apt-get purge -y --auto-remove \
    build-essential pkg-config libssl-dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates libssl3 libdbus-1-3 libudev1 libusb-1.0-0 \
    && rm -rf /var/lib/apt/lists/*
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
cat packages/ui/api/stellar/Dockerfile.prod
🏁 Script executed:
git ls-files | head -20
🏁 Script executed:
fd -t f "Dockerfile.prod" 2>/dev/nullRepository: OpenZeppelin/contracts-wizard
Length of output: 112
🏁 Script executed:
cat -n packages/ui/api/stellar/Dockerfile.prod
🏁 Script executed:
# Check the actual dependencies of the copied binaries
cd packages/ui/api/stellar && \
grep -A20 "stellar-cli" Cargo.toml || echo "stellar-cli not in Cargo.toml"
🏁 Script executed:
# Look for any Cargo.lock or metadata about these CLIs
fd -t f "Cargo.lock" packages/ui/api/stellar -exec head -30 {} \;Repository: OpenZeppelin/contracts-wizard
Length of output: 303
🏁 Script executed:
# Find Cargo.lock in stellar directory
find packages/ui/api/stellar -name "Cargo.lock" -type f
🏁 Script executed:
# Check if stellar and stellar-scaffold are statically compiled binaries
file /usr/local/cargo/bin/stellar 2>/dev/null || echo "Binary check would be done at runtime"
# Let's check the packages directory structure
ls -la packages/ui/api/stellar/
🏁 Script executed:
cat -n packages/ui/api/stellar/Cargo.toml
🏁 Script executed:
# Check what dependencies exist in Cargo.lock for stellar and stellar-scaffold CLIs
grep -A2 "name = \"stellar" packages/ui/api/stellar/Cargo.lock | head -30
🏁 Script executed:
# Check if stellar-cli and stellar-scaffold-cli are bundled or have runtime Rust deps
grep -i "stellar-cli\|stellar-scaffold" packages/ui/api/stellar/Cargo.lock | head -20
🏁 Script executed:
# Understand what exactly gets copied to runtime
grep "COPY.*from=builder" packages/ui/api/stellar/Dockerfile.prodRepository: OpenZeppelin/contracts-wizard
Length of output: 337
Use debian:bookworm-slim as the runtime base image to significantly reduce image size.
The runtime stage only executes pre-compiled binaries (stellar-api, stellar, stellar-scaffold) copied from the builder. The rust:bookworm base image (~1.5GB) includes the full Rust toolchain, which is unnecessary at runtime. Switching to debian:bookworm-slim (~80MB) achieves substantial size reduction without any functional impact.
Also, remove the apt-get purge on lines 39-40—those packages (build-essential, pkg-config, libssl-dev) are not installed in the fresh runtime base image, so the command has no effect.
Additionally, lines 52-54 set CARGO_HOME, RUSTUP_HOME, and PATH variables pointing to Rust toolchain paths that don't exist in the runtime image. These should be removed since the pre-compiled binaries have no Rust runtime dependencies.
Suggested optimization
-FROM rust:bookworm AS runtime
+FROM debian:bookworm-slim AS runtime
ARG WIZARD_ORIGIN
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /usr/local/bin
-RUN apt-get purge -y --auto-remove \
- build-essential pkg-config libssl-dev
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 libdbus-1-3 libudev1 libusb-1.0-0 \
   && rm -rf /var/lib/apt/lists/*

Also remove lines 52-54:
-ENV CARGO_HOME=/home/appuser/.cargo
-ENV RUSTUP_HOME=/home/appuser/.rustup
-ENV PATH=/usr/local/cargo/bin:/usr/local/rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin:$PATH

🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/Dockerfile.prod` around lines 34 - 43, Change the
runtime stage base image from "FROM rust:bookworm AS runtime" to "FROM
debian:bookworm-slim AS runtime" to dramatically reduce image size since only
precompiled binaries (stellar-api, stellar, stellar-scaffold) are run; remove
the ineffective "apt-get purge -y --auto-remove build-essential pkg-config
libssl-dev" line because those packages are not present in the slim base; keep
the necessary "apt-get update && apt-get install -y --no-install-recommends ..."
install of runtime libs (ca-certificates, libssl3, libdbus-1-3, libudev1,
libusb-1.0-0) and the cleanup of /var/lib/apt/lists/*; and delete the
unnecessary Rust environment variables (CARGO_HOME, RUSTUP_HOME, and PATH
entries pointing to the toolchain) since the runtime image does not include the
Rust toolchain and only runs prebuilt binaries.
# Basic formatting
max_width = 100
tab_spaces = 4
hard_tabs = false
edition = "2024"
🧩 Analysis chain
🌐 Web query:
Rust 2024 edition rustfmt support
💡 Result:
Short answer: rustfmt supports the Rust 2024 "style edition". You can opt in per-project or per-invocation via Cargo.toml, rustfmt.toml, or CLI flags.
Key points and usage:
- By default cargo fmt/rustfmt uses the crate's Rust edition; rustfmt now has a separate style edition (2024). [1]
- Set style edition in rustfmt.toml: style_edition = "2024". [1]
- CLI options: rustfmt --edition 2024 (parsing) or --style-edition 2024 (style only); cargo fmt uses Cargo.toml's edition. [1][2]
- Running cargo fmt or rustfmt with the 2024 style edition will migrate formatting to the 2024 style. [1]
- Notable 2024 rustfmt changes include new sorting (version-sort) and other formatting fixes; see the edition guide and RFC for design/details. [3][4]
Sources: Rust Edition Guide (rustfmt/style edition) [1]; RFC 3338 (style-evolution & how rustfmt gets editions) [2]; Rust 1.85 / Rust 2024 release notes and rustfmt changes [3]; rustfmt version-sorting details (2024) [4].
Change edition = "2024" to style_edition = "2024".
The correct rustfmt configuration key for the Rust 2024 style edition is style_edition, not edition. The edition key specifies the Rust parsing edition, while style_edition controls the formatting style applied by rustfmt.
Corrected configuration
# Basic formatting
max_width = 100
tab_spaces = 4
hard_tabs = false
-edition = "2024"
+style_edition = "2024"
🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/rustfmt.toml` around lines 1 - 5, The rustfmt config
uses the wrong key: replace the TOML key name `edition = "2024"` with
`style_edition = "2024"` in the rustfmt.toml so rustfmt applies the Rust 2024
formatting style (leave any `max_width`, `tab_spaces`, and `hard_tabs` entries
unchanged).
use crate::utils::{ensure_no_symlinks, to_io_error};
use std::fs::{DirBuilder, OpenOptions};
use std::io::{copy, Error as IoError, Read, Take};
use std::os::unix::fs::DirBuilderExt;
Unconditional Unix import breaks non-Unix compilation.
std::os::unix::fs::DirBuilderExt is Unix-only and will fail to compile on Windows. Move this import inside the #[cfg(unix)] block.
🐛 Proposed fix
use crate::utils::{ensure_no_symlinks, to_io_error};
use std::fs::{DirBuilder, OpenOptions};
use std::io::{copy, Error as IoError, Read, Take};
-use std::os::unix::fs::DirBuilderExt;
use std::path::Path;🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/src/utils/dir.rs` at line 4, The unconditional import
of std::os::unix::fs::DirBuilderExt breaks non-Unix builds; wrap that use in a
cfg attribute (e.g. add #[cfg(unix)] use std::os::unix::fs::DirBuilderExt;) and
ensure any code that relies on DirBuilderExt is also guarded by #[cfg(unix)] so
Windows builds don’t reference the Unix-only trait.
    #[cfg(unix)]
    {
        use DirBuilderExt;
        let mut b = DirBuilder::new();
        b.recursive(true)
            .mode(0o700)
            .create(path)
            .map_err(to_io_error)
    }
    #[cfg(not(unix))]
    {
        fs::create_dir_all(path).map_err(utils::to_io_error::to_io_error)
    }
Fix compilation errors in create_dir_safe.
Several issues in this function:
- Line 11: Redundant `use DirBuilderExt;` since it's already imported (though that import itself needs to move here)
- Line 20: `fs::create_dir_all`: the `fs` module is not in scope; should be `std::fs::create_dir_all`
- Line 20: `utils::to_io_error::to_io_error` is an invalid path; should be just `to_io_error`
🐛 Proposed fix
pub fn create_dir_safe(path: &Path) -> Result<(), IoError> {
// 0o700 on Unix. On non-Unix, this compiles but ignores mode.
#[cfg(unix)]
{
- use DirBuilderExt;
+ use std::os::unix::fs::DirBuilderExt;
let mut b = DirBuilder::new();
b.recursive(true)
.mode(0o700)
.create(path)
.map_err(to_io_error)
}
#[cfg(not(unix))]
{
- fs::create_dir_all(path).map_err(utils::to_io_error::to_io_error)
+ std::fs::create_dir_all(path).map_err(to_io_error)
}
}🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/src/utils/dir.rs` around lines 9 - 21, The
create_dir_safe function has compilation issues: move the DirBuilderExt import
into the #[cfg(unix)] block and remove the redundant standalone `use
DirBuilderExt;`, then in the unix branch call DirBuilder::new(), set recursive
and mode, and map errors with the local to_io_error function; in the non-unix
branch replace `fs::create_dir_all` with `std::fs::create_dir_all` and use
`to_io_error` (not `utils::to_io_error::to_io_error`) for map_err so both
branches compile and reference the correct symbols (DirBuilderExt,
DirBuilder::new, to_io_error, std::fs::create_dir_all) in create_dir_safe.
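Putting the two fixes above together, a minimal self-contained sketch of how the corrected helper could look (the `to_io_error` shown here is only a stand-in with the same shape as the helper in `crate::utils`, so the snippet compiles on its own):

```rust
use std::fs::DirBuilder;
use std::io::Error as IoError;
use std::path::Path;

// Stand-in for the crate's `to_io_error` helper, included only so this
// sketch is self-contained.
fn to_io_error(e: IoError) -> IoError {
    e
}

/// Create `path` (and any missing parents), restricted to mode 0o700 on Unix.
/// On non-Unix targets the mode is simply not applied.
pub fn create_dir_safe(path: &Path) -> Result<(), IoError> {
    #[cfg(unix)]
    {
        use std::os::unix::fs::DirBuilderExt;
        let mut b = DirBuilder::new();
        b.recursive(true)
            .mode(0o700)
            .create(path)
            .map_err(to_io_error)
    }
    #[cfg(not(unix))]
    {
        std::fs::create_dir_all(path).map_err(to_io_error)
    }
}
```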
use std::fs::{create_dir_all, write, File};
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;
use tempfile::tempdir;
Unix-only import will fail on non-Unix platforms.
use std::os::unix::fs::PermissionsExt; at line 3 is unconditionally imported but only valid on Unix systems. This will cause a compilation error on Windows.
Proposed fix
Either add a module-level cfg guard:
+#[cfg(unix)]
+mod tests {
use std::fs::{create_dir_all, write, File};
use std::io::Write;
+#[cfg(unix)]
use std::os::unix::fs::PermissionsExt;
...
+}
Or guard the import and usage:
use std::fs::{create_dir_all, write, File};
use std::io::Write;
-use std::os::unix::fs::PermissionsExt;
use std::path::Path;
...
fn make_fake_stellar(dir: &Path, exit_code: i32, stderr: Option<&str>) -> std::path::PathBuf {
...
let mut perms = f.metadata().unwrap().permissions();
+ #[cfg(unix)]
+ {
+ use std::os::unix::fs::PermissionsExt;
perms.set_mode(0o755);
+ }
std::fs::set_permissions(&path, perms).unwrap();
path
}🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs` around lines 1
- 5, The unconditional import of std::os::unix::fs::PermissionsExt will fail on
non-Unix platforms; guard the import and any use of PermissionsExt with a
cfg(unix) (or move Unix-specific logic into a #[cfg(unix)] module) and provide a
Windows/portable path for the non-Unix case (e.g., skip set_mode or use
platform-neutral Permissions) so that scaffold_upgrade.rs compiles
cross-platform while keeping Unix-specific behavior only inside blocks
referencing PermissionsExt.
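For the portable variant the prompt describes, one shape it could take is keeping the chmod inside a `cfg(unix)` block of the test helper. The signature matches `make_fake_stellar` from the snippet above, but the script body, file name, and error handling here are illustrative assumptions rather than the PR's actual code:

```rust
use std::io::Write;
use std::path::{Path, PathBuf};

// Writes a tiny shell script that stands in for the `stellar` CLI in tests.
// Marking it executable is Unix-only; other platforms skip that step.
fn make_fake_stellar(dir: &Path, exit_code: i32, stderr: Option<&str>) -> PathBuf {
    let path = dir.join("stellar");
    let mut f = std::fs::File::create(&path).expect("create fake stellar");
    match stderr {
        Some(msg) => writeln!(f, "#!/bin/sh\necho \"{msg}\" >&2\nexit {exit_code}").unwrap(),
        None => writeln!(f, "#!/bin/sh\nexit {exit_code}").unwrap(),
    }

    #[cfg(unix)]
    {
        use std::os::unix::fs::PermissionsExt;
        let mut perms = f.metadata().expect("metadata").permissions();
        perms.set_mode(0o755);
        std::fs::set_permissions(&path, perms).expect("chmod +x");
    }

    path
}
```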
#[test]
#[serial]
fn ai_run_scaffold_upgrade_command_when_missing_fails() {
    // Ensure PATH does not include any stellar binary by setting to empty temp dir
    let tmp = tempdir().expect("tmp");
    let bin = tmp.path().join("bin");
    std::fs::create_dir_all(&bin).unwrap();

    let old = env::var_os("PATH");
    env::set_var("PATH", bin);

    let proj = tempdir().expect("proj");
    let res = run_scaffold_upgrade_command(proj.path());

    if let Some(v) = old {
        env::set_var("PATH", v);
    }

    // If `stellar` exists on the machine PATH this test can't guarantee a failure
    // (CI runners or dev machines may have it installed). Treat an Ok as a skip.
    if res.is_ok() {
        eprintln!("skipping test: 'stellar' exists on PATH");
        return;
    }

    assert!(res.is_err(), "expected missing stellar to return Err");
}
🧩 Analysis chain
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '148,174p'
🏁 Script executed:
find . -name "scaffold_upgrade.rs" -type f 2>/dev/null
🏁 Script executed:
git ls-files | grep -i scaffold_upgrade
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '148,174p'
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | head -30
🏁 Script executed:
rg -n "fn run_scaffold_upgrade_command" packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs -A 10
🏁 Script executed:
rg -n "PATH_LOCK" packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '85,110p'
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '148,175p'
🏁 Script executed:
rg -n "ai_run_scaffold_upgrade_command_when_missing_fails" packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs -B 2 -A 30
🏁 Script executed:
rg -n "use std::path" packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs -A 2 -B 2
🏁 Script executed:
python3 << 'EOF'
# Let's analyze the PATH behavior in Rust
# env::set_var("PATH", bin) where bin is PathBuf converts to OsStr
# The question is: does this properly isolate PATH?
# Looking at the test:
# 1. bin = /tmp/xxx/bin (empty directory)
# 2. env::set_var("PATH", bin) sets PATH to "/tmp/xxx/bin"
# 3. If stellar is at /usr/bin/stellar, it shouldn't be found
# 4. But the skip condition triggers if res.is_ok()
# This suggests either:
# A) PATH isolation doesn't work as expected
# B) The code doesn't use PATH to find stellar (uses absolute path or other method)
# C) There's a logical error in the test
print("Key observations:")
print("1. Test uses #[serial] but NOT PATH_LOCK (unlike other tests)")
print("2. Other tests that modify PATH use both #[serial] AND PATH_LOCK")
print("3. The skip condition suggests stellar might still be found despite PATH isolation")
print("4. This is the real issue - test reliability depends on environment")
EOF
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '176,200p'
🏁 Script executed:
rg -n "env::set_var.*PATH" packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs -B 3 -A 3
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '120,145p'
🏁 Script executed:
cat -n packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs | sed -n '1,20p'
🏁 Script executed:
python3 << 'EOF'
# Analyze the critical difference between the problematic test and the working ones
tests = {
"run_scaffold_upgrade_command_success": {
"line": 85,
"has_PATH_LOCK": True,
"has_serial": True,
"PATH_strategy": "PREPEND (format with :)",
"skip_condition": False
},
"run_scaffold_upgrade_command_failure_propagates_error": {
"line": 120,
"has_PATH_LOCK": True,
"has_serial": True,
"PATH_strategy": "PREPEND (format with :)",
"skip_condition": False
},
"ai_run_scaffold_upgrade_command_when_missing_fails": {
"line": 150,
"has_PATH_LOCK": False,
"has_serial": True,
"PATH_strategy": "REPLACE (direct assignment)",
"skip_condition": True
}
}
print("Critical finding:")
print("================")
print()
print("The problematic test is MISSING PATH_LOCK mutex synchronization!")
print()
for test_name, details in tests.items():
    print(f"{test_name}:")
    for key, value in details.items():
        print(f"  {key}: {value}")
    print()
EOF
Add PATH_LOCK synchronization to prevent race conditions.
The test ai_run_scaffold_upgrade_command_when_missing_fails modifies the global PATH environment variable but fails to acquire the PATH_LOCK mutex used by other PATH-modifying tests in this file (see run_scaffold_upgrade_command_success and run_scaffold_upgrade_command_failure_propagates_error). Without synchronization, this test can race with concurrent tests, causing unpredictable behavior.
Additionally, the skip condition (lines 168-171) indicates the test is unreliable: `stellar` can still be found despite PATH manipulation, suggesting insufficient isolation. The proper fix is to:
- Acquire `PATH_LOCK` before modifying PATH
- Remove the skip condition once proper synchronization ensures `stellar` is genuinely unavailable
Suggested fix
fn ai_run_scaffold_upgrade_command_when_missing_fails() {
// Ensure PATH does not include any stellar binary by setting to empty temp dir
let tmp = tempdir().expect("tmp");
let bin = tmp.path().join("bin");
std::fs::create_dir_all(&bin).unwrap();
+ let _guard = PATH_LOCK.lock().unwrap();
let old = env::var_os("PATH");
- env::set_var("PATH", bin);
+ env::set_var("PATH", &bin);
let proj = tempdir().expect("proj");
let res = run_scaffold_upgrade_command(proj.path());
if let Some(v) = old {
env::set_var("PATH", v);
}
+ drop(_guard);
- // If `stellar` exists on the machine PATH this test can't guarantee a failure
- // (CI runners or dev machines may have it installed). Treat an Ok as a skip.
- if res.is_ok() {
- eprintln!("skipping test: 'stellar' exists on PATH");
- return;
- }
-
assert!(res.is_err(), "expected missing stellar to return Err");
}🤖 Prompt for AI Agents
In `@packages/ui/api/stellar/tests/environment/scaffold_upgrade.rs` around lines
148 - 174, The test ai_run_scaffold_upgrade_command_when_missing_fails modifies
the global PATH without using the shared PATH_LOCK and then conditionally skips
the assertion; acquire PATH_LOCK at the start of the test (same mutex used by
run_scaffold_upgrade_command_success and
run_scaffold_upgrade_command_failure_propagates_error), set PATH while holding
the lock, call run_scaffold_upgrade_command, restore the original PATH and
release the lock in a finally-style scope so restoration always runs, and remove
the skip-on-Ok branch so the test reliably asserts that missing stellar returns
Err.
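A complementary way to get the "finally-style scope" the prompt mentions is an RAII guard that restores PATH when it is dropped, so the original value comes back even if the assertion panics. This is an illustrative sketch, not code from the PR: `PATH_LOCK` is assumed to be the file-level mutex the neighboring tests already share, and on the 2024 edition the `std::env::set_var`/`remove_var` calls would additionally need `unsafe` blocks.

```rust
use std::ffi::OsString;
use std::path::Path;
use std::sync::{Mutex, MutexGuard};

// Holds the shared lock for the lifetime of the override and restores the
// previous PATH on drop, even when the test body panics.
struct PathOverride<'a> {
    _lock: MutexGuard<'a, ()>,
    old: Option<OsString>,
}

impl Drop for PathOverride<'_> {
    fn drop(&mut self) {
        match self.old.take() {
            Some(v) => std::env::set_var("PATH", v),
            None => std::env::remove_var("PATH"),
        }
    }
}

fn override_path<'a>(lock: &'a Mutex<()>, new_path: &Path) -> PathOverride<'a> {
    let _lock = lock.lock().expect("PATH_LOCK poisoned");
    let old = std::env::var_os("PATH");
    std::env::set_var("PATH", new_path);
    PathOverride { _lock, old }
}
```

In the test, `let _path = override_path(&PATH_LOCK, &bin);` would replace the manual save/restore pair, and the skip branch can be dropped once the lock guarantees no concurrent test puts a real `stellar` back on PATH.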
Summary
Introduce a Docker/ECS-backed Stellar API pipeline with stricter CI, revamped AI backend layout, and UI hooks for downloading upgraded scaffolds from the new service.
Description
- `WIZARD_ORIGIN`, push to ECR, and trigger ECS rollouts using a shared AWS OIDC composite action. The main test workflow now exercises the Rust backend (with coverage uploaded to Codecov) and watches additional branches.
- `dev:api`, `dev:api:build`). The Netlify edge setup now scopes only the AI handler while Deno configs/import maps live under `packages/ui/api/ai`.
- `zipRustProjectBlob` helper enables the UI to upload scaffolds directly.
- `FARGATE_HOST`, and add a scaffold-download flow that streams results from the backend with user feedback. Added UX copy promoting the Builder UI.
- `.gitignore` grows cargo artifacts, `.prettierignore` covers Rust/Docker/YAML, CONTRIBUTING points to the backend, Codecov config prepared, and rollup config now exposes both API and Fargate hosts.
Motivation / Context
Needed to migrate away from Netlify functions toward a containerized Rust API that can run on ECS to build the scaffold project on the backend, enforce security controls (AWS OIDC, hardened runners, strict zip handling), and unblock new product flows like scaffold upgrades. Also improves developer ergonomics by consolidating the AI API in a container (allowing easier migration to ECS if needed).
Changes
Affected functions
- `zipRustProjectBlob` (packages/core/stellar/src/zip-rust.ts#L48-L51): exports a blob-friendly ZIP generator for scaffold uploads.
- `downloadScaffoldHandler` (packages/ui/src/stellar/App.svelte#L105-L144, L215-L235): now posts zipped contracts to the Fargate backend and streams the scaffold build result with state handling.
- `mergeAiAssistanceOptions` helper (packages/ui/src/common/Wiz.svelte#L11-L30): updated typing logic for safer option merging.
- `upgrade_scaffold`), environment guards (`zip_directory`, `unzip_in_temporary_folder`), and route initializers powering the ECS API surface.
Security Impact
✅ No new unresolved security risks.
Testing
- `cargo hack llvm-cov`, Codecov uploads, expanded branch triggers.
- `yarn dev:api:build` then `yarn dev` to ensure Dockerized API + UI launch successfully.
- `(Staging) Build and Deploy` workflow via `workflow_dispatch` using a feature branch; confirm ECS service rolls a new task.
Backward Compatibility
- `packages/ui/api/ai`; local dev now expects Docker; Dockerfile requires explicit `WIZARD_ORIGIN`. Update import paths and CI env vars accordingly.
- `API_HOST`, `FARGATE_HOST`, `ENV` to `.env` vars.
- `DOCKER_REGISTRY`, `ECS_*`, AWS roles/secrets in GitHub.
- `yarn dev:api` (Docker) instead of the previous Deno runner.