
chore: Update backend to use blockfrost instead of mumak #139

Merged
gonzalezzfelipe merged 1 commit into main from chore/blockfrost-instead-of-mumak
Mar 9, 2026

Conversation


gonzalezzfelipe (Contributor) commented Mar 4, 2026

Summary by CodeRabbit

  • Refactor
    • Backend data retrieval infrastructure updated to use Blockfrost blockchain API instead of traditional database queries.
    • Enhanced data processing and transformations to support blockchain data structures and on-chain information.


coderabbitai bot commented Mar 4, 2026

📝 Walkthrough

The application's data source transitions from PostgreSQL-backed SQL queries to Blockfrost API-driven on-chain data fetching. Dependencies shift from sqlx/database libraries to Blockfrost and Cardano-related crates, with core GraphQL resolvers refactored to fetch UTXOs via API and perform in-memory transformations into domain objects.

Changes

Dependency Configuration (backend/Cargo.toml)
Removed sqlx (postgres, tokio runtime) and num-traits; added blockfrost (1.2.1), blockfrost-openapi (0.1.83), and pallas (1.0.0-alpha.4 with cardano features); updated tokio (1.38.0 with multi-thread support); added serde_json.

Data Source & API Integration (backend/src/main.rs)
Major refactor replacing the SQL pool with a Blockfrost API client. Added BlockFrostSettings configuration, a fetch_utxos_by_policy helper, and parsing utilities (utxo_id, parse_quantity, datum extraction, distance calculation). Implemented TryFrom conversions for Ship, Pellet, Asteria, and Token from AddressUtxoContentInner. Reworked the GraphQL resolvers (objects_in_radius, leaderboard, token resolution) to call the Blockfrost API and transform results in memory. Removed sqlx::FromRow derives from the domain types.
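To make the "fetch via API, transform in memory" flow concrete, here is a minimal sketch of the radius filtering step. The Ship fields, the Chebyshev-style distance metric, and the function names are illustrative assumptions, not the repository's actual definitions:

```rust
// Hypothetical sketch of the in-memory objects_in_radius filter after the
// refactor: parsed domain objects are filtered by distance from a center.
#[derive(Debug, PartialEq)]
struct Ship {
    pos_x: i64,
    pos_y: i64,
}

// Chebyshev-style distance, chosen only for illustration; the real
// resolver may use a different metric.
fn distance(center: (i64, i64), point: (i64, i64)) -> i64 {
    (center.0 - point.0).abs().max((center.1 - point.1).abs())
}

fn objects_in_radius(ships: Vec<Ship>, center: (i64, i64), radius: i64) -> Vec<Ship> {
    ships
        .into_iter()
        .filter(|s| distance(center, (s.pos_x, s.pos_y)) <= radius)
        .collect()
}
```

The point of the design is that filtering happens after parsing UTXO datums, entirely in application memory, where SQL previously did the equivalent work in the database.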

Sequence Diagram

sequenceDiagram
    participant Client as GraphQL Client
    participant Resolver as Query Resolver
    participant Blockfrost as Blockfrost API
    participant Transform as Domain Transformer
    participant Response as GraphQL Response

    Client->>Resolver: Query objects_in_radius(lat, lon)
    Resolver->>Blockfrost: fetch_utxos_by_policy(policy_id)
    Blockfrost-->>Resolver: [AddressUtxoContentInner]
    
    loop For each UTXO
        Resolver->>Transform: TryFrom<AddressUtxoContentInner>
        Transform->>Transform: Parse datum, extract position
        Transform->>Transform: Calculate distance from center
        Transform-->>Resolver: Ship/Pellet/Asteria
    end
    
    Resolver->>Resolver: Filter by radius
    Resolver-->>Response: [Ship/Pellet/Asteria]
    Response-->>Client: GraphQL Response

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 From dusty pools to chains we leap,
Where Blockfrost secrets lie so deep!
Ships transform, Pellets align,
UTXOs parse in patterns divine. ✨
What SQL once guarded now dances free—
On-chain magic flows wild and spree!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 7.14%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)

Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title accurately describes the main change: replacing the database/SQL approach (mumak) with Blockfrost API integration across the backend architecture.


coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/Cargo.toml`:
- Line 16: The Cargo.toml currently pins pallas = "1.0.0-alpha.4" with unstable
features; change the dependency to the stable release pallas = "0.34.0" (remove
alpha and unstable features) unless you explicitly need alpha-only
functionality—if so, add a short note in the project README and a new module
(e.g., pallas_adapter or pallas_shim) that isolates all uses of pallas behind a
narrow API so future breakage is contained and document the exact alpha
requirement and compatibility guarantees; update Cargo.toml and note the adapter
module name (pallas_adapter) and README entry accordingly.

In `@backend/src/main.rs`:
- Around line 599-603: The token iteration performs an unbounded external call
per token; dedupe tokens by policy_id and cap the number of outbound calls to
prevent amplification. In the block that uses tokens and calls
fetch_utxos_by_policy (refer to the tokens variable and fetch_utxos_by_policy
call), first collect unique policy_ids (e.g., via a HashSet) then limit to a
safe MAX_TOKENS (choose a constant like MAX_TOKENS = 20), and only iterate over
the deduped, truncated list; if the original list exceeds MAX_TOKENS, either log
a warning or return an error indicating the request is too large. Ensure the
deduping and truncation happen before any network calls so fetch_utxos_by_policy
is only invoked for the bounded set.
- Around line 44-45: The current CORS headers use Access-Control-Allow-Origin: *
together with Access-Control-Allow-Credentials: true which is invalid; update
the logic around the response.set_header calls so that when you enable
credentials (Access-Control-Allow-Credentials via
response.set_header(Header::new("Access-Control-Allow-Credentials", "true"))),
you do not set a wildcard origin; instead read the incoming request Origin
header and echo that value into Access-Control-Allow-Origin (or set a specific
allowed origin string) using the same
response.set_header(Header::new("Access-Control-Allow-Origin", origin_value")),
or else remove/disable the credentials header; change the code around the
response.set_header calls to implement this conditional behavior.
- Around line 809-811: Replace the current fallback behavior on
std::env::var("BLOCKFROST_PROJECT_ID") with a fail-fast check: call
std::env::var("BLOCKFROST_PROJECT_ID") and .expect(...) with a clear error
message so the process exits at startup if the variable is missing, and assign
the resulting String to a local binding (e.g., blockfrost_project_id) before
calling .as_str() when passing it to the client to avoid taking &str from a
temporary; update the code around the existing std::env::var(...) usage
accordingly.
- Around line 335-343: fetch_utxos_by_policy currently calls
api.addresses_utxos_asset(..., Pagination::all()) which triggers concurrent
fetching of all pages and can exceed Blockfrost rate limits; replace
Pagination::all() with a bounded paginator (e.g., Pagination::new(Order::Asc, 1,
100)) and change fetch_utxos_by_policy to perform sequential page iteration
(looping over pages and awaiting each request) or integrate an application-level
rate limiter (~10 req/s) to throttle calls; apply the same replacement for other
call sites that use Pagination::all() so all API paginated requests (the same
addresses_utxos_asset usage patterns) use bounded, sequential pagination or the
shared rate limiter.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 460023eb-18e1-4cee-82f1-f268cc2d37b8

📥 Commits

Reviewing files that changed from the base of the PR and between a528558 and 79a95ed.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • backend/Cargo.toml
  • backend/src/main.rs

hex = "0.4.3"
blockfrost = "1.2.1"
blockfrost-openapi = "0.1.83"
pallas = { version = "1.0.0-alpha.4", features = ["hardano", "phase2", "unstable"] }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

pallas Rust crate latest stable release version 2025 2026

💡 Result:

As of March 4, 2026, the latest stable (non–pre-release) version of the pallas Rust crate is 0.34.0, released 2025-12-16. [1]

Notes:

  • There are 1.0.0-alpha.* releases (latest shown: 1.0.0-alpha.3, 2025-11-18), but those are pre-releases, not stable. [1]

Source(s):

  • [1] docs.rs “pallas 0.34.0” (version list + release dates) (docs.rs)

Citations:


🌐 Web query:

pallas Rust crate hardano phase2 unstable features stability

💡 Result:

  • Pallas (Rust) is a workspace of crates that re-implement Cardano/Ouroboros building blocks in native Rust; the top-level pallas crate re-exports the subcrates and uses Cargo feature flags to tailor what you pull in. [1][2]
  • pallas-hardano (“hardano”) is the interoperability crate for Haskell cardano-node artifacts / implementation-specific formats. [1][3]
  • “phase2” (in this context) refers to the optional phase2 Cargo feature on pallas-validate (the transaction-validation crate). It is disabled by default and is wired through the pallas-uplc dependency (i.e., Plutus/UPlc-related validation). [4]
  • Stability / “unstable features”: here, “unstable” primarily means opt-in Cargo features and pre-release crates (e.g., pallas-validate has 1.0.0-alpha.* releases), so APIs/behavior behind those flags should be treated as not stability-guaranteed and pinned carefully (lockfile / exact versions). [3][4]

Sources: [1] Cardano Developer Portal (Pallas Rust SDK page) • [2] txpipe/pallas GitHub README • [3] pallas-hardano docs.rs • [4] pallas-validate docs.rs feature flags page


🏁 Script executed:

# Check if pallas is used in the backend code and how
find . -name "*.rs" -type f | head -20

Repository: txpipe/asteria

Length of output: 342


🏁 Script executed:

# Search for pallas usage patterns in Rust files
rg "pallas" --type rust -A 3 -B 1

Repository: txpipe/asteria

Length of output: 396


🏁 Script executed:

# Check the full Cargo.toml context
cat -n backend/Cargo.toml | head -30

Repository: txpipe/asteria

Length of output: 665


🏁 Script executed:

# Check for any comments or documentation about pallas choice
rg "pallas|alpha|1.0.0" backend/ -A 2 -B 2

Repository: txpipe/asteria

Length of output: 1112


🏁 Script executed:

# Look for any migration or changelog documentation
find . -name "*.md" -type f | xargs grep -l "pallas\|1.0.0\|migration" 2>/dev/null | head -5

Repository: txpipe/asteria

Length of output: 40


Replace pallas 1.0.0-alpha.4 with stable 0.34.0 or document the alpha requirement.

A stable version (0.34.0, released December 2025) is available. Using an alpha pre-release with unstable features in a core data path introduces unnecessary upgrade and runtime stability risk. If 1.0.0-alpha.4 is required for specific functionality not available in 0.34.0, document this explicitly. Otherwise, migrate to the stable release.

If the alpha version must remain, isolate pallas usage behind a small adapter module to contain future breakage, and pin exact compatibility in documentation.


Comment on lines 44 to 45
response.set_header(Header::new("Access-Control-Allow-Origin", "*"));
response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));

⚠️ Potential issue | 🟠 Major

Invalid CORS policy: wildcard origin with credentials.

At Line 44 and Line 45, Access-Control-Allow-Origin: * cannot be combined with Access-Control-Allow-Credentials: true. Browsers will reject/ignore credentialed cross-origin behavior.

Suggested fix
-        response.set_header(Header::new("Access-Control-Allow-Origin", "*"));
-        response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
+        if let Some(origin) = request.headers().get_one("Origin") {
+            response.set_header(Header::new("Access-Control-Allow-Origin", origin));
+            response.set_header(Header::new("Vary", "Origin"));
+            response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
+        }

Comment on lines +335 to +343
async fn fetch_utxos_by_policy(
    api: &BlockfrostAPI,
    address: &str,
    policy_id: &str,
) -> Result<Vec<AddressUtxoContentInner>, Error> {
    api.addresses_utxos_asset(address, policy_id, Pagination::all())
        .await
        .map_err(|e| Error::new(e.to_string()))
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify all unbounded pagination callsites and their resolver fan-out paths
rg -nP --type rust -C3 'Pagination::all\(|addresses_utxos_asset\('

Repository: txpipe/asteria

Length of output: 450


🏁 Script executed:

#!/bin/bash
# Find fetch_utxos_by_policy call sites to understand usage patterns
rg -nP --type rust -C2 'fetch_utxos_by_policy\s*\('

Repository: txpipe/asteria

Length of output: 2788


🏁 Script executed:

#!/bin/bash
# Check if there's any rate limiting or memory safeguards around pagination
rg -nP --type rust -C5 'Pagination\s*::'

Repository: txpipe/asteria

Length of output: 622


🏁 Script executed:

#!/bin/bash
# Look at the complete file to understand context
wc -l backend/src/main.rs
head -350 backend/src/main.rs | tail -40

Repository: txpipe/asteria

Length of output: 1243


🏁 Script executed:

#!/bin/bash
# Find BlockfrostAPI definition or dependency
fd -e rs -e toml | head -20
rg -l 'BlockfrostAPI' --type rust

Repository: txpipe/asteria

Length of output: 495


🏁 Script executed:

#!/bin/bash
# Check Cargo.toml for blockfrost dependency
cat Cargo.toml | grep -A 5 'blockfrost\|pagination'

Repository: txpipe/asteria

Length of output: 40


🏁 Script executed:

#!/bin/bash
# Look for any rate-limiting, timeout, or pagination constants
rg -i 'rate.limit|max.result|page.size|timeout' --type rust

Repository: txpipe/asteria

Length of output: 40


🏁 Script executed:

#!/bin/bash
# Examine the broader loop context around line 600-605 and earlier
sed -n '595,610p' backend/src/main.rs

Repository: txpipe/asteria

Length of output: 688


🏁 Script executed:

#!/bin/bash
# Check the Cargo.toml for blockfrost dependency details
cat backend/Cargo.toml

Repository: txpipe/asteria

Length of output: 539


🏁 Script executed:

#!/bin/bash
# Look for any async/concurrent request limits or semaphores
rg -n 'semaphore|Semaphore|concurrent|pool|limit' --type rust backend/src/main.rs | head -20

Repository: txpipe/asteria

Length of output: 40


🏁 Script executed:

#!/bin/bash
# Check context around line 554-565 to see the full call pattern
sed -n '550,575p' backend/src/main.rs

Repository: txpipe/asteria

Length of output: 1072


🏁 Script executed:

#!/bin/bash
# Look for the complete token loop context
sed -n '595,620p' backend/src/main.rs

Repository: txpipe/asteria

Length of output: 1017


🏁 Script executed:

#!/bin/bash
# Let's understand the blockfrost crate's Pagination implementation
rg -n 'Pagination' --type rust backend/src/main.rs

Repository: txpipe/asteria

Length of output: 197


🏁 Script executed:

#!/bin/bash
# Check if there are any imports or usage patterns for the blockfrost crate
grep -n 'use.*blockfrost\|use.*Pagination' backend/src/main.rs | head -20

Repository: txpipe/asteria

Length of output: 305


🌐 Web query:

blockfrost rust crate Pagination::all() API rate limit

💡 Result:

  • In blockfrost (Rust SDK), Pagination::all() just sets fetch_all = true (it does not do any throttling by itself). [1]
  • When “fetch all pages” behavior is used, the crate’s fetch_all_pages() helper generates a batch of page URLs and fetches them concurrently via try_join_all(...) (so a larger batch_size can create request bursts that trigger 429). [2]
  • Blockfrost’s documented rate limit is 10 requests/second per IP, with an allowed burst of 500 requests that “cools off” at 10 req/s (i.e., after a full burst you need ~50s to fully recover). [3]
  • The crate retries 429 Too Many Requests (and a few other status codes) in send_request_unprocessed(), but it uses a fixed delay from RetrySettings.delay and does not appear to honor a Retry-After header. [2][3]

Practical implication: if you combine Pagination::all() with a non-trivial batch_size (or other concurrency in your app), you can easily exceed 10 rps and hit 429; mitigate by lowering concurrency (e.g., batch_size = 1), adding your own rate limiter (~10 rps), and/or increasing RetrySettings.delay. [2][3]

Sources
[1] https://docs.rs/blockfrost/latest/src/blockfrost/pagination.rs.html
[2] https://docs.rs/crate/blockfrost/latest/source/src/request.rs
[3] https://docs.rs/blockfrost-openapi/latest/src/blockfrost_openapi/models/script.rs.html


🌐 Web query:

blockfrost-rs addresses_utxos_asset pagination documentation

💡 Result:

In blockfrost-rs, addresses_utxos_asset is explicitly a paginated endpoint:

pub async fn addresses_utxos_asset(
  &self,
  address: &str,
  asset: &str,
  pagination: Pagination,
) -> BlockfrostResult<Vec<AddressUtxoContentInner>>

(docs.rs)

How pagination is represented (blockfrost-rs)

Pagination has these fields:

  • page: usize
  • count: usize
  • order: Order (Asc / Desc)
  • fetch_all: bool

and helpers:

  • Pagination::new(order, page, count)
  • Pagination::all() (sets fetch_all: true)
    (docs.rs)

The SDK turns that into query params ?page=...&count=...&order=.... (docs.rs)

Blockfrost endpoint parameters (server-side)

The underlying HTTP endpoint is:

GET /addresses/{address}/utxos/{asset}?count=...&page=...&order=... (postman.com)

Example usage

use blockfrost::{BlockfrostAPI, Pagination, Order};

let utxos_page_1 = api
  .addresses_utxos_asset(address, asset, Pagination::new(Order::Asc, 1, 100))
  .await?;

To request “all pages” via the crate’s flag:

let all_utxos = api
  .addresses_utxos_asset(address, asset, Pagination::all())
  .await?;

(docs.rs)

Citations:


Pagination::all() in loop context creates unbounded concurrent requests that can exceed rate limits.

At line 602, fetch_utxos_by_policy is called in a loop over tokens, and Pagination::all() internally fetches all pages concurrently. This can generate many simultaneous API requests, easily exceeding Blockfrost's 10 req/s rate limit and triggering 429 Too Many Requests responses. Additional calls at lines 554, 565, 583, 638, 689, and 737 compound this risk. Replace Pagination::all() with bounded pagination (e.g., Pagination::new(Order::Asc, 1, 100)) and either add explicit sequential fetching or implement application-level rate limiting (~10 rps).
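One shape the bounded, sequential alternative could take is sketched below. A synchronous closure stands in for the awaited api.addresses_utxos_asset(address, policy, Pagination::new(Order::Asc, page, PAGE_SIZE)) call; PAGE_SIZE and the stop condition (a short page signals the end) are assumptions:

```rust
const PAGE_SIZE: usize = 100;

// Sequential page iteration: request one page at a time and stop when a
// short page arrives, instead of Pagination::all()'s concurrent fetch of
// every page. `fetch_page` is a stand-in for the Blockfrost call.
fn fetch_all_sequential<T>(mut fetch_page: impl FnMut(usize) -> Vec<T>) -> Vec<T> {
    let mut all = Vec::new();
    let mut page = 1;
    loop {
        let batch = fetch_page(page);
        let last = batch.len() < PAGE_SIZE;
        all.extend(batch);
        if last {
            break;
        }
        page += 1;
    }
    all
}
```

Because each page is awaited before the next request is issued, request rate is naturally bounded by round-trip latency; an explicit ~10 req/s limiter would tighten this further if several resolvers paginate concurrently.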


Comment on lines +599 to +603
if let Some(tokens) = tokens {
    for token in tokens {
        let token_utxos =
            fetch_utxos_by_policy(api, &pellet_address, &token.policy_id).await?;

⚠️ Potential issue | 🟠 Major

User-controlled token list can trigger unbounded external call fan-out.

At Line 599-Line 603, each token causes a separate network call. Add a max token count (and ideally dedupe by policy_id) to prevent query-based API amplification.
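A minimal sketch of the suggested guard, applied before any network call. MAX_TOKENS, the error message, and the function name are assumptions chosen for illustration; returning an error is one of the two options (the other being a logged warning with truncation):

```rust
use std::collections::HashSet;

// Assumed cap on distinct policy ids per request; tune for the deployment.
const MAX_TOKENS: usize = 20;

/// Dedupe policy ids and enforce the cap before any network I/O, so
/// fetch_utxos_by_policy is only ever invoked for a bounded set.
fn bounded_policy_ids(policy_ids: &[String]) -> Result<Vec<String>, String> {
    let mut seen = HashSet::new();
    // insert() returns false for ids already seen, which filters duplicates
    // while preserving first-occurrence order.
    let deduped: Vec<String> = policy_ids
        .iter()
        .filter(|id| seen.insert(id.as_str()))
        .cloned()
        .collect();
    if deduped.len() > MAX_TOKENS {
        return Err(format!(
            "too many distinct policy ids: {} (max {})",
            deduped.len(),
            MAX_TOKENS
        ));
    }
    Ok(deduped)
}
```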


Comment on lines +809 to +811
std::env::var("BLOCKFROST_PROJECT_ID")
    .unwrap_or("asteria-backend".to_string())
    .as_str(),

⚠️ Potential issue | 🟠 Major

Fail fast if BLOCKFROST_PROJECT_ID is missing.

At Line 809-Line 811, defaulting to "asteria-backend" turns misconfiguration into runtime API failures. Prefer startup-time failure with a clear message.

Suggested fix
-    let client = BlockfrostAPI::new(
-        std::env::var("BLOCKFROST_PROJECT_ID")
-            .unwrap_or("asteria-backend".to_string())
-            .as_str(),
-        settings,
-    );
+    let project_id = std::env::var("BLOCKFROST_PROJECT_ID")
+        .expect("BLOCKFROST_PROJECT_ID must be set in the environment");
+    let client = BlockfrostAPI::new(project_id.as_str(), settings);

gonzalezzfelipe merged commit b468155 into main Mar 9, 2026
1 of 2 checks passed
gonzalezzfelipe deleted the chore/blockfrost-instead-of-mumak branch March 9, 2026 19:20
