
Conversation

@gleb-chipiga

- Move storage to async tokio::fs APIs
- Await storage calls in CLI/TUI flows
- Offload current_dir and client init to blocking thread
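Roughly, the first two bullets amount to changes of this shape (a minimal sketch with a hypothetical Storage type; the real struct, fields, and call sites differ):

use std::path::PathBuf;

// Hypothetical storage type; illustrative only.
struct Storage {
    path: PathBuf,
}

impl Storage {
    // Before: std::fs::read_to_string(&self.path) ran directly on an async
    // worker thread. After: tokio::fs does the read on Tokio's blocking
    // thread pool and the CLI/TUI callers simply .await it.
    async fn load(&self) -> std::io::Result<String> {
        tokio::fs::read_to_string(&self.path).await
    }

    async fn save(&self, contents: &str) -> std::io::Result<()> {
        tokio::fs::write(&self.path, contents).await
    }
}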
@gleb-chipiga changed the title from "Title: Avoid blocking I/O in async paths" to "Avoid blocking I/O in async paths" on Jan 16, 2026
@kavehtehrani
Owner

Hello hello! Thanks for the PR! I like the tokio::fs changes in storage.rs; I should've definitely used that instead of std::fs.

I'm less sure about the spawn_blocking wrappers in other places like gather_network_info, CloudflareClient::new, and current_dir(). These are fast operations (sub-second, since they're quick syscalls or one-time setup), and wrapping them adds complexity without much practical benefit in my view. The code also still runs sequentially, so we're not gaining any concurrency.

Would you be open to keeping just the tokio::fs changes and dropping the spawn_blocking parts?

@gleb-chipiga
Author

gleb-chipiga commented Jan 17, 2026

Thanks for the review!

My goal wasn’t to add concurrency but to keep blocking work off the async runtime. Tokio explicitly warns that blocking code inside async tasks prevents the executor from driving other futures forward, and recommends spawn_blocking for such work.

For filesystem access, Tokio notes that most OSes don’t provide truly async FS APIs, so tokio::fs uses blocking operations under the hood via the blocking thread pool. That’s the same rationale I applied to the remaining blocking calls.
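Conceptually (simplified), a tokio::fs call is roughly equivalent to offloading the std::fs call yourself:

// Rough, simplified equivalent of tokio::fs::read_to_string: the std::fs
// read still blocks, but it blocks a thread from the dedicated blocking
// pool rather than an async worker thread.
async fn read_to_string_offloaded(path: std::path::PathBuf) -> std::io::Result<String> {
    tokio::task::spawn_blocking(move || std::fs::read_to_string(path))
        .await
        .expect("blocking task panicked")
}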

Just to clarify what’s inside the two functions:

  • gather_network_info does sync reads from /sys and runs external tools via std::process::Command::output() (waits for process completion).
  • CloudflareClient::new reads a certificate file with std::fs::read().

I agree some of these are usually fast (e.g., current_dir()), and I'm open to a compromise: keep the tokio::fs changes and retain spawn_blocking only where blocking can be unbounded (Command::output() in gather_network_info, reading certs). At the same time, I leaned toward moving even quick one-off syscalls off the runtime for consistency; it keeps the rule simple: no blocking work on async threads. If you still prefer keeping those inline, I can adjust.
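As a rough sketch of that compromise (the body of gather_network_info here is hypothetical and heavily simplified; the point is only the spawn_blocking wrapping):

use std::process::Command;

// Synchronous helper: shells out and reads /sys, so it can block for an
// unbounded amount of time (process startup, slow tools, etc.).
fn gather_network_info_blocking() -> std::io::Result<String> {
    let out = Command::new("ip")
        .args(["route", "show", "default"])
        .output()?; // blocks until the child process exits
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

// Async wrapper: the blocking work runs on Tokio's blocking pool, keeping
// the async worker threads free to drive other futures.
async fn gather_network_info() -> std::io::Result<String> {
    tokio::task::spawn_blocking(gather_network_info_blocking)
        .await
        .expect("blocking task panicked")
}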

In an app that measures network performance, this kind of discipline seems important to me, if only to keep things orderly.

@kavehtehrani
Owner

Thanks, but this seems more theoretical than practical; unless I see an actual scenario describing a real problem, I'm not sure the added complexity is worth it. Focusing on gather_network_info, here's some sample code you can run to see for yourself:

use std::process::Command;
use std::time::Instant;

// Run `f` `iterations` times and print the total and per-call wall-clock time.
fn bench<F: FnMut()>(name: &str, iterations: u32, mut f: F) {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    let elapsed = start.elapsed();
    println!(
        "{:<24} {:>7.2}ms total, {:.2}ms avg",
        name,
        elapsed.as_secs_f64() * 1000.0,
        elapsed.as_secs_f64() * 1000.0 / iterations as f64
    );
}

// Time the external commands and sysfs reads that gather_network_info performs.
fn main() {
    bench("ip route show default:", 100, || {
        let _ = Command::new("ip").args(["route", "show", "default"]).output();
    });

    bench("iwgetid -r:", 100, || {
        let _ = Command::new("iwgetid").arg("-r").output();
    });

    bench("iw dev <iface> info:", 100, || {
        let _ = Command::new("iw").args(["dev", "lo", "info"]).output();
    });

    bench("sysfs read (address):", 100, || {
        let _ = std::fs::read_to_string("/sys/class/net/lo/address");
    });

    bench("path exists check:", 100, || {
        let _ = std::path::Path::new("/sys/class/net/lo/wireless").exists();
    });
}

On my 4-year-old laptop the whole thing doesn't even take a single second to run.
