<!-- markdownlint-disable MD013 MD033 MD041 -->

<div align="center">
  <img src="https://deps.rs/repo/github/notashelf/microfetch/status.svg" alt="https://deps.rs/repo/github/notashelf/microfetch">
  <img src="https://img.shields.io/github/stars/notashelf/microfetch?label=stars&color=DEA584" alt="stars">
</div>

<div id="doc-begin" align="center">
  <h1 id="header">
    Microfetch
  </h1>
  <p>
    Microscopic fetch tool in Rust, for NixOS systems, with special emphasis on speed
  </p>
  <br/>
  <a href="#synopsis">Synopsis</a><br/>
  <a href="#features">Features</a> | <a href="#motivation">Motivation</a><br/>
  <a href="#installation">Installation</a>
  <br/>
</div>
## Synopsis

[fastfetch]: https://github.com/fastfetch-cli/fastfetch

A stupidly small and simple, laughably fast and pretty fetch tool. Written in
Rust for speed and ease of maintainability. Runs in a _fraction of a
millisecond_ and displays _most_ of the nonsense you'd see posted on r/unixporn
or other internet communities. Aims to replace [fastfetch] on my personal
system, but [probably not yours](#customizing). Though, you are more than
welcome to use it on your system: it is pretty _[fast](#benchmarks)_...

<p align="center">
  <img

## Features

- Fast
- Really fast
- Minimal dependencies
- Tiny binary (~410 kB)
- Actually really fast
- Cool NixOS logo (other, inferior, distros are not supported)
- Reliable detection of the following info:

## Motivation

Fastfetch, as its name probably hints, is a very fast fetch tool written in C.
However, I am not interested in _any_ of its additional features, nor in its
configuration options. Sure, I can _configure_ it when I dislike the defaults,
but how often would I really change the configuration...

Microfetch is my response to this problem. It is an _even faster_ fetch tool
that I would've written in Bash and put in my `~/.bashrc`, but it is _actually_
incredibly fast because it opts out of all the customization options provided
by tools such as Fastfetch. Ultimately, it's a small, opinionated binary with a
size that doesn't bother me and incredible speed. Customization? No thank you.
I cannot reiterate it enough: Microfetch is _annoyingly fast_.

The project is written in Rust, which comes at the cost of a "bloated"
dependency tree and increased build times, but we make a deliberate effort to
keep the dependencies minimal and the build times manageable. The latter is
also easily mitigated with Nix's binary cache. Since Microfetch is already in
Nixpkgs, you are encouraged to install it from there to take advantage of the
binary cache. The use of Rust _is_ nice, however, since it provides us with
incredible tooling and a very powerful language that allows Microfetch to be as
fast as possible. Sure, C could've been used here as well, but do you think I
hate myself? [^1]
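
If you just want to try it without building anything, pulling the cached build
from Nixpkgs is a one-liner. This is only a quick sketch: it assumes a
flake-enabled Nix installation, and on NixOS adding `pkgs.microfetch` to
`environment.systemPackages` works just as well.

```bash
# Run Microfetch straight from Nixpkgs, hitting the binary cache
# (requires the experimental `nix` CLI with flakes enabled)
nix run nixpkgs#microfetch
```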

[^1]: Okay, maybe a little bit. One of the future goals of Microfetch is to
    defer to inline assembly for the costliest functions, but that's for a
    future date, and until I do that, I can pretend to be sane.

## Benchmarks

Below are the benchmarks that I've used to back up my claims about Microfetch's
speed. It is fast, it is _very_ fast, and that is the point of its existence.
It _could_ be faster, and it will be. Eventually.

At this point in time, performance may sometimes be influenced by
hardware-specific race conditions or even your kernel configuration, which
means that Microfetch's speed may (at times) depend on your hardware setup.
However, even with the worst possible hardware I could find in my house, I've
achieved a nice less-than-1ms invocation time, which is pretty good. While
Microfetch _could_ be made faster, we're in the territory of environmental
bottlenecks given how little Microfetch actually allocates.

Below are the actual benchmarks, measured with Hyperfine on my desktop system.
The benchmarks were performed under medium system load and may not be the same
on your system. Please _also_ note that these benchmarks will not always be
kept up to date, but I will try to update the numbers as I make Microfetch
faster.

| Command      | Mean [µs]         | Min [µs] | Max [µs] | Relative       | Written by raf? |
| :----------- | ----------------: | -------: | -------: | -------------: | --------------: |
| `microfetch` | 604.4 ± 64.2      | 516.0    | 1184.6   | 1.00           | Yes             |
| `fastfetch`  | 140836.6 ± 1258.6 | 139204.7 | 143299.4 | 233.00 ± 24.82 | No              |
| `pfetch`     | 177036.6 ± 1614.3 | 174199.3 | 180830.2 | 292.89 ± 31.20 | No              |
| `neofetch`   | 406309.9 ± 1810.0 | 402757.3 | 409526.3 | 672.20 ± 71.40 | No              |
| `nitch`      | 127743.7 ± 1391.7 | 123933.5 | 130451.2 | 211.34 ± 22.55 | No              |
| `macchina`   | 13603.7 ± 339.7   | 12642.9  | 14701.4  | 22.51 ± 2.45   | No              |
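
For anyone who wants to reproduce the comparison, a Hyperfine invocation along
the lines of the one below should yield a similar table. The warmup count, the
export flag, the output file name, and the exact set of commands are
illustrative assumptions rather than the precise script used for the numbers
above:

```bash
# Hypothetical reproduction of the benchmark table; trim the command list
# down to whatever fetch tools you actually have installed.
hyperfine --shell=none --warmup 50 \
  --export-markdown results.md \
  microfetch fastfetch pfetch neofetch nitch macchina
```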

The point stands that Microfetch is significantly faster than every other fetch
tool I have tried. This is to be expected, of course, since Microfetch is
designed _explicitly_ for speed and makes some tradeoffs to achieve it.

### Benchmarking Individual Functions

To benchmark individual functions, [Criterion.rs] is used. See Criterion's
[Getting Started Guide] for details or just run `cargo bench` to benchmark all
features of Microfetch.
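
Criterion's harness also accepts a name filter after `--`, which is handy when
iterating on a single function. The filter string below is just a placeholder;
use whatever substring matches the benchmark you care about:

```bash
# Run the entire benchmark suite
cargo bench

# Run only the benchmarks whose names contain the given substring (placeholder)
cargo bench -- memory
```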

### Profiling Allocations and Timing

[Hotpath]: https://github.com/pawurb/hotpath

Microfetch uses [Hotpath] for profiling function execution timing and heap
allocations. This helps identify performance bottlenecks and track optimization
progress. It is so effective that, thanks to Hotpath, Microfetch has seen a 60%
reduction in the number of allocations.

To profile timing:

```bash
HOTPATH_JSON=true cargo run --features=hotpath
```

To profile allocations:

```bash
HOTPATH_JSON=true cargo run --features=hotpath,hotpath-alloc-count-total
```

The JSON output can be analyzed with the `hotpath` CLI tool for detailed
performance metrics. On pull requests, GitHub Actions automatically profiles
both timing and allocations, posting comparison comments to help catch
performance regressions.

## Installation

> [!NOTE]