
Commit f24c5c2

lwalew and chrbrunk authored
refactor: merge cli (#65)
* refactor: add cli
* fix: app not launching
* refactor: `app` command -> `gui`
* chore: remove redundant text
* feat: remove launch app if else
* feat: update launch app pattern with public
* docs: update documentation with new CLI tool usage

Co-authored-by: Christoph Brunken <[email protected]>
1 parent 1506c7d commit f24c5c2

7 files changed: +349 -210 lines changed


README.md

Lines changed: 31 additions & 14 deletions
````diff
@@ -47,28 +47,36 @@ can be found [here](https://instadeep.com/).
 
 ## 🚀 Usage
 
-MLIPAudit can be used as two separate CLI tools, the benchmarking script and
-a UI app for visualization of results. Furthermore, for advanced users that want to add
-their own benchmarks or create their own app with our existing benchmark classes, we
-also offer to use MLIPAudit as a library.
+MLIPAudit can be used via its CLI tool `mlipaudit`, which can carry out two main tasks:
+the benchmarking task and a graphical UI app for visualization of results. Furthermore,
+for advanced users that want to add their own benchmarks or create their own app with
+our existing benchmark classes, we also offer to use MLIPAudit as a library.
 
-### CLI Tools
-
-After installation via pip, the `mlipaudit` command line tool is available. It executes
-a benchmark run and can be configured via some command line arguments. Run the following
-to obtain an overview of these configuration options:
+After installation via pip, the `mlipaudit` command is available in your terminal.
+Run the following to obtain an overview of two main tasks, `benchmark` and `gui`:
 
 ```bash
 mlipaudit -h
 ```
 
-The `-h` flag prints the help message of the script with the info on how to use it.
+The `-h` flag prints the help message with the info on how to use the tool.
+See below, for details on the two available tasks.
+
+### The benchmarking task
+
+The first task is `benchmark`. It executes a benchmark run and can be configured
+via some command line arguments. To print the help message for this specific task,
+run:
+
+```bash
+mlipaudit benchmark -h
+```
 
 For example, to launch a full benchmark for a model located at `/path/to/model.zip`,
 you can run:
 
 ```bash
-mlipaudit -m /path/to/model.zip -o /path/to/output
+mlipaudit benchmark -m /path/to/model.zip -o /path/to/output
 ```
 
 In this case, benchmark results are written to the directory `/path/to/output`. In this
@@ -81,10 +89,19 @@ For a tutorial on how to run models that are not native to the
 [mlip](https://github.com/instadeepai/mlip) library, see
 [this](https://instadeep.com/) section of our documentation.
 
-To visualize the detailed results (potentially of multiple models), run:
+### The graphical user interface
+
+To visualize the detailed results (potentially of multiple models), the `gui` task can
+be run. To get more information, run:
+
+```bash
+mlipaudit gui -h
+```
+
+For example, to display the results stored at `/path/to/output`, execute:
 
 ```bash
-mlipauditapp /path/to/output
+mlipaudit gui /path/to/output
 ```
 
 This should automatically open a webpage in your browser with a graphical user interface
@@ -111,7 +128,7 @@ documentation for details on the available functions.
 You can use these functions to build your own benchmarking script and GUI pages for our
 app. For inspiration, we recommend to take a look at the main scripts for
 these tools in this repo, located at `src/mlipaudit/main.py` and
-`src/mlipaudit/app.py`, respectively.`
+`src/mlipaudit/app.py`, respectively.
 
 ## 🤗 Data
 
````

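The library usage described in the README above relies on loader helpers that also appear in the `src/mlipaudit/app.py` diff further down. Below is a minimal, hedged sketch of calling them directly; the `mlipaudit.app` import path is an assumption, since the actual import statements for `BENCHMARKS`, `load_benchmark_results_from_disk`, and `load_scores_from_disk` fall outside the diff context shown in this commit.

```python
# Minimal library-usage sketch (not part of this commit). The two loader calls
# mirror those in src/mlipaudit/app.py below; importing them and BENCHMARKS
# from mlipaudit.app is an assumption, as the real import lines are not shown.
from mlipaudit.app import (
    BENCHMARKS,
    load_benchmark_results_from_disk,
    load_scores_from_disk,
)

results_dir = "/path/to/output"  # directory produced by `mlipaudit benchmark`

# Per-benchmark results and aggregated scores, ready for a custom report or a
# custom GUI page built on the existing benchmark classes.
results = load_benchmark_results_from_disk(results_dir, BENCHMARKS)
scores = load_scores_from_disk(scores_dir=results_dir)
print(scores)
```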
docs/source/index.rst

Lines changed: 2 additions & 2 deletions
```diff
@@ -19,7 +19,7 @@ line. For example,
 
 .. code-block:: bash
 
-    mlipaudit -m /path/to/visnet.zip /path/to/mace.zip -o /path/to/output
+    mlipaudit benchmark -m /path/to/visnet.zip /path/to/mace.zip -o /path/to/output
 
 runs the complete benchmark suite for two models, `visnet` and `mace` and
 stores the results in JSON files in the `/path/to/output` directory. **The results**
@@ -31,7 +31,7 @@ To visualize these results, we provide a graphical user interface based on
 
 .. code-block:: bash
 
-    mlipauditapp /path/to/output
+    mlipaudit gui /path/to/output
 
 to launch the app (opens a browser window automatically and displays the UI).
```

docs/source/tutorials/cli/index.rst

Lines changed: 19 additions & 12 deletions
```diff
@@ -4,19 +4,25 @@ Tutorial: CLI tools
 ===================
 
 After installation and activating the respective Python environment, the command line
-tools `mlipaudit` and `mlipauditapp` should be available:
+tool `mlipaudit` should be available with two tasks:
 
-* `mlipaudit`: The **benchmarking CLI tool**. It runs the full or partial benchmark
-  suite for one or more models. Results will be stored locally in multiple JSON files
-  in an intuitive directory structure.
-* `mlipauditapp`: The **UI app** for visualization of the results. Running it opens a
+* `mlipaudit benchmark`: The **benchmarking CLI task**. It runs the full or partial
+  benchmark suite for one or more models. Results will be stored locally in multiple
+  JSON files in an intuitive directory structure.
+* `mlipaudit gui`: The **UI app** for visualization of the results. Running it opens a
   browser window and displays the web app. Implementation is based
   on `streamlit <https://streamlit.io/>`_.
 
-Benchmarking CLI tool
----------------------
+Benchmarking task
+-----------------
 
-The tool has the following command line options:
+The benchmarking CLI task is invoked by running
+
+.. code-block:: bash
+
+    mlipaudit benchmark [OPTIONS]
+
+and has the following command line options:
 
 * `-h / --help`: Prints info on usage of tool into terminal.
 * `-m / --models`: Paths to the
@@ -56,7 +62,7 @@ For example, if you want to run the entire benchmark suite for two models, say
 
 .. code-block:: bash
 
-    mlipaudit -m /path/to/visnet_1.zip /path/to/mace_2.zip -o /path/to/output
+    mlipaudit benchmark -m /path/to/visnet_1.zip /path/to/mace_2.zip -o /path/to/output
 
 The output directory then contains an intuitive folder structure of models and
 benchmarks with the aforementioned `result.json` files. Each of these files will
@@ -83,9 +89,10 @@ by running
 
 .. code-block:: bash
 
-    mlipauditapp /path/to/output
+    mlipaudit gui /path/to/output
 
-in the terminal. This should open a browser window automatically.
+in the terminal. This should open a browser window automatically. More information
+can be obtained by running `mlipaudit gui -h`.
 
 The landing page of the app will provide you with some basic information about the app
 and with a table of all the evaluated models with their overall score.
@@ -139,7 +146,7 @@ You can now run your benchmarks like this:
 
 .. code-block:: bash
 
-    mlipaudit -m /path/to/my_model.py -o /path/to/output
+    mlipaudit benchmark -m /path/to/my_model.py -o /path/to/output
 
 Note that the model name that will be assigned to the model will be `my_model`.
```

pyproject.toml

Lines changed: 0 additions & 1 deletion
```diff
@@ -20,7 +20,6 @@ dependencies = [
 
 [project.scripts]
 mlipaudit = "mlipaudit.main:main"
-mlipauditapp = "mlipaudit.app:launch_app"
 
 [build-system]
 requires = ["hatchling"]
```
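With the `mlipauditapp` script removed, the single `mlipaudit` entry point (`mlipaudit.main:main`) has to dispatch both tasks. `src/mlipaudit/main.py` is not part of this diff, so the sketch below only illustrates one plausible argparse-based wiring: the long option names other than `--models` and the `--public` flag are assumptions, while `launch_app(results_dir, is_public)` matches the new signature in `src/mlipaudit/app.py` below.

```python
# Hypothetical sketch of subcommand dispatch in src/mlipaudit/main.py (that
# file is not shown in this commit; flag names other than -m/--models and -o
# are assumptions made for illustration only).
import argparse

from mlipaudit.app import launch_app


def main() -> None:
    parser = argparse.ArgumentParser(prog="mlipaudit")
    subparsers = parser.add_subparsers(dest="task", required=True)

    bench = subparsers.add_parser("benchmark", help="Run the benchmark suite.")
    bench.add_argument("-m", "--models", nargs="+", required=True)
    bench.add_argument("-o", "--output", required=True)  # long name assumed

    gui = subparsers.add_parser("gui", help="Visualize benchmark results.")
    gui.add_argument("results_dir")
    gui.add_argument("--public", action="store_true")  # hypothetical flag

    args = parser.parse_args()
    if args.task == "gui":
        # Matches the refactored signature in src/mlipaudit/app.py.
        launch_app(results_dir=args.results_dir, is_public=args.public)
    else:
        ...  # hand off to the benchmarking code path (not shown in this commit)


if __name__ == "__main__":
    main()
```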

src/mlipaudit/app.py

Lines changed: 33 additions & 19 deletions
```diff
@@ -18,7 +18,6 @@
 from typing import Callable
 
 import streamlit as st
-from streamlit import runtime as st_runtime
 from streamlit.web import cli as st_cli
 
 from mlipaudit.benchmark import BenchmarkResult
@@ -66,25 +65,41 @@ def _get_pages_for_category(
     ]
 
 
-def main():
-    """Main of our UI app.
+def _parse_app_args(argvs: list[str]) -> tuple[str, bool]:
+    """Parse the command line arguments for the app.
+
+    Args:
+        argvs: The command line arguments.
+
+    Returns:
+        The parsed arguments.
 
     Raises:
         RuntimeError: if results directory is not passed as argument.
     """
-    if len(sys.argv) < 2:
+    if len(argvs) < 2:
         raise RuntimeError(
             "You must provide the results directory as a command line argument, "
-            "like this: mlipauditapp /path/to/results"
+            "like this: mlipaudit gui /path/to/results"
         )
     is_public = False
-    if len(sys.argv) == 3 and sys.argv[2] == "__hf":
+    if len(argvs) == 3 and argvs[2] == "__public":
         is_public = True
-    else:
-        if not Path(sys.argv[1]).exists():
-            raise RuntimeError("The specified results directory does not exist.")
 
-    results_dir = sys.argv[1]
+    if not Path(argvs[1]).exists():
+        raise RuntimeError("The specified results directory does not exist.")
+
+    results_dir = argvs[1]
+    return results_dir, is_public
+
+
+def main() -> None:
+    """Main of our UI app.
+
+    Raises:
+        RuntimeError: if results directory is not passed as argument.
+    """
+    results_dir, is_public = _parse_app_args(argvs=sys.argv)
 
     results = load_benchmark_results_from_disk(results_dir, BENCHMARKS)
     scores = load_scores_from_disk(scores_dir=results_dir)
@@ -102,7 +117,7 @@ def main():
         benchmark_pages[name] = st.Page(
             functools.partial(
                 page_wrapper.get_page_func(),
-                data_func=_data_func_from_key(name, results),
+                data_func=_data_func_from_key(name, results),  # type: ignore
             ),
             title=name.replace("_", " ").capitalize(),
             url_path=name,
@@ -148,15 +163,14 @@ def main():
     pg.run()
 
 
-def launch_app():
+def launch_app(results_dir: str, is_public: bool) -> None:
     """Figures out whether run by streamlit or not. Then calls `main()`."""
-    if st_runtime.exists():
-        main()
-    else:
-        original_args_without_exec = sys.argv[1:]
-        sys.argv = ["streamlit", "run", __file__] + original_args_without_exec
-        sys.exit(st_cli.main())
+    args = [results_dir]
+    if is_public:
+        args.append("__public")
+    sys.argv = ["streamlit", "run", __file__] + args
+    sys.exit(st_cli.main())
 
 
 if __name__ == "__main__":
-    launch_app()
+    main()
```
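Reading the refactored `launch_app` together with the new `__main__` guard gives the following call path; the snippet is only an illustration under the assumption that the CLI's `gui` task calls `launch_app`, which is wired up in `main.py` rather than in this file.

```python
# Illustration of the call path after this refactor (assumes `mlipaudit gui`
# ends up calling launch_app; that wiring lives in main.py, not shown here).
from mlipaudit.app import launch_app

# `mlipaudit gui /path/to/output` would effectively do this:
launch_app(results_dir="/path/to/output", is_public=False)
# launch_app rewrites sys.argv to ["streamlit", "run", <app.py>, "/path/to/output"]
# and exits into Streamlit's CLI, which re-executes app.py; the __main__ guard
# there then calls main(), which parses the path via _parse_app_args.
```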
