
Commit 8526804

Move instructions to run the notebooks to a new 'docs' directory and update the Readme to provide more information about what this repository is about. (#9)
1 parent e2f258c commit 8526804

File tree: 2 files changed (+62, -37 lines)

README.md

Lines changed: 16 additions & 37 deletions
````diff
@@ -1,49 +1,28 @@
-# access-models-scaling
-Scaling data for the ACCESS models and the generate to generate them.
+# ACCESS-NRI Model Scaling Repository
 
-## Setup
+## About
 
-The notebooks require that you're running on NCI Gadi for running the models to
-generate the model outputs. However, if you're working with existing data, you
-can run the notebook on whichever machine the data lives on.
+The [ACCESS-NRI Model Scaling Repository](https://github.com/ACCESS-NRI/access-models-scaling/) is a collection of Jupyter Notebooks that generate and display scaling data for ACCESS-NRI models.
 
-### 1. Getting the notebooks
+We expect this data to be useful to both users and developers. It can be used to guide decisions about parallelisation layouts and CPU core counts when developing model configurations, in order to ensure a good balance between performance and parallel efficiency. It can also help identify areas where codes can be improved and optimised. Finally, the provided plots and tables can be used in NCMAS applications and other similar merit allocation schemes when requesting HPC resources.
 
-Cloning the repository is the easiest way to get the notebooks.
+Currently the repository includes scaling data for the following models:
 
-On Gadi, you'll probably want to clone the repository somewhere on scratch.
+* [ACCESS-ESM1.6](https://github.com/ACCESS-NRI/access-models-scaling/blob/main/ESM1p6-scaling.ipynb)
+* [ACCESS-rAM3](https://github.com/ACCESS-NRI/access-models-scaling/blob/main/ram3.ipynb)
+* [ACCESS-OM3 Global 25km](https://github.com/ACCESS-NRI/access-models-scaling/blob/main/accessom3_global_25km.ipynb)
+* [ACCESS-OM3 Pan-Antarctic 4km](https://github.com/ACCESS-NRI/access-models-scaling/blob/main/accessom3_panan_4km.ipynb)
 
-```bash
-cd /scratch/$PROJECT/$USER
-git clone https://github.com/ACCESS-NRI/access-models-scaling.git
-cd access-models-scaling
-```
+We expect to regularly add new models to this list and update the existing notebooks when new versions of the models are available.
 
-### 2. Creating a virtual environment
+## What is not included in these notebooks
 
-Before starting the notebook, a Python virtual environment can be used to ensure all the
-dependencies are accessible.
+The notebooks do not include any type of resource usage estimate, such as SUs per model year, as this information can strongly depend on the actual configuration being used. For ACCESS-NRI configurations, this information can sometimes be found in the [release notes](https://forum.access-hive.org.au/search?q=tags%253Amodel%20%2523access-nri-releases%20order%253Alatest). If this information is not available, or you do not know how to obtain it, we suggest opening a [new help request](https://forum.access-hive.org.au/new-topic?&body=%3Cdiv%20data-theme-toc%253D%22true%22%3E%3C%252Fdiv%3E%0A%0A%3C!--%20These%20are%20comments%20and%20not%20visible%20once%20you%20post.%20Ignore%20or%20delete%20sections%20if%20not%20relevant%20--%3E%0A%0A%3C!--%20Choose%20an%20appropriate%20category.%20If%20not%20sure%252C%20leave%20as%20General%20--%3E%0A%0A%2523%2523%20Description%20of%20request%253A%0A%0A%2523%2523%20Environment%253A%0A%0A%3C!--%20NCI%253F%20ARE%253F%20Gadi%20login%20node%253F%20PBS%20job%253F%20--%3E%0A%0A%3C!--%20List%20software%20versions%20--%3E%0A%0A%2523%2523%20What%20executed%253A%0A%0A%3C!--%20Copy%20and%20paste%20any%20commands%20and%20output%20in%20a%20code%20block%20--%3E%0A%3C!--%20For%20code%20you%20are%20writing%252C%20prepare%20a%20minimal%20reproducible%20example%20(https%253A%252F%252Fforum.access-hive.org.au%252Fdocs%253Ftopic%253D843)%20--%3E%0A%0A%2523%2523%20Actual%20results%253A%0A%0A%3C!--%20Copy%20full%20error%20messages%20--%3E%0A%0A%2523%2523%20Expected%20results%253A%0A%0A%2523%2523%20Additional%20info%253A&category_id=4&tags=help) on the ACCESS-Hive Forum.
 
-Create a virtual environment with a recent-ish version of Python.
+## Where do I ask questions?
 
-```bash
-# on gadi:
-module load python3/3.12.1
-python -m venv .venv
-. .venv/bin/activate
-```
+We welcome feedback and contributions through the [ACCESS-Hive forum](https://forum.access-hive.org.au/). Please create a topic in the #technical category and [follow the guidelines for requesting help from ACCESS-NRI](https://forum.access-hive.org.au/t/access-help-and-support/908) should you need it. You can also open an issue [on GitHub](https://github.com/ACCESS-NRI/access-models-scaling/issues/new/).
 
-### 3. Install dependencies and register environment
+## Running the notebooks
 
-The dependencies will be enough to run the notebook, run the models, and then process the timing data.
-
-```bash
-pip install -r requirements.txt
-```
-
-### 4. Start the notebook
-
-This can be done on the login node (e.g. through VSCode) or NCI ARE. If through ARE:
-* remember to set the path to your venv in the Advanced Settings -> Python or Conda virtual environment base.
-* A "small" compute size will be enough as not a lot of compute is happening.
-* If you plan on generating the data from scratch, you can either set a long walltime eg. 12h, or start a short job to launch the jobs, then start a job later to process the results.
+If you are interested in running the notebooks, please follow [these instructions](docs/running_the_notebooks.md).
````
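The new "What is not included" section deliberately leaves out SU estimates, since they depend on the configuration. For readers who want a first guess anyway, here is a back-of-envelope sketch of our own; it is not part of the commit, and the default charge rate of 2 SU per core-hour is an assumption matching Gadi's `normal` queue, so check the NCI documentation for your queue before relying on it.

```python
# Hypothetical back-of-envelope SU estimate (not from the notebooks).
# SUs ~= cores * walltime (hours) * queue charge rate (SU per core-hour).
# The 2.0 default is an assumption based on Gadi's 'normal' queue; verify
# against the NCI documentation before using it.

def su_per_model_year(ncores: int, walltime_hours: float,
                      charge_rate: float = 2.0) -> float:
    """Estimate SUs consumed to simulate one model year."""
    return ncores * walltime_hours * charge_rate

# e.g. 192 cores taking 5 wallclock hours per simulated year:
print(su_per_model_year(192, 5.0))  # 1920.0
```

Scaling down the core count only saves SUs if the walltime grows sub-linearly, which is exactly what the scaling notebooks measure.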

docs/running_the_notebooks.md

Lines changed: 46 additions & 0 deletions
````diff
@@ -0,0 +1,46 @@
+# Running the notebooks
+
+The notebooks need to be run on NCI Gadi, as one of the steps is running the
+models to generate the model outputs. However, if you're working with existing
+data, you can run the notebook on whichever machine the data lives on.
+
+## 1. Getting the notebooks
+
+Cloning the repository is the easiest way to get the notebooks.
+
+On Gadi, you'll probably want to clone the repository somewhere on scratch.
+
+```bash
+cd /scratch/$PROJECT/$USER
+git clone https://github.com/ACCESS-NRI/access-models-scaling.git
+cd access-models-scaling
+```
+
+## 2. Creating a virtual environment
+
+Before starting the notebook, a Python virtual environment can be used to ensure all the
+dependencies are accessible.
+
+Create a virtual environment with a recent-ish version of Python.
+
+```bash
+# on gadi:
+module load python3/3.12.1
+python -m venv .venv
+. .venv/bin/activate
+```
+
+## 3. Install dependencies and register environment
+
+The dependencies are enough to run the notebook, run the models, and then process the timing data.
+
+```bash
+pip install -r requirements.txt
+```
+
+## 4. Start the notebook
+
+This can be done on the login node (e.g. through VSCode) or via NCI ARE. If using ARE:
+* Remember to set the path to your venv in Advanced Settings -> "Python or Conda virtual environment base".
+* A "small" compute size will be enough, as the notebook itself does little computation.
+* If you plan on generating the data from scratch, you can either set a long walltime (e.g. 12 h), or start a short job to launch the model runs, then start another job later to process the results.
````
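Step 3 mentions processing the timing data. The notebooks' actual processing is model-specific, but the core quantities in any scaling study are speedup and parallel efficiency. The following is a minimal sketch of our own, with invented function names and numbers; nothing in it comes from the commit.

```python
# Hypothetical sketch: deriving speedup and parallel efficiency from
# (core count, walltime) measurements. The numbers are invented.

def scaling_metrics(runs):
    """runs: list of (ncores, walltime_seconds), sorted by ncores.

    Speedup is measured relative to the smallest run; efficiency divides
    that by the ideal speedup (the ratio of core counts).
    """
    base_cores, base_time = runs[0]
    out = []
    for ncores, walltime in runs:
        speedup = base_time / walltime
        efficiency = speedup / (ncores / base_cores)
        out.append((ncores, speedup, efficiency))
    return out

runs = [(48, 3600.0), (96, 1900.0), (192, 1100.0)]
for ncores, s, e in scaling_metrics(runs):
    print(f"{ncores:4d} cores: speedup {s:.2f}x, efficiency {e:.0%}")
```

An efficiency well below 100% at a given core count is the signal that adding more cores is costing more SUs than the walltime saving is worth.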

0 commit comments
