Commit 6495338

rename book lock file and other small nb changes (#51)

1 parent 0629cac

File tree: 6 files changed, +213279 −105 lines

book/background/5_software.md

Lines changed: 10 additions & 9 deletions
@@ -10,15 +10,15 @@ There are two options for creating a software environment: [pixi](https://pixi.s
 1. Clone the book's GitHub repository:
 ```git clone https://github.com/e-marshall/cloud-open-source-geospatial-datacube-workflows.git```
 
-2. Navigate into the repo environment:
+2. Navigate into the repo environment:
 ```cd cloud-open-source-geospatial-datacube-workflows```
 
-3. There is a small data cube included in the repo that is used in the tutorials. We don't want git to track this so we tell it to ignore this file path.
+3. There is a small data cube included in the repo that is used in the tutorials. We don't want git to track this so we tell it to ignore this file path.
 ```git update-index --assume-unchanged book/itslive/data/raster_data/regional_glacier_velocity_vector_cube.zarr/.```
 
-4. Execute `pixi run` for each tutorial:
-```pixi run itslive```
-```pixi run sentinel1```
+4. Execute `pixi run` for each tutorial:
+```pixi run itslive```
+```pixi run sentinel1```
 
 Note that the first `pixi run` will download specific versions of all required Python libraries to a hidden directory `./.pixi`. Subsequent runs activate that environment and execute code within it. You can also run `pixi shell` to "activate" the environment (set paths to executables and auxiliary files) and `exit` to deactivate it.
 
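Collapsed into a single shell session, the pixi path described in this hunk could look like the following sketch (commands are taken from the diff itself; it assumes `git` and `pixi` are already on your PATH):

```shell
# Clone the book repository and enter it
git clone https://github.com/e-marshall/cloud-open-source-geospatial-datacube-workflows.git
cd cloud-open-source-geospatial-datacube-workflows

# Tell git to ignore local changes to the bundled sample data cube
git update-index --assume-unchanged book/itslive/data/raster_data/regional_glacier_velocity_vector_cube.zarr/.

# Run each tutorial; the first run downloads pinned dependencies into ./.pixi
pixi run itslive
pixi run sentinel1
```

Subsequent `pixi run` invocations reuse the cached `./.pixi` environment, so only the first run pays the download cost.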
@@ -27,16 +27,17 @@ Note that the first `pixi run` will download specific versions of all required P
 1. Clone this book's GitHub repository:
 ```git clone https://github.com/e-marshall/cloud-open-source-geospatial-datacube-workflows.git```
 
-2. Navigate into the `book` sub-directory:
+2. Navigate into the `book` sub-directory:
 ```cd cloud-open-source-geospatial-datacube-workflows/book```
 
 3. Create and activate a conda environment from the `environment.yml` file located in the repo:
-```conda env create -f environment.yml```
+```conda env create -f environment.yml```
+```conda activate book```
 
-4. There is a small data cube included in the repo that is used in the tutorials. We don't want git to track this so we tell it to ignore this file path.
+4. There is a small data cube included in the repo that is used in the tutorials. We don't want git to track this so we tell it to ignore this file path.
 ```git update-index --assume-unchanged book/itslive/data/raster_data/regional_glacier_velocity_vector_cube.zarr/.```
 
-5. Start Jupyterlab and navigate to the directories containing the Jupyter notebooks (`itslive/nbs` and `s1/nbs`):
+5. Start Jupyterlab and navigate to the directories containing the Jupyter notebooks (`itslive/nbs` and `s1/nbs`):
 ```jupyterlab```
 
 Both tutorials use functions that are stored in scripts associated with each dataset. You can find these scripts here: [`itslive_tools.py`](../itslive/nbs/itslive_tools.py) and [`s1_tools.py`](../sentinel1/nbs/s1_tools.py).
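The conda path can likewise be sketched as one session. Two details here are my adjustments, not the book's text: the `git update-index` path is relative to the repo root while step 2 changes into `book/`, so the sketch uses `git -C ..`; and the book's `jupyterlab` command is kept as written, though `jupyter lab` is the more common entry point.

```shell
# Clone the repository and enter the book sub-directory
git clone https://github.com/e-marshall/cloud-open-source-geospatial-datacube-workflows.git
cd cloud-open-source-geospatial-datacube-workflows/book

# Create and activate the conda environment defined in environment.yml
conda env create -f environment.yml
conda activate book

# Ignore local changes to the bundled sample data cube.
# The path is relative to the repo root, so point git one directory up
# (this -C adjustment is an assumption, not part of the committed text).
git -C .. update-index --assume-unchanged book/itslive/data/raster_data/regional_glacier_velocity_vector_cube.zarr/.

# Launch JupyterLab (command as written in the book) and open itslive/nbs or s1/nbs
jupyterlab
```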

book/itslive/nbs/2_larger_than_memory_data.ipynb

Lines changed: 1 addition & 3 deletions
@@ -547,9 +547,7 @@
    "source": [
     "{{conclusion}}\n",
     "\n",
-    "In this notebook, we identified a strategy for reading, chunking, and organizing this dataset that works within the memory constraints of my laptop and the size of the data. \n",
-    "\n",
-    "In the next notebook, we use vector data to narrow our focus in on a spatial area of interest and start examining ice velocity data."
+    "In this notebook, we identified a strategy for reading, chunking, and organizing this dataset that works within the memory constraints of my laptop and the size of the data. In the next notebook, we use vector data to narrow our focus in on a spatial area of interest and start examining ice velocity data."
   ]
  }
 ],

book/sentinel1/nbs/1_read_asf_data.ipynb

Lines changed: 2 additions & 4 deletions
@@ -118,7 +118,7 @@
 "```\n",
 ".\n",
 "└── s1_asf_data\n",
-"    ├── S1A_IW_20210502T121414_DVP_RTC30_G_gpuned_1424\n",
+"    ├── S1A_IW_20220214T121353_DVP_RTC30_G_gpuned_E1E7\n",
 "    │   ├── S1A_IW_20220214T121353_DVP_RTC30_G_gpuned_51E7_VH.tif.xml\n",
 "    │   ├── S1A_IW_20220214T121353_DVP_RTC30_G_gpuned_51E7_rgb.kmz\n",
 "    │   ├── S1A_IW_20220214T121353_DVP_RTC30_G_gpuned_51E7_shape.prj\n",
@@ -505,9 +505,7 @@
    "source": [
     "## Conclusion\n",
     "\n",
-    "This notebook demonstrated reading large data into memory by creating a virtual dataset that references that full dataset without directly reading it. \n",
-    "\n",
-    "However, we also saw that reading the data in this way produces an object that lacks important metadata. The next notebook will go through the steps of locating and adding relevant metadata to the backscatter data cubes read in this notebook.\n"
+    "This notebook demonstrated reading large data into memory by creating a virtual dataset that references that full dataset without directly reading it. However, we also saw that reading the data in this way produces an object that lacks important metadata. The next notebook will go through the steps of locating and adding relevant metadata to the backscatter data cubes read in this notebook.\n"
   ]
  }
 ],

0 commit comments
