Commit 704ff22

Nits (#52)
* rename book lock file and other small nb changes
* small changes
1 parent 6495338 commit 704ff22

File tree

6 files changed: +17 -9 lines changed


book/_config.yml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ only_build_toc_files: false
 # Force re-execution of notebooks on each build.
 # See https://jupyterbook.org/content/execute.html
 execute:
-  execute_notebooks: cache #'auto'
+  execute_notebooks: cache #'auto'
   allow_errors: true
   timeout: 1500
   exclude_patterns:
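For context, the hunk above edits the `execute` section of a Jupyter Book `_config.yml`. With `execute_notebooks: cache`, notebooks are executed once at build time and the cached outputs are reused until a notebook's source changes ('auto' and 'force' are the other common modes). A minimal sketch of such a block, using the values from the diff (the comments and the empty `exclude_patterns` list are illustrative, not the project's full config):

```yaml
execute:
  # 'cache' runs each notebook once and reuses cached outputs until
  # the notebook's source changes; 'auto'/'force' re-run more eagerly.
  execute_notebooks: cache
  # Continue the build even if a cell raises an exception.
  allow_errors: true
  # Per-notebook execution timeout, in seconds.
  timeout: 1500
  # Glob patterns for notebooks that should never be executed.
  exclude_patterns: []
```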

book/endmatter/about_this_book.md

Lines changed: 1 addition & 2 deletions
@@ -1,6 +1,5 @@
 # About this book
-
-These tutorials were initially developed while Emma Marshall interned with the Summer Internships in Parallel Computational Sciences ([SIParCS](https://www.cisl.ucar.edu/outreach/internships)) program at the National Center for Atmospheric Research ([NCAR](https://ncar.ucar.edu/)). Jessica Scheick, Scott Henderson, and Deepak Cherian were internship supervisors for this project. The internship was also supported by a NASA Open Source Tools, Frameworks, and Libraries program (Award 80NSSC22K0345), with a specific focus on developing educational resources for working with cloud-hosted data using Xarray. Tutorial development continued after the conclusion of the SIParCS internship when Emma Marshall returned to the University of Utah as a Ph.D. student, where she was supported by a FINESST Fellowship Grant (80NSSC22K1536).
+# ====================
 ## Contributing
 
 If you'd like to contribute to this book, please start a discussion or raise an issue in the GitHub [repository](https://github.com/e-marshall/cloud-open-source-geospatial-datacube-workflows).

book/intro/1_getting_started.md

Lines changed: 3 additions & 0 deletions
@@ -56,3 +56,6 @@ This tutorial focuses on another satellite dataset: [Sentinel-1](https://www.esa
 
 A summary of the lessons learned throughout the tutorials and synthesis of these ideas into suggestions and best practices for developing scientific workflows analyzing n-dimensional earth observation data.
 
+## About this book
+
+These tutorials were initially developed while Emma Marshall interned with the Summer Internships in Parallel Computational Sciences ([SIParCS](https://www.cisl.ucar.edu/outreach/internships)) program at the National Center for Atmospheric Research ([NCAR](https://ncar.ucar.edu/)). Jessica Scheick, Scott Henderson, and Deepak Cherian were internship supervisors for this project. The internship was also supported by a NASA Open Source Tools, Frameworks, and Libraries program (Award 80NSSC22K0345), with a specific focus on developing educational resources for working with cloud-hosted data using Xarray. Tutorial development continued after the conclusion of the SIParCS internship when Emma Marshall returned to the University of Utah as a Ph.D. student, where she was supported by a FINESST Fellowship Grant (80NSSC22K1536).

book/introduction.md

Lines changed: 6 additions & 1 deletion
@@ -7,4 +7,9 @@ We focus on data derived from different types of satellite imagery that are publ
 
 The goal of this book is to reduce barriers to entry to working with earth observation data for scientific analysis. It features two stand-alone tutorials, each detailing steps involved in a typical workflow, from data access and organization to exploratory data analysis and visualization.
 
-Underpinning these examples is a focus on understanding the different components of n-dimensional, gridded datasets, how they relate to the tools we use to work with them (in this case, the Python package Xarray), and how a strong understanding of a scientific dataset within the context of your chosen data model can enable more efficient and intuitive analysis.
+Underpinning these examples is a focus on understanding the different components of n-dimensional, gridded datasets, how they relate to the tools we use to work with them (in this case, the Python package Xarray), and how a strong understanding of a scientific dataset within the context of your chosen data model can enable more efficient and intuitive analysis.
+
+{{break}}
+
+```{figure} background/imgs/cube.png
+```

book/itslive/nbs/4_exploratory_data_analysis_single.ipynb

Lines changed: 4 additions & 3 deletions
@@ -287,7 +287,8 @@
 "single_glacier_raster_web.v.mean(dim=\"mid_date\").plot(ax=ax, cmap=\"viridis\", alpha=0.75, add_colorbar=True)\n",
 "single_glacier_vector_web.plot(ax=ax, facecolor=\"None\", edgecolor=\"red\", alpha=0.75)\n",
 "# Add basemap\n",
-"cx.add_basemap(ax, crs=single_glacier_vector_web.crs, source=cx.providers.Esri.WorldImagery)"
+"cx.add_basemap(ax, crs=single_glacier_vector_web.crs, source=cx.providers.Esri.WorldImagery)\n",
+"fig.suptitle('Mean velocity over the time series (m/y) and RGI 7 glacier outline (red) with satellite image basemap from ESRI World Imagery')"
 ]
 },
 {
@@ -355,7 +356,7 @@
 "single_glacier_raster[\"cov\"].plot(ax=ax, linestyle=\"None\", marker=\"x\", alpha=0.75)\n",
 "\n",
 "# Specify axes labels and title\n",
-"fig.suptitle(\"Velocity data coverage ovver time\", fontsize=16)\n",
+"fig.suptitle(\"Velocity data coverage over time\", fontsize=16)\n",
 "ax.set_ylabel(\"Coverage (proportion)\", x=-0.05, fontsize=12)\n",
 "ax.set_xlabel(\"Date\", fontsize=12);"
 ]
@@ -1338,7 +1339,7 @@
 "metadata": {
 "celltoolbar": "Tags",
 "kernelspec": {
-"display_name": "Python 3 (ipykernel)",
+"display_name": "geospatial_datacube_book_env",
 "language": "python",
 "name": "python3"
 },

book/sentinel1/nbs/4_read_pc_data.ipynb

Lines changed: 2 additions & 2 deletions
@@ -210106,13 +210106,13 @@
 ]
 },
 {
+
 "cell_type": "code",
 "execution_count": 15,
 "id": "ce64e11b",
 "metadata": {},
-"outputs": [],
 "source": [
-"da = da.persist()"
+"Load the data into memory (this may take a few minutes):"
 ]
 },
 {
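The removed `da.persist()` call concerns how Dask-backed arrays are materialized: `persist()` computes the chunks and keeps the result in memory while leaving it a chunked Dask collection, whereas `compute()` (or Xarray's `.load()`) returns a fully concrete result. A small illustrative sketch using a plain `dask.array` (the notebook's `da` is an Xarray DataArray; the names and sizes here are hypothetical):

```python
import dask.array as da_
import numpy as np

# A small lazy, chunked array; nothing is computed yet.
lazy = da_.arange(1_000_000, chunks=100_000, dtype="int64")

# persist() evaluates every chunk and holds the results in memory,
# but the object remains a chunked Dask collection, so downstream
# operations can still run chunk-by-chunk (and in parallel).
persisted = (lazy**2).persist()

# compute() instead returns a concrete, fully materialized result.
total = (lazy**2).sum().compute()

print(type(persisted))  # still a dask Array, not a NumPy array
print(int(total) == int(np.sum(np.arange(1_000_000, dtype="int64") ** 2)))  # True
```

Dropping `persist()` in favor of an explicit load step, as this hunk does, trades cached-in-memory chunks for a simpler, more predictable notebook flow.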
