Commit f6ed7b3

Consolidate get_data -> demo, polish a bit
We no longer need two stages; downloading isn't required since we can change the geopackage URL from https -> http to make it work 🎉
1 parent 2f3f312 commit f6ed7b3

File tree

2 files changed: +333 / -585 lines


modules/06-geojupyter/demo.ipynb

Lines changed: 333 additions & 0 deletions
@@ -0,0 +1,333 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "189c7230-a3b3-4c90-b053-68bb5857d5a6",
"metadata": {},
"source": [
"# GeoJupyter demo"
]
},
{
"cell_type": "markdown",
"id": "77660f5a-8fe5-436b-bcc2-13cc2dd39be4",
"metadata": {},
"source": [
"## Today: [🔗JupyterGIS](https://jupytergis.readthedocs.io/)"
]
},
{
"cell_type": "markdown",
"id": "e34edad4-220d-4acb-a214-048c5bba3b32",
"metadata": {},
"source": [
"JupyterGIS is a **real-time collaborative** Geographical Information System (GIS) environment in JupyterLab.\n",
"\n",
"You can [🔗try it right now in JupyterLite](https://jupytergis.readthedocs.io/en/latest/lite/lab/)!\n",
"\n",
"Let's explore some functionality together (based on [🔗Carl Boettiger](https://ourenvironment.berkeley.edu/people/carl-boettiger)'s [🔗ESPM-288 course](https://espm-288.carlboettiger.info/)). We'll investigate whether neighborhoods that were highly rated (grade A) under the discriminatory 1930s practice of [🔗redlining](https://en.wikipedia.org/wiki/Redlining) are greener today than neighborhoods graded D."
]
},
{
"cell_type": "markdown",
"id": "a7ada6c0-e94b-400b-bc3f-d6d97fa6ab3a",
"metadata": {},
"source": [
"### Constants\n",
"\n",
"Some variables that will be used throughout the Notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ec4b0db-7b9c-4779-9bef-01ae8bb14041",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"DATA_DIR = Path.cwd() / \"data\"\n",
"INEQUALITY_GEOJSON_FILE = DATA_DIR / \"redlining_newhaven_ct.geojson\"\n",
"NDVI_FILE = DATA_DIR / \"ndvi.tif\""
]
},
{
"cell_type": "markdown",
"id": "b2133145-ef92-4d5e-84b8-a53546ef1fe1",
"metadata": {},
"source": [
"### Get historical redlining data\n",
"\n",
"We're using [🔗DuckDB](https://duckdb.org/) to read a [🔗GeoPackage](https://www.geopackage.org/) dataset of historical redlining maps, filtering it to residential neighborhoods in New Haven, Connecticut, USA."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3a59b77-e25f-4353-ac66-2269fcb90107",
"metadata": {},
"outputs": [],
"source": [
"import ibis\n",
"from ibis import _\n",
"\n",
"\n",
"con = ibis.duckdb.connect(extensions=[\"spatial\"])\n",
"\n",
"redlines = (\n",
"    con\n",
"    .read_geo(\"/vsicurl/http://dsl.richmond.edu/panorama/redlining/static/mappinginequality.gpkg\")\n",
"    .filter(_.city == \"New Haven\", _.residential)\n",
")\n",
"\n",
"new_haven_redlining = redlines.execute().set_crs(\"EPSG:4326\")\n",
"new_haven_redlining.to_file(INEQUALITY_GEOJSON_FILE, engine=\"fiona\")\n",
"\n",
"new_haven_bbox = new_haven_redlining.total_bounds"
]
},
{
"cell_type": "markdown",
"id": "63f92fa9-49cd-4e9e-a19d-7d6a60747b65",
"metadata": {},
"source": [
"Let's explore the data a little bit. Hover over the polygons after running the cell below! Does anything jump out at you?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c86c5ab-e38d-4503-a45b-8cd5c588515d",
"metadata": {},
"outputs": [],
"source": [
"new_haven_redlining.explore(column=\"grade\", cmap=\"inferno\")"
]
},
{
"cell_type": "markdown",
"id": "61803898-9832-44a9-a04f-23bdb1a4ac3c",
"metadata": {},
"source": [
"### Calculating NDVI\n",
"\n",
"We're going to calculate NDVI from Sentinel-2 data."
]
},
{
"cell_type": "markdown",
"id": "e6dd8c69-3254-45a1-b8f9-6a5004bb5e7b",
"metadata": {},
"source": [
"#### Open Sentinel-2 data\n",
"\n",
"We are using a [🔗STAC catalog](https://stacspec.org/en) to locate the data files we're interested in (covering New Haven during Summer 2024, with <20% cloud cover) and to open them as an xarray Dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad4f51ef-822c-4f31-b9e4-ec28864d9f2b",
"metadata": {},
"outputs": [],
"source": [
"import odc.stac\n",
"from pystac_client import Client\n",
"\n",
"items = Client.open(\n",
"    \"https://earth-search.aws.element84.com/v1\"\n",
").search(\n",
"    collections=[\"sentinel-2-l2a\"],\n",
"    bbox=new_haven_bbox,\n",
"    datetime=\"2024-06-01/2024-09-01\",\n",
"    query={\"eo:cloud_cover\": {\"lt\": 20}},\n",
").item_collection()\n",
"\n",
"data = odc.stac.load(\n",
"    items,\n",
"    bands=[\"nir08\", \"red\"],\n",
"    bbox=new_haven_bbox,\n",
"    resolution=10,\n",
"    groupby=\"solar_day\",\n",
"    chunks={},  # this tells odc to use dask\n",
")\n",
"data"
]
},
{
"cell_type": "markdown",
"id": "81a2b33a-117d-411b-8f73-4dd873252738",
"metadata": {},
"source": [
"#### Do the NDVI calculation\n",
"\n",
"NDVI = (NIR - Red) / (NIR + Red), computed per pixel, then reduced to the median over time."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a1225c8-80f6-4207-b065-7a302524eff8",
"metadata": {},
"outputs": [],
"source": [
"ndvi = (\n",
"    (data.nir08 - data.red) / (data.red + data.nir08)\n",
").median(\n",
"    \"time\",\n",
"    keep_attrs=True,\n",
").where(\n",
"    lambda x: x < 1,  # `ndvi` isn't bound yet, so pass a callable to mask invalid values\n",
").compute()\n",
"\n",
"ndvi.plot.imshow()"
]
},
{
"cell_type": "markdown",
"id": "809c8abf-2d96-4e22-8a41-2604ccc8e419",
"metadata": {},
"source": [
"#### Save the NDVI raster to file"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1155a480-3e54-4e9e-adc2-c191e4ff55fe",
"metadata": {},
"outputs": [],
"source": [
"import rioxarray\n",
"\n",
"ndvi.rio.reproject(\n",
"    \"EPSG:4326\",\n",
").rio.to_raster(\n",
"    raster_path=NDVI_FILE,\n",
"    driver=\"COG\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "41a3f550-04da-4170-9e69-b4131d428fdd",
"metadata": {},
"source": [
"### Calculating mean NDVI for each New Haven neighborhood\n",
"\n",
"To find out whether neighborhoods graded \"A\" are greener than those graded \"D\", we'll calculate the mean NDVI for each neighborhood using [🔗exactextract](https://isciences.github.io/exactextract/background.html), which weights grid cells by the fraction of each cell a polygon covers (other tools treat each cell as binary: either in or out)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40d9b9d9-2cd1-4f14-8758-ea6c4bcd0c97",
"metadata": {},
"outputs": [],
"source": [
"from exactextract import exact_extract\n",
"\n",
"new_haven_redlining_and_ndvi = exact_extract(\n",
"    NDVI_FILE,\n",
"    new_haven_redlining,\n",
"    \"mean_ndvi=mean\",\n",
"    include_geom=True,\n",
"    include_cols=[\"label\", \"grade\", \"city\", \"fill\"],\n",
"    output=\"pandas\",\n",
")\n",
"\n",
"new_haven_redlining_and_ndvi.explore(column=\"mean_ndvi\")"
]
},
{
"cell_type": "markdown",
"id": "455001f2-e26b-4b23-97a5-610151fbc659",
"metadata": {},
"source": [
"## Future"
]
},
{
"cell_type": "markdown",
"id": "f5f96150-f589-4fdd-b21f-2a0b2d933426",
"metadata": {},
"source": [
"### Story maps / \"scrolly telling\"\n",
"\n",
"Story map support for JupyterGIS is in progress.\n",
"\n",
"We anticipate working with the [🔗MyST](https://mystmd.org/) and [🔗Closeread](https://closeread.dev/) developers to build interactive scrollytelling experiences in MyST Markdown documents."
]
},
{
"cell_type": "markdown",
"id": "ec8b9c06-a35f-49c4-a061-0009518c57a7",
"metadata": {},
"source": [
"### \"microgis\" (placeholder name)\n",
"\n",
"We're working on a [🔗project](https://github.com/geojupyter/jupyter-microgis) to provide an instant layered visual environment for any number of Python datasets (starting with rioxarray DataArrays and GeoPandas GeoDataFrames) in a widget.\n",
"The goal is to minimize time-to-visualization.\n",
"\n",
"It would provide sensible default symbology choices, with customization available at as-needed complexity.\n",
"In other words, you shouldn't need to learn a complex symbology expression language when your needs are simple, but complex expression is available if you need it.\n",
"\n",
"```python\n",
"from microgis import explore\n",
"\n",
"\n",
"explore(\n",
"    da1, da2, gdf1,\n",
"    {\n",
"        \"data\": gdf2,\n",
"        \"symbology\": {\n",
"            \"choropleth\": {\n",
"                \"steps\": 11,\n",
"                \"classification\": \"natural\",\n",
"            },\n",
"        },\n",
"    },\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "e5152113-a7bc-45dd-84d1-0d19e22196d2",
"metadata": {},
"source": [
"### More!\n",
"\n",
":::{image} https://geojupyter.org/assets/images/community-diagram.svg\n",
":width: 400px\n",
":align: center\n",
":::\n",
"\n",
"GeoJupyter's priorities are broad and based on our community's needs. We can only know what those needs are if you join us!\n",
"\n",
"Please join the [🔗Jupyter Zulip](https://jupyter.zulipchat.com) today and find us in the `#geojupyter` channel!\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
