Commit b87a9aa
Parent: 8484fba

Add filesystem argument when reading schemas, and update to the correct version of lsdb

2 files changed: +6, −4 lines

.binder/requirements.txt (1 addition, 1 deletion)

```diff
@@ -12,7 +12,7 @@ pyarrow>=10.0.1
 hpgeom
 pandas>=1.5.2
 dask[distributed]
-lsdb>=0.5.6
+lsdb>=0.6.4
 psutil
 ray
 s3fs
```

tutorials/parquet-catalog-demos/irsa-hats-with-lsdb.md (5 additions, 3 deletions)

````diff
@@ -48,7 +48,7 @@ We will use lsdb to leverage HATS partitioning for performing fast spatial queri
 
 ```{code-cell} ipython3
 # Uncomment the next line to install dependencies if needed.
-# !pip install s3fs lsdb>=0.5.6 pyarrow pandas numpy astropy dask matplotlib
+# !pip install s3fs "lsdb>=0.6.4" pyarrow pandas numpy astropy dask matplotlib
 ```
 
 ```{code-cell} ipython3
@@ -208,7 +208,8 @@ Using pyarrow parquet, we can [read the schema](https://arrow.apache.org/docs/py
 
 ```{code-cell} ipython3
 euclid_schema = pq.read_schema(
-    f"s3://{euclid_q1_bucket}/{euclid_q1_hats_prefix}/{euclid_q1_schema_path}"
+    f"s3://{euclid_q1_bucket}/{euclid_q1_hats_prefix}/{euclid_q1_schema_path}",
+    filesystem=s3
 )
 type(euclid_schema)
 ```
@@ -314,7 +315,8 @@ We can define column and row filters for the ZTF catalog based on its schema, si
 
 ```{code-cell} ipython3
 ztf_schema = pq.read_schema(
-    f"s3://{ztf_bucket}/{ztf_hats_prefix}/{ztf_schema_path}"
+    f"s3://{ztf_bucket}/{ztf_hats_prefix}/{ztf_schema_path}",
+    filesystem=s3
 )
 ztf_schema_df = pq_schema_to_df(ztf_schema)
 ztf_schema_df
````
