
Commit b64e804

Remove unneeded catalog param keywords from docs
Signed-off-by: Jason T. Brown <[email protected]>
1 parent 37e8aba commit b64e804
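
The pattern is identical across all eight files: the catalog no longer needs the `catalog=` keyword and is passed as the first positional argument to `spark.read.raster`. A minimal before/after sketch, assuming a RasterFrames-enabled SparkSession (`spark`, not created here) and hypothetical scene URIs:

```python
# Sketch of the call style this commit standardizes on in the docs.
# `spark` (a RasterFrames-enabled SparkSession) is assumed, not created here,
# and the URIs are hypothetical placeholders.
red_uri = 'https://example.com/scene_B04.tif'
nir_uri = 'https://example.com/scene_B08.tif'
catalog_rows = [{'red': red_uri, 'nir': nir_uri}]

# Old style, removed from the docs by this commit:
#   df = spark.read.raster(catalog=spark.createDataFrame(catalog_rows),
#                          catalog_col_names=['red', 'nir'])
# New style -- the catalog is simply the first positional argument:
#   df = spark.read.raster(spark.createDataFrame(catalog_rows),
#                          catalog_col_names=['red', 'nir'])
print(sorted(catalog_rows[0]))
```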

File tree

8 files changed: +11, -18 lines


pyrasterframes/src/main/python/docs/languages.pymd

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ red_nir_monthly_2017.printSchema()
 
 ```python, step_3_python
 red_nir_tiles_monthly_2017 = spark.read.raster(
-    catalog=red_nir_monthly_2017,
+    red_nir_monthly_2017,
     catalog_col_names=['red', 'nir'],
     tile_dimensions=(256, 256)
 )

pyrasterframes/src/main/python/docs/local-algebra.pymd

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ catalog_df = spark.createDataFrame([
     Row(red=uri_pattern.format(4), nir=uri_pattern.format(8))
 ])
 df = spark.read.raster(
-    catalog=catalog_df,
+    catalog_df,
     catalog_col_names=['red', 'nir']
 )
 df.printSchema()

pyrasterframes/src/main/python/docs/nodata-handling.pymd

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ from pyspark.sql import Row
 blue_uri = 'https://s22s-test-geotiffs.s3.amazonaws.com/luray_snp/B02.tif'
 scl_uri = 'https://s22s-test-geotiffs.s3.amazonaws.com/luray_snp/SCL.tif'
 cat = spark.createDataFrame([Row(blue=blue_uri, scl=scl_uri),])
-unmasked = spark.read.raster(catalog=cat, catalog_col_names=['blue', 'scl'])
+unmasked = spark.read.raster(cat, catalog_col_names=['blue', 'scl'])
 unmasked.printSchema()
 ```

pyrasterframes/src/main/python/docs/numpy-pandas.pymd

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ cat = spark.read.format('aws-pds-modis-catalog').load() \
     (col('acquisition_date') < lit('2018-02-22'))
 )
 
-spark_df = spark.read.raster(catalog=cat, catalog_col_names=['B01']) \
+spark_df = spark.read.raster(cat, catalog_col_names=['B01']) \
     .select(
         'acquisition_date',
         'granule_id',

pyrasterframes/src/main/python/docs/raster-read.pymd

Lines changed: 3 additions & 8 deletions
@@ -101,8 +101,6 @@ modis_catalog = spark.read \
     .withColumn('red' , F.concat('base_url', F.lit("_B01.TIF"))) \
     .withColumn('nir' , F.concat('base_url', F.lit("_B02.TIF")))
 
-modis_catalog.printSchema()
-
 print("Available scenes: ", modis_catalog.count())
 ```
 

@@ -124,10 +122,7 @@ equator.select('date', 'gid')
 Now that we have prepared our catalog, we simply pass the DataFrame or CSV string to the `raster` DataSource to load the imagery. The `catalog_col_names` parameter gives the columns that contain the URI's to be read.
 
 ```python, read_catalog
-rf = spark.read.raster(
-    catalog=equator,
-    catalog_col_names=['red', 'nir']
-)
+rf = spark.read.raster(equator, catalog_col_names=['red', 'nir'])
 rf.printSchema()
 ```
 

@@ -179,7 +174,7 @@ mb.printSchema()
 
 If a band is passed into `band_indexes` that exceeds the number of bands in the raster, a projected raster column will still be generated in the schema but the column will be full of `null` values.
 
-You can also pass a `catalog` and `band_indexes` together into the `raster` reader. This will create a projected raster column for the combination of all items passed into `catalog_col_names` and `band_indexes`. Again if a band in `band_indexes` exceeds the number of bands in a raster, it will have a `null` value for the corresponding column.
+You can also pass a _catalog_ and `band_indexes` together into the `raster` reader. This will create a projected raster column for the combination of all items in `catalog_col_names` and `band_indexes`. Again if a band in `band_indexes` exceeds the number of bands in a raster, it will have a `null` value for the corresponding column.
 
 Here is a trivial example with a _catalog_ over multiband rasters. We specify two columns containing URIs and two bands, resulting in four projected raster columns.
 

@@ -191,7 +186,7 @@ mb_cat = pd.DataFrame([
     },
 ])
 mb2 = spark.read.raster(
-    catalog=spark.createDataFrame(mb_cat),
+    spark.createDataFrame(mb_cat),
     catalog_col_names=['foo', 'bar'],
     band_indexes=[0, 1],
     tile_dimensions=(64,64)
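
The "four projected raster columns" in the last hunk above are just the cross product of catalog columns and band indexes. A standalone sketch of that counting, no Spark required:

```python
from itertools import product

# Mirrors the example in the diff: two catalog columns, two band indexes.
catalog_col_names = ['foo', 'bar']
band_indexes = [0, 1]

# The raster reader yields one projected raster column per
# (catalog column, band index) pair.
pairs = list(product(catalog_col_names, band_indexes))
print(len(pairs))  # 4
```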

pyrasterframes/src/main/python/docs/supervised-learning.pymd

Lines changed: 2 additions & 4 deletions
@@ -33,10 +33,8 @@ catalog_df = pd.DataFrame([
     {b: uri_base.format(b) for b in cols}
 ])
 
-df = spark.read.raster(catalog=catalog_df,
-                       catalog_col_names=cols,
-                       tile_dimensions=(128, 128)
-                       ).repartition(100)
+df = spark.read.raster(catalog_df, catalog_col_names=cols, tile_dimensions=(128, 128)) \
+    .repartition(100)
 
 df = df.select(
     rf_crs(df.B01).alias('crs'),

pyrasterframes/src/main/python/docs/time-series.pymd

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ We then [reproject](https://gis.stackexchange.com/questions/247770/understanding
 ```python read_catalog
 raster_cols = ['B01', 'B02',] # red and near-infrared respectively
 park_rf = spark.read.raster(
-    catalog=park_cat.select(['acquisition_date', 'granule_id', 'geo_simp'] + raster_cols),
+    park_cat.select(['acquisition_date', 'granule_id', 'geo_simp'] + raster_cols),
     catalog_col_names=raster_cols) \
     .withColumn('park_native', st_reproject('geo_simp', lit('EPSG:4326'), rf_crs('B01'))) \
     .filter(st_intersects('park_native', rf_geometry('B01')))

pyrasterframes/src/main/python/docs/unsupervised-learning.pymd

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ filenamePattern = "L8-B{}-Elkton-VA.tiff"
 catalog_df = pd.DataFrame([
     {'b' + str(b): os.path.join(resource_dir_uri(), filenamePattern.format(b)) for b in range(1, 8)}
 ])
-df = spark.read.raster(catalog=catalog_df, catalog_col_names=catalog_df.columns)
+df = spark.read.raster(catalog_df, catalog_col_names=catalog_df.columns)
 df = df.select(
     rf_crs(df.b1).alias('crs'),
     rf_extent(df.b1).alias('extent'),
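
The dict comprehension in this hunk builds a one-row catalog with columns `b1` through `b7`; it can be checked without Spark. The resource directory below is a hypothetical stand-in for `resource_dir_uri()`:

```python
import os

filenamePattern = "L8-B{}-Elkton-VA.tiff"
resource_dir = "/tmp/resources"  # hypothetical stand-in for resource_dir_uri()

# One catalog row: keys b1..b7 map to the per-band file URIs.
row = {'b' + str(b): os.path.join(resource_dir, filenamePattern.format(b))
       for b in range(1, 8)}
print(sorted(row))  # ['b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7']
```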
