@@ -137,14 +137,14 @@ The following more complex scenario demonstrates the full cycling between
 PDAL and Python:

 * Read a small testfile from GitHub into a Numpy array
-* Filters those arrays with Numpy for Intensity
+* Filters the array with Numpy for Intensity
 * Pass the filtered array to PDAL to be filtered again
-* Write the filtered array to an LAS file.
+* Write the final filtered array to a LAS file and a `TileDB`_ array
+  via the `TileDB-PDAL integration`_ using the `TileDB writer plugin`_

 .. code-block:: python

     import pdal
-    import numpy as np

     data = "https://github.com/PDAL/PDAL/blob/master/test/data/las/1.2-with-color.las?raw=true"

@@ -170,18 +170,27 @@ PDAL and Python:
     print(pipeline.execute())  # 387 points
     clamped = pipeline.arrays[0]

-    # Write our intensity data to an LAS file
+    # Write our intensity data to a LAS file and a TileDB array. For TileDB it is
+    # recommended to use Hilbert ordering by default with geospatial point cloud data,
+    # which requires specifying a domain extent. This can be determined automatically
+    # from a stats filter that computes statistics about each dimension (min, max, etc.).
     pipeline = pdal.Writer.las(
-        filename="clamped2.las",
+        filename="clamped.las",
         offset_x="auto",
         offset_y="auto",
         offset_z="auto",
         scale_x=0.01,
         scale_y=0.01,
         scale_z=0.01,
     ).pipeline(clamped)
+    pipeline |= pdal.Filter.stats() | pdal.Writer.tiledb(array_name="clamped")
     print(pipeline.execute())  # 387 points

+    # Dump the TileDB array schema
+    import tiledb
+    with tiledb.open("clamped") as a:
+        print(a.schema)
+
 Executing Streamable Pipelines
 ................................................................................
 Streamable pipelines (pipelines that consist exclusively of streamable PDAL
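The hunk above stops at the start of the "Executing Streamable Pipelines" section. As a minimal sketch of what that section covers (assuming the ``Pipeline.iterator()`` API exposed by recent PDAL Python bindings, an arbitrary ``chunk_size``, and a hypothetical local copy of the test file used earlier), a fully streamable pipeline can be consumed chunk by chunk:

.. code-block:: python

    import pdal

    # Hypothetical local copy of the 1.2-with-color.las test file used above.
    data = "1.2-with-color.las"

    # Both stages are streamable, so the whole pipeline can run in streaming mode.
    pipeline = pdal.Reader.las(filename=data) | pdal.Filter.range(limits="Intensity[100:300)")

    # Assumption: Pipeline.iterator() yields Numpy arrays of at most chunk_size
    # points at a time instead of materializing the entire point cloud in memory.
    for array in pipeline.iterator(chunk_size=500):
        print(len(array))  # number of points in this chunk
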
@@ -288,6 +297,9 @@ USE-CASE : Take a LiDAR map, create a mesh from the ground points, split into ti
 .. _`Numpy`: http://www.numpy.org/
 .. _`schema`: http://www.pdal.io/dimensions.html
 .. _`metadata`: http://www.pdal.io/development/metadata.html
+.. _`TileDB`: https://tiledb.com/
+.. _`TileDB-PDAL integration`: https://docs.tiledb.com/geospatial/pdal
+.. _`TileDB writer plugin`: https://pdal.io/stages/writers.tiledb.html

 .. image:: https://github.com/PDAL/python/workflows/Build/badge.svg
    :target: https://github.com/PDAL/python/actions?query=workflow%3ABuild