
# CLI for Distributed Point Cloud Sampling

```shell
pip install nps-cli
```

## Get started

```
nps --help
Usage: nps [OPTIONS]

Options:
  --cv-path TEXT                  Path to CloudVolume data.  [required]
  --mip INTEGER                   MIP level to use.  [default: 0]
  --timestamp INTEGER             Optional timestamp for the dataset version
                                  (graphene only).
  --sample_svids                  Sample SVIDs in addition to points (default:
                                  False). Graphene only.
  -o, --output-dir DIRECTORY      Output directory.  [default: ./nps_output]
  --worker-type [LocalWorker|LSFWorker|SlurmWorker]
                                  Type of worker to use for sampling.
                                  [default: LocalWorker]
  --num-workers INTEGER           Number of workers for blockwise sampling.
                                  [default: 8]
  --cpus-per-worker INTEGER       Number of CPUs per worker.  [default: 4]
  --queue TEXT                    Queue name (for LSF backend).  [default:
                                  local]
  --fraction FLOAT                Fraction of points to sample [0.0, 1.0].
                                  [default: 0.001]
  --bbox INTEGER...               Bounding box: begin_x begin_y begin_z
                                  end_x end_y end_z (in voxels).
  --block-size INTEGER...         Block size in voxels (X Y Z).  [default:
                                  128, 128, 128]
  -h, --help                      Show this message and exit.
```
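To illustrate how `--bbox` and `--block-size` interact for blockwise sampling, here is a minimal sketch of the block decomposition. The `num_blocks` helper is hypothetical (not part of the CLI); it only shows the ceiling-division arithmetic that any blockwise scheme implies.

```python
import math

def num_blocks(bbox, block_size):
    """Estimate how many blocks a bounding box decomposes into.

    bbox: (begin_x, begin_y, begin_z, end_x, end_y, end_z) in voxels.
    block_size: (bx, by, bz) in voxels.
    """
    begin, end = bbox[:3], bbox[3:]
    # Ceiling division per axis: a partial block at the edge still counts.
    return math.prod(
        math.ceil((e - b) / s) for b, e, s in zip(begin, end, block_size)
    )

# A 512-voxel cube split into the default 128x128x128 blocks
# yields 4 * 4 * 4 = 64 blocks.
print(num_blocks((0, 0, 0, 512, 512, 512), (128, 128, 128)))  # → 64
```

With `--num-workers 8`, those 64 blocks would be distributed across 8 workers, each processing blocks independently.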

## Example usage

Sample point clouds from the full FlyEM Hemibrain segmentation:

```shell
nps --cv-path precomputed://gs://neuroglancer-janelia-flyem-hemibrain/v1.0/segmentation
```

Sample point clouds within a FlyEM Hemibrain subvolume:

```shell
nps --cv-path precomputed://gs://neuroglancer-janelia-flyem-hemibrain/v1.0/segmentation --bbox 15347 19712 18606 15859 20224 19118 --fraction 0.01
```
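For a sense of scale: the bounding box above spans 512 voxels per axis, so at `--fraction 0.01` roughly 1.3 million voxels are candidates for sampling (the actual point count per label depends on the segmentation). A quick back-of-the-envelope check:

```python
# Dimensions of the example bbox: end - begin per axis.
dims = (15859 - 15347, 20224 - 19712, 19118 - 18606)  # (512, 512, 512)

total_voxels = dims[0] * dims[1] * dims[2]  # 512^3 = 134,217,728
expected_samples = int(total_voxels * 0.01)  # ~1.34 million

print(dims, total_voxels, expected_samples)
```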

## Reading Point Clouds

Please refer to the pocaduck repo for details on reading point clouds from the output directory:

```python
from pocaduck import Query

# Create a query object, pointing at the folder where nps output is stored
query = Query(storage_config=<PATH>)

# Get all available labels
labels = query.get_labels()
print(f"Available labels: {labels}")

# Get all points for a label (aggregated across all blocks)
points = query.get_points(label=12345)
print(f"Retrieved {points.shape[0]} points for label 12345")

# Close the query connection when done
query.close()
```
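Once retrieved, the points can be handled with plain NumPy. The sketch below assumes `get_points` returns an `(N, 3)` array of voxel coordinates, as the `points.shape[0]` usage above suggests; synthetic data stands in for an actual query result.

```python
import numpy as np

# Stand-in for a pocaduck query result: an (N, 3) array of voxel coordinates.
points = np.random.default_rng(0).integers(0, 512, size=(1000, 3))

# Centroid and axis-aligned bounding box of the sampled point cloud.
centroid = points.mean(axis=0)
lo, hi = points.min(axis=0), points.max(axis=0)

print(f"centroid={centroid}")
print(f"bbox: {lo} -> {hi}")
```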

For optimized point cloud reading, consider this.

## Deploy

```shell
python -m build
twine upload dist/*
```