# Getting Started with Acquire

Acquire (`acquire-imaging` [on PyPI](https://pypi.org/project/acquire-imaging/)) is a Python package providing a multi-camera video streaming library focused on performant microscopy, with support for up to two simultaneous, independent video streams.

This tutorial covers installing Acquire and walks through an example acquisition using its provided simulated cameras.

## Installation

To install Acquire on Windows, macOS, or Ubuntu, run:

```
python -m pip install acquire-imaging
```

We recommend installing Acquire in a fresh conda environment or virtualenv.
For example, to install Acquire in a conda environment named `acquire`:

```
conda create -n acquire python=3.10 # follow the prompts and proceed with the defaults
conda activate acquire
python -m pip install acquire-imaging
```

or with virtualenv:

```shell
$ python -m venv venv
$ . ./venv/bin/activate # or on Windows: .\venv\Scripts\Activate.bat or .\venv\Scripts\Activate.ps1
(venv) $ python -m pip install acquire-imaging
```

Once Acquire is installed, call `import acquire` in your script, notebook, or module to start using the package.

```python
import acquire
```

## Supported Cameras and File Formats

Acquire supports the following cameras (currently only on Windows):

- [Hamamatsu Orca Fusion BT (C15440-20UP)](https://www.hamamatsu.com/eu/en/product/cameras/cmos-cameras/C15440-20UP.html)
- [Vieworks VC-151MX-M6H00](https://www.visionsystech.com/products/cameras/vieworks-vc-151mx-sony-imx411-sensor-ultra-high-resolution-cmos-camera-151-mp)
- [FLIR Blackfly USB3 (BFLY-U3-23S6M-C)](https://www.flir.com/products/blackfly-usb3/?model=BFLY-U3-23S6M-C&vertical=machine+vision&segment=iis)
- [FLIR Oryx 10GigE (ORX-10GS-51S5M-C)](https://www.flir.com/products/oryx-10gige/?model=ORX-10GS-51S5M-C&vertical=machine+vision&segment=iis)

Acquire also supports the following output file formats:

- [TIFF](https://en.wikipedia.org/wiki/TIFF)
- [Zarr](https://zarr.dev/)

Acquire also provides a few simulated cameras, as well as raw byte storage and a "trash" storage device, which discards all data written to it.

## Tutorial Prerequisites

We will be streaming to [TIFF](http://bigtiff.org/), using [scikit-image](https://scikit-image.org/) to load and inspect the data, and visualizing the data using [napari](https://napari.org/stable/).

You can install the prerequisites with:

```
python -m pip install "napari[all]" scikit-image
```

## Setup for Acquisition

In Acquire parlance, the combination of a source (camera), filter, and sink (output) is called a **video stream**.
We will generate data using simulated cameras (our source) and output to TIFF on the filesystem (our sink).
(For this tutorial, we will not use a filter.)
Acquire supports up to two such video streams.

Sources are implemented as **Camera** devices, and sinks are implemented as **Storage** devices.
We'll start by listing all the devices that Acquire supports:

```python
import acquire

runtime = acquire.Runtime()
dm = runtime.device_manager()

for device in dm.devices():
    print(device)
```

The **runtime** is the main entry point in Acquire.
Through the runtime, you configure your devices, start acquisition, check acquisition status, inspect data as it streams from your cameras, and terminate acquisition.

Let's configure our devices now.
To do this, we'll get a copy of the current runtime configuration.
We can update the configuration with identifiers from the runtime's **device manager**, but these devices won't be created until we start the acquisition.

Before configuring the streams, grab the current configuration of the `Runtime` object with:

```python
config = runtime.get_configuration()
```

Video streams are configured independently.
Configure the first video stream by setting properties on `config.video[0]` and the second on `config.video[1]`.
We'll be using simulated cameras: one generating a radial sine pattern and one generating a uniform random pattern.

```python
config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin")

# how many adjacent pixels in each direction to combine by averaging; 1 means no binning
config.video[0].camera.settings.binning = 1

# how long (in microseconds) your camera should collect light from the sample; for simulated cameras,
# this is just a waiting period before generating the next frame
config.video[0].camera.settings.exposure_time_us = 5e4  # 50 ms

# the data type representing each pixel; here we choose unsigned 8-bit integer
config.video[0].camera.settings.pixel_type = acquire.SampleType.U8

# the shape, in pixels, of the image; width first, then height
config.video[0].camera.settings.shape = (1024, 768)
```
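
Note that `exposure_time_us` is specified in microseconds. If you prefer to think in milliseconds, a tiny (hypothetical) conversion helper keeps the arithmetic explicit:

```python
def ms_to_us(ms: float) -> float:
    """Convert milliseconds to microseconds."""
    return ms * 1_000.0


print(ms_to_us(50))  # 50000.0, i.e. the 5e4 microseconds used above
```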

```python
config.video[1].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: uniform random")

# how many adjacent pixels in each direction to combine by averaging; 1 means no binning
config.video[1].camera.settings.binning = 1

# how long (in microseconds) your camera should collect light from the sample; for simulated cameras,
# this is just a waiting period before generating the next frame
config.video[1].camera.settings.exposure_time_us = 1e4  # 10 ms

# the data type representing each pixel; here we choose unsigned 8-bit integer
config.video[1].camera.settings.pixel_type = acquire.SampleType.U8

# the shape, in pixels, of the image; width first, then height
config.video[1].camera.settings.shape = (1280, 720)
```
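
Shape, pixel type, and exposure time together determine how much data a stream produces. As a rough sanity check on storage throughput, you can estimate the data rate from these settings (this simple estimate assumes the exposure time is the full frame interval, which holds for the simulated cameras):

```python
# Rough data-rate estimate for the second stream.
width, height = 1280, 720
bytes_per_pixel = 1              # U8 pixels
frame_interval_s = 1e4 / 1e6     # 10 ms exposure per frame

frame_bytes = width * height * bytes_per_pixel
rate_mb_per_s = frame_bytes / frame_interval_s / 1e6
print(f"{rate_mb_per_s:.2f} MB/s")  # 92.16 MB/s
```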

Now we'll configure each output, or sink device.
For both simulated cameras, we'll be writing to [TIFF](http://bigtiff.org/), a well-known format for storing image data.
For now, we'll simply specify the output file name.

```python
config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff")

# what file or directory to write the data to
config.video[0].storage.settings.filename = "output1.tif"
```

```python
config.video[1].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff")

# what file or directory to write the data to
config.video[1].storage.settings.filename = "output2.tif"
```

Finally, let's specify how many frames to generate for each camera before stopping our simulated acquisition.
We also need to register our configuration with the runtime using the `set_configuration` method.

If you want to let the runtime acquire effectively forever, you can set `max_frame_count` to `2**64 - 1`, the maximum value of an unsigned 64-bit integer.

```python
config.video[0].max_frame_count = 100  # collect 100 frames
config.video[1].max_frame_count = 150  # collect 150 frames

config = runtime.set_configuration(config)
```
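
For reference, the "effectively forever" value is just the largest unsigned 64-bit integer:

```python
FOREVER = 2**64 - 1
print(FOREVER)  # 18446744073709551615
```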

!!! note

    If you run this tutorial multiple times, you can clear output from previous runs with:

    ```python
    from pathlib import Path

    Path(config.video[0].storage.settings.filename).unlink(missing_ok=True)
    Path(config.video[1].storage.settings.filename).unlink(missing_ok=True)
    ```

## Acquire Data

To start acquiring data:

```python
runtime.start()
```

Acquisition happens in a separate thread, so at any point we can check on the status by calling the `get_state` method.

```python
runtime.get_state()
```
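
If you want to block until acquisition finishes, one option is to poll `get_state` in a small loop. The helper below is a generic sketch using only the standard library; the exact state value to compare against (e.g. `acquire.DeviceState.Stopped`) may vary between Acquire versions, so treat that comparison as an assumption:

```python
import time


def wait_until(predicate, timeout_s=30.0, interval_s=0.1):
    """Poll `predicate` until it returns True or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return False


# Hypothetical usage; the state enum value is an assumption:
# wait_until(lambda: runtime.get_state() == acquire.DeviceState.Stopped)
```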

Finally, once we're done acquiring, we call `runtime.stop()`.
This method waits until the number of frames specified in `config.video[0].max_frame_count` or `config.video[1].max_frame_count`, whichever is larger, has been collected, then stops the runtime.

```python
runtime.stop()
```

## Visualizing the Data with napari

Let's take a look at what we've written.
We'll load each TIFF file as a NumPy array with scikit-image, then use napari to view it.

```python
from skimage.io import imread
import napari

data1 = imread(config.video[0].storage.settings.filename)
data2 = imread(config.video[1].storage.settings.filename)

viewer1 = napari.view_image(data1)

viewer2 = napari.view_image(data2)
```

## Conclusion

For more examples of using Acquire, check out our [tutorials page](tutorials/index.md).