FARO (Feedback Adaptive Real-time Optogenetics; also Spanish for "lighthouse") acquires images, segments cells, extracts features, tracks them over time, and generates stimulation masks, all while the experiment is running. This enables closed-loop feedback control: stimulation patterns can be computed from the latest segmentation and applied within the same or the next timepoint.
```
Pipeline    <-->    Controller     <-->    Microscope
--------            ----------             ----------
- segment           - orchestrate          - stage
- track               experiment           - camera
- extract                                  - DMD/SLM
  features                                 - live cells
- stim mask
```
**Microscope**: hardware interface. Any microscope that implements useq-schema can be used. Works great with Micro-Manager / pymmcore-plus.

**Pipeline**: modular image processing. Performs segmentation, tracking, and feature extraction. Decides if/where to photoactivate the sample.

**Controller**: experiment orchestrator. Queues acquisition events to the microscope, dispatches frames to the pipeline, and coordinates stimulation timing. A simulated controller (`ControllerSimulated`) can replay pre-acquired data from disk for testing or re-analysis.
Try the `experiments/02_demo_sim_optogenetic/` notebook to run a complete optogenetic feedback experiment on a simulated microscope; no hardware required.
```python
# 1. Set up the microscope
mic = UniMMCoreSimulation(mmc=mmc)
mic.init_scope()

# 2. Assemble the image processing pipeline
pipeline = ImageProcessingPipeline(
    storage_path="/path/to/experiment",
    segmentators=[SegmentationMethod("labels", OtsuSegmentator(), use_channel=0, save_tracked=True)],
    feature_extractor=SimpleFE("labels"),
    tracker=TrackerTrackpy(),
    stimulator=MoveUp(),
)

# 3. Define the experiment parameters
events = RTMSequence(
    time_plan={"interval": 5.0, "loops": 20},
    stage_positions=[{"x": 256, "y": 256}],
    channels=[{"config": "BF", "exposure": 50}],
    stim_channels=[{"config": "Cyan", "exposure": 50}],
    stim_frames=range(5, 20),
)

# 4. Run!
ctrl = Controller(mic, pipeline)
ctrl.run_experiment(list(events), stim_mode="current")
```

The pipeline is modular: each component is independent and can be swapped or set to `None`.
| Component | Purpose | Examples |
|---|---|---|
| Segmentation | Identify cells in images | `OtsuSegmentator`, `SegmentorCellpose`, `SegmentatorStardist`, remote via imaging-server-kit |
| Stimulation | Generate masks for DMD/SLM | `StimWholeFOV`, `StimPercentageOfCell`, `CenterCircle`, `StimLine` |
| Feature extraction | Measure cell properties | `SimpleFE` (position, area), `FE_ErkKtr` (ERK-KTR c/n ratio) |
| Tracking | Link cells across frames | `TrackerTrackpy` (via trackpy) |
```python
pipeline = ImageProcessingPipeline(
    storage_path="/path/to/experiment",
    segmentators=segmentators,         # list of SegmentationMethod
    feature_extractor=fe,
    tracker=tracker,
    stimulator=stimulator,
    feature_extractor_ref=ref_fe,      # optional: for reference acquisition frames
)
```

The Controller converts RTMEvents to MDAEvents, queues them through the microscope, and dispatches frames to the pipeline.
Experiments are defined as `RTMSequence` objects — an extension of useq's `MDASequence`. Multiple phases can be concatenated with `+`:
```python
from faro.core.data_structures import Channel, PowerChannel, RTMSequence

phase_1 = RTMSequence(
    time_plan={"interval": 60.0, "loops": 100},
    stage_positions=fov_positions,
    channels=[{"config": "miRFP", "exposure": 300}],
)
phase_2 = RTMSequence(
    time_plan={"interval": 60.0, "loops": 150},
    stage_positions=fov_positions,
    channels=[{"config": "miRFP", "exposure": 300}],
)
events = phase_1 + phase_2
```

Stimulation channels are acquired on specific frames and delivered via the DMD/SLM. Define them with `stim_channels` and `stim_frames`:
```python
seq = RTMSequence(
    time_plan={"interval": 5.0, "loops": 50},
    stage_positions=fov_positions,
    channels=[{"config": "miRFP", "exposure": 300}],
    stim_channels=(PowerChannel(config="CyanStim", exposure=200, power=10),),
    stim_frames=range(10, 50),
)
```

Stimulation modes (set via `ctrl.run_experiment(events, stim_mode=...)`):
"current": acquire frame, wait for segmentation mask, then stimulate in the same timepoint"previous": stimulate using the mask from the previous timepoint, then acquire
Reference channels are acquired on specific frames for one-time measurements whose features are broadcast to all timepoints — e.g., checking expression of an optogenetic tool, or a high-resolution image that would bleach the sample. Define them with `ref_channels` and `ref_frames`:
```python
seq = RTMSequence(
    time_plan={"interval": 5.0, "loops": 50},
    stage_positions=fov_positions,
    channels=[{"config": "miRFP", "exposure": 300}],
    ref_channels=(Channel(config="mCitrine", exposure=600),),
    ref_frames={-1},  # last frame only
)
```

Alternatively, define the reference as a separate phase:
```python
experiment = RTMSequence(time_plan=..., channels=..., ...)
ref_phase = RTMSequence(
    time_plan={"interval": 0, "loops": 1},
    stage_positions=fov_positions,
    channels=[{"config": "mCitrine", "exposure": 600}],
    rtm_metadata={"img_type": ImgType.IMG_REF},
)
events = experiment + ref_phase
```

Both `stim_frames` and `ref_frames` accept:
- Sets: `{0, 5, 10}` — specific frames
- Ranges: `range(10, 50)` or `range(0, 50, 2)` — contiguous or strided
- Negative indices: `-1` = last frame, `-2` = second-to-last
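All three spec styles reduce to concrete timepoint indices once the number of loops is known. A small self-contained helper (illustrative only, not FARO's implementation) showing the assumed semantics:

```python
def resolve_frames(spec, n_loops):
    """Normalize a stim_frames/ref_frames spec (set, range, or any iterable,
    possibly containing negative indices) to sorted timepoint indices."""
    resolved = set()
    for f in spec:
        f = f if f >= 0 else n_loops + f  # -1 -> last frame, -2 -> second-to-last
        if 0 <= f < n_loops:              # drop out-of-range entries
            resolved.add(f)
    return sorted(resolved)
```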
`axis_order` controls the nesting of the time, position, and channel dimensions (inherited from useq's `MDASequence`). The default is `"tpcz"`:

| `axis_order` | Iteration | Use case |
|---|---|---|
| `"tpcz"` (default) | All positions at t=0, then all at t=1, ... | Maximize temporal resolution per position |
| `"ptcz"` | All timepoints at p=0, then all at p=1, ... | Complete one position before moving to the next |
```python
# Visit all 3 positions at each timepoint before advancing
seq = RTMSequence(
    time_plan={"interval": 5.0, "loops": 50},
    stage_positions=[(0, 0, 0), (100, 100, 0), (200, 200, 0)],
    channels=[{"config": "BF", "exposure": 50}],
    axis_order="tpcz",  # default: (t=0,p=0), (t=0,p=1), (t=0,p=2), (t=1,p=0), ...
)

# Complete all timepoints at each position before moving on
seq = RTMSequence(
    ...,
    axis_order="ptcz",  # (t=0,p=0), (t=1,p=0), ..., (t=49,p=0), (t=0,p=1), ...
)
```

Stimulation and reference channels are assigned per timepoint, so they work correctly regardless of axis order. For example, `stim_frames={3}` stimulates all positions at t=3, whether they are visited consecutively (`tpcz`) or spread across the run (`ptcz`).
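The two orders are just different nestings of the same loops. A self-contained sketch of the visit order (c and z axes omitted for brevity; illustrative, not useq's iterator):

```python
from itertools import product

def iteration_order(axis_order, n_t, n_p):
    """(t, p) visit order implied by the axis order string."""
    if axis_order.index("t") < axis_order.index("p"):
        # "tpcz": t is the outer loop -> visit every position at each timepoint
        return [(t, p) for t, p in product(range(n_t), range(n_p))]
    # "ptcz": p is the outer loop -> finish every timepoint at each position
    return [(t, p) for p, t in product(range(n_p), range(n_t))]
```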
When an experiment has more FOV positions than can be imaged within a single timepoint interval, FOV batching automatically partitions positions into sequential batches with adjusted timing.
```python
from faro.core.utils import check_fov_batching, apply_fov_batching

events = list(seq)

# Check whether all FOVs fit in one batch
check_fov_batching(events, time_per_fov=2.0)

# If not, split into batches with adjusted timing
events = apply_fov_batching(events, time_per_fov=2.0)
```

`check_fov_batching` computes how many FOVs fit in parallel (`interval / time_per_fov`) and reports whether batching is needed. `apply_fov_batching` offsets overflow FOVs into subsequent batches so that each batch runs within the interval. Timepoint indices are adjusted so the imaging order remains physically sensible.
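The arithmetic behind batching is simple. The sketch below (a hypothetical helper, not `faro.core.utils`) shows the assumed partitioning: `interval // time_per_fov` FOVs per batch, with overflow FOVs assigned to later batches:

```python
def assign_batches(n_fovs, interval, time_per_fov):
    """How many FOVs fit per interval, how many batches result,
    and which batch each FOV index lands in."""
    fovs_per_batch = max(1, int(interval // time_per_fov))
    n_batches = -(-n_fovs // fovs_per_batch)  # ceiling division
    batches = [fov // fovs_per_batch for fov in range(n_fovs)]
    return fovs_per_batch, n_batches, batches
```

For example, 5 FOVs at 4 s each with a 10 s interval give 2 FOVs per batch and 3 batches.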
```python
from faro.core.controller import Controller

ctrl = Controller(mic, pipeline)
ctrl.run_experiment(events, stim_mode="current")
```

`validate_events()` runs automatically before the experiment starts (disable with `validate=False`). It checks both pipeline compatibility and hardware limits.
Call `run_experiment()` once, then `continue_experiment()` to append more phases. The Analyzer (and all per-FOV tracking state) is reused, so timesteps, filenames, and particle IDs continue seamlessly.
```python
ctrl = Controller(mic, pipeline)

# Phase 1: baseline — find cells, measure growth rate
phase1 = RTMSequence(time_plan={"interval": 10, "loops": 60}, ...)
ctrl.run_experiment(phase1, validate=False)

# Analyse phase-1 results to decide what to do next
df = pd.read_parquet("tracks/000_latest.parquet")
fast_growers = df.groupby("particle")["area"].apply(lambda x: x.diff().mean())

# Phase 2: stimulate based on analysis
phase2 = RTMSequence(time_plan={"interval": 10, "loops": 120}, ...)
ctrl.continue_experiment(phase2)

# Always call finish_experiment() when done
ctrl.finish_experiment()
```

To add events while an experiment is still running, use `extend_experiment()`:
```python
ctrl.run_experiment(baseline_events, validate=False)  # runs in a background thread
ctrl.extend_experiment(extra_events)  # non-blocking, appends to the running acquisition
```

| Method | When to use |
|---|---|
| `run_experiment()` | First acquisition — creates a fresh Analyzer |
| `continue_experiment()` | Subsequent phases — reuses the Analyzer, offsets timesteps |
| `extend_experiment()` | Mid-run additions — pushes events into the running loop |
| `finish_experiment()` | Cleanup — shuts down the Analyzer, resets state |
`ControllerSimulated` loads pre-acquired images from disk instead of from the camera, enabling testing and re-analysis without hardware.

It supports both TIFF (`raw/`, `ref/` folders) and OME-Zarr (`acquisition.ome.zarr`) source layouts. When an OME-Zarr store is found, raw frames are read from zarr; reference images fall back to TIFFs in `ref/`.
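Layout detection of this kind boils down to a path check. The helper below is an illustrative sketch (not FARO's code) that assumes the folder names given above:

```python
from pathlib import Path

def detect_layout(project_path):
    """Prefer an OME-Zarr store; fall back to a TIFF folder layout."""
    root = Path(project_path)
    if (root / "acquisition.ome.zarr").exists():
        return "omezarr"
    if (root / "raw").is_dir():
        return "tiff"
    raise FileNotFoundError(f"no recognizable experiment layout in {root}")
```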
```python
from faro.core.controller import ControllerSimulated

ctrl = ControllerSimulated(mic, pipeline, old_data_project_path="/path/to/old_experiment")
ctrl.run_experiment(events, stim_mode="current")
```

Use cases:
- Testing: run the full pipeline on demo data without any microscope hardware
- Re-analysis: replay raw images through a new pipeline (different segmentation, tracking, etc.)
- Validation: verify analysis logic reproducibly on known data
See experiments/11_erk_experiments_full_fov_stim/stim_rtmsequence_demo_mic.ipynb for a working example.
The offline re-analysis pipeline (ImageProcessingPipeline_postExperiment) reprocesses images from a previous experiment with new segmentation, tracking, or feature extraction parameters — without re-acquiring.
```python
from faro.core.pipeline_post import ImageProcessingPipeline_postExperiment

pipeline = ImageProcessingPipeline_postExperiment(
    img_storage_path="/path/to/original_experiment",
    out_path="/path/to/new_output",
    events=events,
    segmentators=[SegmentationMethod("labels", SegmentorCellpose(), use_channel=0)],
    feature_extractor=FE_ErkKtr("labels"),
    tracker=TrackerTrackpy(),
    n_jobs=4,
)
pipeline.run()
```

Key features:
- Dual input format: reads from both TIFF and OME-Zarr source experiments
- Reuse old segmentations: set `use_old_segmentations=True` to skip re-segmenting and only recompute tracking/features
- Parallel FOV processing: uses `n_jobs` threads to process multiple FOVs concurrently
- Hard-linking: when outputting to OME-Zarr, raw-data resolution levels are hard-linked instead of copied (falls back to copying on network shares)
- Timestep gap correction: `correct_timestep_jumps=True` backfills missing timesteps
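Because each FOV is processed independently, the per-FOV parallelism amounts to mapping a processing function over FOV ids with a thread pool. An illustrative sketch (not the actual pipeline code):

```python
from concurrent.futures import ThreadPoolExecutor

def process_fovs(fov_ids, process_one, n_jobs=4):
    """Process independent FOVs concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(process_one, fov_ids))
```

Threads (rather than processes) are a reasonable choice here when the heavy lifting (segmentation, I/O) releases the GIL.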
See experiments/90_reanalysis/reanalysis.ipynb for a complete example.
The pipeline writes acquired images, segmentation masks, and stimulation masks to disk. Three writer backends are available:
| Writer | Format | Best for |
|---|---|---|
| `TiffWriter` | Individual TIFF files | Quick inspection, legacy compatibility |
| `OmeZarrWriter` | OME-Zarr v0.5 | Streaming acquisition, cloud-friendly, single multi-dimensional array |
| `OmeZarrWriterPlate` | OME-Zarr v0.5 (plate layout) | Multi-position experiments viewed as a spatial mosaic |
`OmeZarrWriter` streams all data into a single OME-Zarr v0.5 store. Raw images are stored as a single multi-dimensional array — (t, c, y, x) for single-position experiments or (t, p, c, y, x) for multi-position experiments. Segmentation labels are stored as NGFF label groups.
```
experiment/
├── acquisition.ome.zarr/
│   ├── 0/                 raw data array
│   └── labels/
│       ├── labels/        segmentation masks
│       └── stim_mask/     stimulation masks
└── tracks/                parquet files
```
```python
from faro.core.writers import OmeZarrWriter

writer = OmeZarrWriter(
    storage_path="/path/to/experiment",
    dtype="uint16",
    store_stim_images=False,  # True: include stim channels in the raw array
    n_timepoints=None,        # None = unbounded (resizable)
    raw_chunk_t=1,            # temporal chunk size for raw data
    label_shard_t=50,         # temporal shard size for labels
)
pipeline = ImageProcessingPipeline(
    storage_path="/path/to/experiment",
    writer=writer,
    ...
)
```

The stream is initialized automatically by the Controller before the first frame is written — no manual setup required.
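To build intuition for the `raw_chunk_t` / `label_shard_t` knobs: assuming a simple floor-division mapping from timepoint to chunk, each written frame lands in a temporal chunk (raw data) and shard (labels) as sketched below (a hypothetical helper, not the writer's internals):

```python
def frame_location(t, raw_chunk_t=1, label_shard_t=50):
    """Temporal chunk/shard index a frame at timepoint t lands in.
    raw_chunk_t=1 keeps raw writes small for streaming; label_shard_t=50
    groups label frames into larger on-disk units."""
    return {
        "raw_chunk": t // raw_chunk_t,
        "label_shard": t // label_shard_t,
    }
```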
`OmeZarrWriterPlate` stores each FOV position as a separate well in an OME-Zarr plate. When opened in napari with napari-ome-zarr, positions are tiled spatially as a mosaic rather than stacked along a position slider. This makes it easy to get an overview of all positions at once.
```
experiment/
├── acquisition.ome.zarr/
│   ├── A/
│   │   ├── 1/             well for position 0
│   │   │   └── 0/         image group (t, c, y, x)
│   │   ├── 2/             well for position 1
│   │   └── ...
└── tracks/
```
```python
from faro.core.writers import OmeZarrWriterPlate

writer = OmeZarrWriterPlate(
    storage_path="/path/to/experiment",
    dtype="uint16",
)
pipeline = ImageProcessingPipeline(
    storage_path="/path/to/experiment",
    writer=writer,
    ...
)
```

`TiffWriter` saves each frame as a separate compressed TIFF file:
```
experiment/
├── raw/        000_000.tiff, 000_001.tiff, ...
├── labels/     000_000.tiff, ...
├── stim_mask/  ...
└── tracks/     000_latest.parquet, ...
```
```python
from faro.core.writers import TiffWriter

pipeline = ImageProcessingPipeline(
    storage_path="/path/to/experiment",
    writer=TiffWriter("/path/to/experiment"),
    ...
)
```

OME-Zarr files can be viewed with napari using the napari-ome-zarr plugin. The easiest way to install napari as a standalone tool is with `uv tool`:
```shell
uv tool install "napari[pyqt6]" --with napari-ome-zarr
```

This makes napari available as a global command. Open an OME-Zarr dataset directly from the terminal:
```shell
napari /path/to/experiment/acquisition.ome.zarr
```

You can also create a desktop shortcut pointing to the napari executable for quick access. To find its location:
```shell
uv tool dir
```

Existing TIFF-based experiments can be migrated to OME-Zarr using the conversion utility:
```python
from faro.core.conversion import convert_tiff_to_omezarr

convert_tiff_to_omezarr(
    src_path="/path/to/tiff_experiment",
    dst_path="/path/to/omezarr_experiment",
)
```

The microscope provides the hardware interface. Any microscope that implements the useq-schema MDA protocol can be used; the Controller never depends on pymmcore-plus directly.
```
AbstractMicroscope            # useq MDA interface
├── PyMMCoreMicroscope        # implements via pymmcore-plus / CMMCorePlus
│   ├── MMDemo                # Micro-Manager demo hardware
│   ├── UniMMCoreSimulation   # simulated microscope
│   ├── PymmcoreProxyMic      # remote via pymmcore-proxy
│   └── pertzlab/
│       ├── Jungfrau
│       ├── Moench
│       └── Niesen
└── InscoperMicroscope        # implements via Inscoper SDK (planned)
```
| Method | Purpose |
|---|---|
| `run_mda(event_iter)` | Start MDA acquisition, returns thread handle |
| `connect_frame(callback)` | Connect `frameReady`: `callback(img, event)` |
| `disconnect_frame(callback)` | Disconnect `frameReady` |
| `cancel_mda()` | Cancel running MDA |
| `resolve_group(config_name)` | Return channel group for a config name (optional) |
| `resolve_power(channel)` | Return `(device, property, power)` (optional) |
| `validate_hardware(events)` | Check events against hardware limits (optional) |
| `init_scope()` | Load config, set up hardware |
| `post_experiment()` | Cleanup after experiment |
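The table above amounts to a small structural interface. One way to express it, and to get a hardware-free stand-in for tests, is a Python `Protocol` plus a dummy implementation (both hypothetical illustrations, not FARO's `AbstractMicroscope`):

```python
from typing import Callable, Iterable, Protocol, runtime_checkable

@runtime_checkable
class MicroscopeLike(Protocol):
    """Structural sketch of the core microscope surface."""
    def run_mda(self, event_iter: Iterable) -> object: ...
    def connect_frame(self, callback: Callable) -> None: ...
    def disconnect_frame(self, callback: Callable) -> None: ...
    def cancel_mda(self) -> None: ...
    def init_scope(self) -> None: ...
    def post_experiment(self) -> None: ...

class DummyScope:
    """Minimal in-memory stand-in: 'acquiring' synchronously invokes the
    connected frameReady callbacks once per event with a fake image
    (the real run_mda returns a thread handle instead)."""
    def __init__(self):
        self._callbacks = []
    def run_mda(self, event_iter):
        for event in event_iter:
            for cb in self._callbacks:
                cb("fake_image", event)
    def connect_frame(self, callback):
        self._callbacks.append(callback)
    def disconnect_frame(self, callback):
        self._callbacks.remove(callback)
    def cancel_mda(self): ...
    def init_scope(self): ...
    def post_experiment(self): ...
```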
`PyMMCoreMicroscope` implements the MDA methods via `CMMCorePlus`. Concrete subclasses typically only need `init_scope()`.

The `PyMMCoreMicroscope` branch uses pymmcore-plus as its hardware layer. Each microscope needs a Micro-Manager configuration file with:
- Channel presets for each fluorophore (e.g., `GFP`, `mCherry`, `miRFP`)
- A `System > Startup` preset for initial hardware configuration
- Device properties for cameras, light sources, filter wheels, etc.
For microscopes with controllable light source power, define a `POWER_PROPERTIES` mapping so `PowerChannel` objects resolve to the correct device:

```python
POWER_PROPERTIES = {
    "CyanStim": ("Spectra", "Cyan_Level"),  # config_name -> (device, property)
}
```

Create a new file in `faro/microscope/` and inherit from `PyMMCoreMicroscope`:
```python
import pymmcore_plus
from faro.microscope.pymmcore import PyMMCoreMicroscope

class MyScope(PyMMCoreMicroscope):
    MICROMANAGER_PATH = "C:\\Program Files\\Micro-Manager-2.0"
    MICROMANAGER_CONFIG = "path/to/config.cfg"
    CHANNEL_GROUP = "Channel"

    def __init__(self):
        super().__init__()
        pymmcore_plus.use_micromanager(self.MICROMANAGER_PATH)
        self.mmc = pymmcore_plus.CMMCorePlus()
        self.init_scope()

    def init_scope(self):
        self.mmc.loadSystemConfiguration(self.MICROMANAGER_CONFIG)
        self.mmc.setChannelGroup(channelGroup=self.CHANNEL_GROUP)

    def post_experiment(self):
        pass  # optional cleanup
```

For DMD support, set up `self.dmd` in `__init__()`; see `pertzlab/moench.py` for an example.
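The `POWER_PROPERTIES` lookup described above reduces to a plain dictionary resolution. The function below is an illustrative stand-in for the optional `resolve_power()` hook, assuming the mapping from the earlier example:

```python
# Hypothetical mapping, mirroring the POWER_PROPERTIES example above
POWER_PROPERTIES = {
    "CyanStim": ("Spectra", "Cyan_Level"),
}

def resolve_power(channel_config, power):
    """Resolve a channel config to (device, property, power);
    return None when the channel has no controllable power."""
    entry = POWER_PROPERTIES.get(channel_config)
    if entry is None:
        return None
    device, prop = entry
    return (device, prop, power)
```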
This project uses uv for dependency management.
```shell
git clone https://github.com/pertzlab/faro.git
cd faro
uv sync
```

Optional dependency groups are available for segmentation backends and simulation:
| Extra | Packages | Use case |
|---|---|---|
| `cellpose` | cellpose, torch | Cellpose segmentation |
| `stardist` | stardist, tensorflow, csbdeep | StarDist segmentation |
| `convpaint` | napari-convpaint, scipy | ConvPaint segmentation |
| `virtual_microscope` | virtual-microscope | Fully simulated microscope with synthetic cell images. For a quick demo, the built-in Micro-Manager demo adapter works without this extra. |
Install one or more extras with `uv sync`:

```shell
uv sync --extra cellpose
uv sync --extra cellpose --extra stardist
uv sync --extra virtual_microscope
```

Alternatively, with pip (installs the package with all its dependencies):
```shell
pip install ".[cellpose]"
pip install ".[cellpose,stardist]"
```

Contributions are welcome. Please submit pull requests or open issues.
MIT License. See LICENSE for details.
