Merged (30 commits)
b070be2
Support NeuroConv 0.9.1
Feb 13, 2026
3008eb5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 13, 2026
42a4812
Pin dependencies and remove stale dandi-staging workaround
bendichter Feb 13, 2026
5432d4d
Apply suggestion from @bendichter
bendichter Feb 19, 2026
4ed1670
Support NeuroConv 0.9.3
bendichter Feb 20, 2026
cdeb453
Fix metadata type coercion for schemas with patternProperties
bendichter Feb 20, 2026
d5bc913
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 20, 2026
1ce8ec0
Simplify number coercion to a single float field
bendichter Feb 20, 2026
dc6f954
Fix emission_lambda type error for BrukerTiff converter
bendichter Feb 20, 2026
e8b6225
Add WhiteMatterRecordingInterface to supported interfaces
bendichter Feb 20, 2026
d2530ed
Regenerate storybook schemas for updated interfaces
bendichter Feb 20, 2026
96592fe
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 20, 2026
9cc92df
Merge pull request #1060 from NeurodataWithoutBorders/add-whitematter…
bendichter Feb 20, 2026
4ba0d16
Add Plexon2RecordingInterface to supported interfaces
bendichter Feb 20, 2026
c6ceacb
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 20, 2026
aae1497
Merge pull request #1061 from NeurodataWithoutBorders/add-plexon2-rec…
bendichter Feb 20, 2026
0590c10
Restore file-path and directory-path format types in schemas
bendichter Feb 20, 2026
66eec01
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 20, 2026
a02494b
Gitignore auto-generated storybook schema files
bendichter Feb 20, 2026
c4afb83
Merge pull request #1062 from NeurodataWithoutBorders/fix-schema-form…
bendichter Feb 20, 2026
12527d3
Apply suggestion from @rly
rly Mar 3, 2026
8f25716
Update workflows to use macos-15-intel
rly Mar 3, 2026
16f35f1
Keep numcodecs env comment
rly Mar 3, 2026
c380435
Use macos-15-intel for mac build/deployment
rly Mar 3, 2026
b33d2f8
Apply suggestion from @rly
rly Mar 3, 2026
728c5bb
Apply suggestion from @bendichter
bendichter Mar 3, 2026
8c22189
Fix trailing comma in supported_interfaces.json
bendichter Mar 4, 2026
1111698
Trigger CI to populate macos-15-intel ephys cache
bendichter Mar 4, 2026
ea7c00f
Retrigger CI after clearing cache space
bendichter Mar 4, 2026
4fe2908
Bump ephys cache key to v2 to bypass stuck reservation
bendichter Mar 4, 2026
2 changes: 1 addition & 1 deletion .github/workflows/build_and_deploy_mac.yml
@@ -7,7 +7,7 @@ on:

jobs:
deploy-on-mac:
runs-on: macos-13
runs-on: macos-15-intel
# NOTE: macos-latest is an arm64 mac, and the dependency sonpy (Spike2RecordingInterface) has a .so file that
# works only on mac x64. This causes issues building and deploying on mac arm64. So we use macos-15-intel (x64)
# to build and deploy both the x64 and arm64 versions of the app.
2 changes: 1 addition & 1 deletion .github/workflows/example_data_cache.yml
@@ -21,7 +21,7 @@ jobs:
fail-fast: false
matrix:
python-version: ["3.12"]
os: [ubuntu-latest, windows-latest, macos-latest, macos-13]
os: [ubuntu-latest, windows-latest, macos-latest, macos-15-intel]

steps:

2 changes: 1 addition & 1 deletion .github/workflows/testing_dev.yml
@@ -26,7 +26,7 @@ jobs:
- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
- os: macos-15-intel # Mac x64 runner
label: environments/environment-MAC-intel.yml

- os: windows-latest
2 changes: 1 addition & 1 deletion .github/workflows/testing_dev_e2e_with_live_services.yml
@@ -28,7 +28,7 @@ jobs:
- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
- os: macos-15-intel # Mac x64 runner
label: environments/environment-MAC-intel.yml

- os: windows-latest
3 changes: 2 additions & 1 deletion .github/workflows/testing_dev_with_live_services.yml
@@ -26,9 +26,10 @@ jobs:
- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
- os: macos-15-intel # Mac x64 runner
label: environments/environment-MAC-intel.yml


- os: windows-latest
label: environments/environment-Windows.yml

5 changes: 3 additions & 2 deletions .github/workflows/testing_flask_build_and_dist.yml
@@ -22,9 +22,10 @@ jobs:
- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
- os: macos-15-intel # Mac x64 runner
label: environments/environment-MAC-intel.yml


- os: windows-latest
label: environments/environment-Windows.yml

@@ -76,7 +77,7 @@ jobs:
run: pip uninstall matplotlib --yes

# Fix for macos build - remove bad sonpy file
- if: matrix.os == 'macos-latest' || matrix.os == 'macos-13'
- if: matrix.os == 'macos-latest'
run: rm -f "$CONDA_PREFIX/lib/python3.9/site-packages/sonpy/linux/sonpy.so"

- name: Build PyFlask distribution
2 changes: 1 addition & 1 deletion .github/workflows/testing_pipelines.yml
@@ -24,7 +24,7 @@ jobs:
- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
- os: macos-15-intel # Mac x64 runner
label: environments/environment-MAC-intel.yml

- os: windows-latest
6 changes: 6 additions & 0 deletions .gitignore
@@ -42,3 +42,9 @@ src/build

# PyCharm
.idea/

# Auto-generated storybook schemas (regenerate with: python generateInterfaceSchema.py)
stories/inputs/interface_schemas/
stories/pages/SourceData.stories.js

ophys_testing_data/
16 changes: 7 additions & 9 deletions environments/environment-Linux.yml
@@ -15,13 +15,11 @@ dependencies:
- flask-cors == 4.0.0
- flask_restx == 1.1.0
- werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.1
- pandas < 3.0 # pandas 3.0 returns read-only arrays, breaking spikeinterface Phy extractor
- neo == 0.14.1 # 0.14.2 is not compatible with neuroconv < 0.7.5
- scikit-learn == 1.4.0 # Tutorial data generation
- tqdm_publisher >= 0.0.1 # Progress bars
- tzlocal >= 5.2 # Frontend timezone handling
- ndx-pose == 0.1.1
- nwbinspector == 0.6.2
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.9.3
- scikit-learn == 1.6.1 # Tutorial data generation
- tqdm_publisher == 0.1.1 # Progress bars
- tzlocal == 5.3.1 # Frontend timezone handling
- ndx-pose == 0.2.2
- nwbinspector == 0.6.5
- tables
- numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
- numcodecs == 0.15.1 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
19 changes: 9 additions & 10 deletions environments/environment-MAC-apple-silicon.yml
@@ -9,7 +9,7 @@ dependencies:
- lxml = 4.9.3 # PyPI build fails due to x64/arm64 mismatch so install from conda-forge
- pyedflib = 0.1.38 # PyPI build fails due to x64/arm64 mismatch so install from conda-forge
- numpy # May have x64/arm64 mismatch issues so install from conda-forge
- pytables = 3.9.1 # PyPI build fails on arm64; must be <3.9.2 per neuroconv 0.6.1 constraint
- pytables = 3.10.2 # PyPI build fails on arm64 so install from conda-forge (used by neuroconv deps)
- jsonschema = 4.18.0 # Also installs jsonschema-specifications
- pip
- pip:
@@ -23,12 +23,11 @@ dependencies:
- werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
# NOTE: the NeuroConv wheel on PyPI includes sonpy which is not compatible with arm64, so build and install
# NeuroConv from GitHub, which will remove the sonpy dependency when building from Mac arm64
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.1
- pandas < 3.0 # pandas 3.0 returns read-only arrays, breaking spikeinterface Phy extractor
- neo == 0.14.1 # 0.14.2 is not compatible with neuroconv < 0.7.5
- scikit-learn == 1.4.0 # Tutorial data generation
- tqdm_publisher >= 0.0.1 # Progress bars
- tzlocal >= 5.2 # Frontend timezone handling
- ndx-pose == 0.1.1
- nwbinspector == 0.6.2
- numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
- h5py < 3.13 # 3.13+ uses HDF5 1.14.4 features not in pytables 3.10.2's bundled HDF5 1.14.2
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.9.3
- scikit-learn == 1.6.1 # Tutorial data generation
- tqdm_publisher == 0.1.1 # Progress bars
- tzlocal == 5.3.1 # Frontend timezone handling
- ndx-pose == 0.2.2
- nwbinspector == 0.6.5
- numcodecs == 0.15.1 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
17 changes: 7 additions & 10 deletions environments/environment-MAC-intel.yml
@@ -11,22 +11,19 @@ dependencies:
- pip:
- setuptools==70.0.0
- PyInstaller==6.7.0
- scipy<1.12.0 # Fix needed for scipy._lib._testutils
- chardet == 5.1.0
- configparser == 6.0.0
- flask == 2.3.2
- flask-cors == 4.0.0
- flask_restx == 1.1.0
- werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.1
- pandas < 3.0 # pandas 3.0 returns read-only arrays, breaking spikeinterface Phy extractor
- neo == 0.14.1 # 0.14.2 is not compatible with neuroconv < 0.7.5
- scikit-learn == 1.4.0 # Tutorial data generation
- tqdm_publisher >= 0.0.1 # Progress bars
- tzlocal >= 5.2 # Frontend timezone handling
- ndx-pose == 0.1.1
- nwbinspector == 0.6.2
- numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.9.3
- scikit-learn == 1.6.1 # Tutorial data generation
- tqdm_publisher == 0.1.1 # Progress bars
- tzlocal == 5.3.1 # Frontend timezone handling
- ndx-pose == 0.2.2
- nwbinspector == 0.6.5
- numcodecs == 0.15.1 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
- h5py == 3.12.1 # 3.13.0 uses features in hdf5 1.14.4 that is not available in earlier hdf5 libs packaged
# with tables==3.9.1 (latest that can be used by neuroconv 0.6.0).
# h5py and tables need to be consistent for electron build for unknown reason
16 changes: 7 additions & 9 deletions environments/environment-Windows.yml
@@ -18,13 +18,11 @@ dependencies:
- flask-cors === 3.0.10
- flask_restx == 1.1.0
- werkzeug < 3.0 # werkzeug 3.0 deprecates features used by flask 2.3.2. Remove this when updating flask.
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.6.1
- pandas < 3.0 # pandas 3.0 returns read-only arrays, breaking spikeinterface Phy extractor
- neo == 0.14.1 # 0.14.2 is not compatible with neuroconv < 0.7.5
- scikit-learn == 1.4.0 # Tutorial data generation
- tqdm_publisher >= 0.0.1 # Progress bars
- tzlocal >= 5.2 # Frontend timezone handling
- ndx-pose == 0.1.1
- nwbinspector == 0.6.2
- neuroconv[dandi,compressors,ecephys,ophys,behavior,text] == 0.9.3
- scikit-learn == 1.6.1 # Tutorial data generation
- tqdm_publisher == 0.1.1 # Progress bars
- tzlocal == 5.3.1 # Frontend timezone handling
- ndx-pose == 0.2.2
- nwbinspector == 0.6.5
- tables
- numcodecs < 0.16.0 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
- numcodecs == 0.15.1 # numcodecs 0.16.0 is not compatible with zarr 2.18.5
2 changes: 1 addition & 1 deletion src/electron/frontend/core/components/Table.js
@@ -377,7 +377,7 @@ export class Table extends LitElement {
return;
}

const isUndefined = value == "";
const isUndefined = value === "" || value == null;

if (isUndefined && required) {
instanceThis.#handleValidationResult(
@@ -28,7 +28,7 @@ const propsToIgnore = {
es_key: true,
exclude_shanks: true,
load_sync_channel: true,
stream_id: true, // NOTE: May be desired for other interfaces
// stream_id: true, // NOTE: Unhidden — required by SpikeGLX in neuroconv >= 0.9.0
nsx_override: true,
combined: true,
plane_no: true,
106 changes: 88 additions & 18 deletions src/pyflask/manageNeuroconv/manage_neuroconv.py
@@ -143,18 +143,45 @@ def resolve_references(schema: dict, root_schema: Optional[dict] = None) -> dict

if "$ref" in schema:
resolver = RefResolver.from_schema(root_schema)
return resolver.resolve(schema["$ref"])[1]
resolved = resolver.resolve(schema["$ref"])[1]
return resolve_references(resolved, root_schema)

if "properties" in schema:
for key, prop_schema in schema["properties"].items():
schema["properties"][key] = resolve_references(prop_schema, root_schema)

if "patternProperties" in schema:
for key, prop_schema in schema["patternProperties"].items():
schema["patternProperties"][key] = resolve_references(prop_schema, root_schema)

if "items" in schema:
schema["items"] = resolve_references(schema["items"], root_schema)

return schema
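
The key change in this hunk is that resolution now recurses into the resolved target, so chained `$ref`s (a definition that itself points at another definition) flatten fully, and `patternProperties` subschemas are resolved too. A minimal sketch of that behavior, using a toy path-walking resolver in place of `jsonschema.RefResolver` (the `resolve_refs` helper and the example schema below are illustrative, not the actual GUIDE code):

```python
def resolve_refs(schema, root):
    # Resolve a local "#/..." reference by walking the root schema,
    # then recurse, since the resolved target may itself contain $refs.
    if "$ref" in schema:
        target = root
        for part in schema["$ref"].lstrip("#/").split("/"):
            target = target[part]
        return resolve_refs(target, root)
    for key in ("properties", "patternProperties"):
        for name, sub in schema.get(key, {}).items():
            schema[key][name] = resolve_refs(sub, root)
    if "items" in schema:
        schema["items"] = resolve_refs(schema["items"], root)
    return schema

root = {
    "definitions": {
        # Device is a chained reference: without recursion after
        # resolution, the resolver would stop at this inner $ref.
        "Device": {"$ref": "#/definitions/BaseDevice"},
        "BaseDevice": {"type": "object", "properties": {"name": {"type": "string"}}},
    },
    "properties": {"device": {"$ref": "#/definitions/Device"}},
}
resolved = resolve_refs(root, root)
```

With the pre-patch single-step resolution, `resolved["properties"]["device"]` would still be `{"$ref": "#/definitions/BaseDevice"}`; with recursion it is the fully expanded object schema.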


def replace_nan_strings(obj):
"""Recursively replace string 'NaN' values with float NaN throughout a nested dict/list structure.

This handles cases where the metadata schema is incomplete (e.g. BrukerTiffSinglePlaneConverter
doesn't provide Ophys schema), so schema-based coercion in replace_none_with_nan misses some fields.
String 'NaN' values are always JavaScript NaN artifacts from JSON serialization and never valid metadata.
"""
if isinstance(obj, dict):
for key, value in obj.items():
if value == "NaN":
obj[key] = math.nan
else:
replace_nan_strings(value)
elif isinstance(obj, list):
for i, item in enumerate(obj):
if item == "NaN":
obj[i] = math.nan
else:
replace_nan_strings(item)
return obj
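
The new helper can be exercised standalone. The sketch below reproduces it and shows the case called out in the docstring, a converter whose schema omits the Ophys section, so a string `"NaN"` survives schema-based coercion (the example metadata shape is illustrative):

```python
import math

def replace_nan_strings(obj):
    # Recursively swap the string "NaN" for float('nan') in dicts and lists.
    if isinstance(obj, dict):
        for key, value in obj.items():
            if value == "NaN":
                obj[key] = math.nan
            else:
                replace_nan_strings(value)
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            if item == "NaN":
                obj[i] = math.nan
            else:
                replace_nan_strings(item)
    return obj

# "NaN" arrives from the frontend because JSON has no NaN literal.
metadata = {"Ophys": {"ImagingPlane": [{"emission_lambda": "NaN", "name": "plane0"}]}}
cleaned = replace_nan_strings(metadata)
```

Mutation happens in place; the return value is only a convenience.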


def replace_none_with_nan(json_object: dict, json_schema: dict) -> dict:
"""
Recursively search a JSON object and replace None values with NaN where appropriate.
@@ -178,23 +205,22 @@ def coerce_schema_compliance_recursive(obj, schema):

if regex.match(key):
coerce_schema_compliance_recursive(value, pattern_schema)

elif key in schema.get("properties", {}):
# Also check regular properties (not elif — schemas can have both patternProperties and properties)
if key in schema.get("properties", {}):
prop_schema = schema["properties"][key]
if prop_schema.get("type") == "number" and (value is None or value == "NaN"):
obj[key] = (
math.nan
) # Turn None into NaN if a number is expected (JavaScript JSON.stringify turns NaN into None)
elif prop_schema.get("type") == "number" and isinstance(value, int):
obj[key] = float(
value
) # Turn integer into float if a number, the JSON Schema equivalent to float, is expected (JavaScript coerces floats with trailing zeros to integers)
if prop_schema.get("type") == "number" and not isinstance(value, float):
if value is None or value == "NaN":
obj[key] = math.nan
else:
try:
obj[key] = float(value)
except (ValueError, TypeError):
pass
else:
coerce_schema_compliance_recursive(value, prop_schema)
elif isinstance(obj, list):
for item in obj:
coerce_schema_compliance_recursive(
item, schema.get("items", schema if "properties" else {})
) # NEUROCONV PATCH
coerce_schema_compliance_recursive(item, schema.get("items", schema if "properties" in schema else {}))

return obj
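
The commit message "Simplify number coercion to a single float field" describes exactly this branch: any non-float value in a `"number"`-typed field is funneled through one path. A self-contained sketch of just that rule (helper name and example schema are illustrative):

```python
import math

def coerce_numbers(obj, schema):
    # For every property typed "number", ensure the value is a float:
    # None / "NaN" become NaN; ints and numeric strings become floats;
    # anything unconvertible is left alone.
    for key, value in obj.items():
        prop_schema = schema.get("properties", {}).get(key, {})
        if prop_schema.get("type") == "number" and not isinstance(value, float):
            if value is None or value == "NaN":
                obj[key] = math.nan
            else:
                try:
                    obj[key] = float(value)
                except (ValueError, TypeError):
                    pass
    return obj

schema = {"properties": {"rate": {"type": "number"}, "gain": {"type": "number"}}}
row = coerce_numbers({"rate": 30000, "gain": None}, schema)
```

This replaces the previous two special cases (None/"NaN" and int) with one uniform conversion, which is why fields like BrukerTiff's `emission_lambda` no longer trip type validation.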

@@ -363,7 +389,7 @@ class CustomNWBConverter(NWBConverter):

# Handle temporal alignment inside the converter
# TODO: this currently works off of cross-scoping injection of `alignment_info` - refactor to be more explicit
def temporally_align_data_interfaces(self):
def temporally_align_data_interfaces(self, metadata=None, conversion_options=None):
set_interface_alignment(self, alignment_info=alignment_info)

# From previous issue regarding SpikeGLX not generating previews of correct size
@@ -596,7 +622,6 @@ def on_recording_interface(name, recording_interface):

# Configure electrode columns
defs["ElectrodeColumn"] = electrode_def
defs["ElectrodeColumn"]["required"] = list(electrode_def["properties"].keys())

new_electrodes_properties = {
properties["name"]: {key: value for key, value in properties.items() if key != "name"}
@@ -610,6 +635,26 @@ def on_recording_interface(name, recording_interface):
"additionalProperties": True, # Allow for new columns
}

if has_electrodes:
# Ensure ElectrodeColumns includes entries for all Electrode schema properties
# (needed for frontend linked-table validation in neuroconv >= 0.6.2)
existing_electrode_columns = ecephys_metadata.get("ElectrodeColumns", [])
existing_col_names = {col["name"] for col in existing_electrode_columns}
for prop_name, prop_info in new_electrodes_properties.items():
if prop_name not in existing_col_names:
# Infer data_type from schema type (required by update_recording_properties_from_table_as_json)
schema_type = prop_info.get("type", "str")
data_type = {"number": "float64", "integer": "int64", "boolean": "bool", "array": "object"}.get(
schema_type, "str"
)
existing_electrode_columns.append(
{
"name": prop_name,
"description": prop_info.get("description", "No description."),
"data_type": data_type,
}
)
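
The inline dict lookup that infers `data_type` from the schema type can be factored out and checked in isolation. A sketch under the same mapping used above (`infer_data_type` is an illustrative name, not a function in the codebase):

```python
# Map a JSON Schema "type" to the dtype string expected by the column
# metadata; anything unrecognized falls back to "str".
SCHEMA_TYPE_TO_DTYPE = {
    "number": "float64",
    "integer": "int64",
    "boolean": "bool",
    "array": "object",
}

def infer_data_type(prop_info: dict) -> str:
    return SCHEMA_TYPE_TO_DTYPE.get(prop_info.get("type", "str"), "str")

dtype = infer_data_type({"type": "number", "description": "Impedance of the channel."})
```

Strings (and any schema with no `type` at all) deliberately map to `"str"`, the safest default for a table column.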

if has_units:

unitprops_def = defs["UnitProperties"]
@@ -619,7 +664,6 @@ def on_recording_interface(name, recording_interface):

# Configure electrode columns
defs["UnitColumn"] = unitprops_def
defs["UnitColumn"]["required"] = list(unitprops_def["properties"].keys())

new_units_properties = {
properties["name"]: {key: value for key, value in properties.items() if key != "name"}
@@ -633,6 +677,31 @@ def on_recording_interface(name, recording_interface):
"additionalProperties": True, # Allow for new columns
}

# Ensure UnitColumns includes entries for all Unit schema properties
# (needed for frontend linked-table validation in neuroconv >= 0.6.2)
existing_unit_columns = metadata["Ecephys"].get("UnitColumns", [])
existing_col_names = {col["name"] for col in existing_unit_columns}
for prop_name, prop_info in new_units_properties.items():
if prop_name not in existing_col_names:
schema_type = prop_info.get("type", "str")
data_type = {"number": "float64", "integer": "int64", "boolean": "bool", "array": "object"}.get(
schema_type, "str"
)
existing_unit_columns.append(
{
"name": prop_name,
"description": prop_info.get("description", "No description."),
"data_type": data_type,
}
)

# Allow additional properties on Device definitions (e.g. manufacturer from neuroconv)
for modality_key in ("Ecephys", "Ophys"):
modality_schema = schema.get("properties", {}).get(modality_key, {})
device_def = modality_schema.get("definitions", {}).get("Device", {})
if device_def:
device_def["additionalProperties"] = True
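
Flipping `additionalProperties` to `True` is what lets metadata carrying extra Device keys (such as the `manufacturer` field neuroconv now emits) pass validation instead of being rejected as unknown properties. A toy model of that jsonschema behavior (`validates` is an illustrative helper, not part of the codebase or of jsonschema):

```python
def validates(instance: dict, schema: dict) -> bool:
    # Toy model of additionalProperties: when False, any key outside
    # "properties" fails validation; when True (or absent), it passes.
    if schema.get("additionalProperties", True):
        return True
    return set(instance) <= set(schema.get("properties", {}))

device_def = {"properties": {"name": {}, "description": {}}, "additionalProperties": False}
device = {"name": "Probe0", "manufacturer": "IMEC"}

strict = validates(device, device_def)   # extra "manufacturer" key rejected
device_def["additionalProperties"] = True
relaxed = validates(device, device_def)  # same metadata now accepted
```

The real check is of course done by the jsonschema validator against the full definition; this only illustrates why the one-line flag change fixes the rejection.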

# TODO: generalize logging stuff
log_base = GUIDE_ROOT_FOLDER / "logs"
log_base.mkdir(exist_ok=True)
@@ -1157,6 +1226,7 @@ def get_conversion_info(info: dict) -> dict:

# Ensure Ophys NaN values are resolved
resolved_metadata = replace_none_with_nan(info["metadata"], resolve_references(converter.get_metadata_schema()))
replace_nan_strings(resolved_metadata)

ecephys_metadata = resolved_metadata.get("Ecephys")

@@ -1367,7 +1437,7 @@ def upload_folder_to_dandi(
return automatic_dandi_upload(
dandiset_id=dandiset_id,
nwb_folder_path=Path(nwb_folder_path),
staging=sandbox, # Map sandbox parameter to staging for external API
sandbox=sandbox,
cleanup=cleanup,
number_of_jobs=number_of_jobs or 1,
number_of_threads=number_of_threads or 1,
@@ -1400,7 +1470,7 @@ def upload_project_to_dandi(
return automatic_dandi_upload(
dandiset_id=dandiset_id,
nwb_folder_path=CONVERSION_SAVE_FOLDER_PATH / project, # Scope valid DANDI upload paths to GUIDE projects
staging=sandbox, # Map sandbox parameter to staging for external API
sandbox=sandbox,
cleanup=cleanup,
number_of_jobs=number_of_jobs,
number_of_threads=number_of_threads,