📝 Walkthrough

Adds attachment renaming support and compose endpoints (layer/mag/attachment); threads attachment renamings through dataset update flows; renames LayerLink fields in the frontend; bumps the API version to v13; converts many List usages to Seq; extends DataLayerAttachments with helper operations; adds routes and minor build/message updates.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala (1)
126-137: ⚠️ Potential issue | 🟠 Major

Pattern matching assumes `List` but the parameter is now `Seq`.

The `magIter` function signature changed to accept `Seq[Vec3Int]`, but the pattern matching on lines 128-129 uses List-specific syntax (`List()` and `head :: tail`). If the passed `Seq` is not a `List` (e.g., a `Vector`), neither pattern matches and the match throws a `scala.MatchError` at runtime.

Since `dataLayer.resolutions` (line 139) is now `Seq[Vec3Int]`, and `sortBy` preserves the collection type, this could cause issues if the underlying implementation isn't a `List`.

🐛 Proposed fix using Seq-compatible pattern matching

```diff
-    def magIter(positions: List[Vec3Int], remainingMags: Seq[Vec3Int]): Fox[Option[(Vec3Int, Vec3Int)]] =
-      remainingMags match {
-        case List() => Fox.successful(None)
-        case head :: tail =>
+    def magIter(positions: List[Vec3Int], remainingMags: Seq[Vec3Int]): Fox[Option[(Vec3Int, Vec3Int)]] =
+      remainingMags match {
+        case Seq() => Fox.successful(None)
+        case head +: tail =>
           (for {
             foundPosition <- searchPositionIter(positions, head)
           } yield
```

Alternatively, convert to List explicitly at the call site on line 139:

```diff
-    magIter(createPositions(dataLayer).distinct, dataLayer.resolutions.sortBy(_.maxDim))
+    magIter(createPositions(dataLayer).distinct, dataLayer.resolutions.sortBy(_.maxDim).toList)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala` around lines 126 - 137, magIter currently patterns on List-specific constructors but its parameter is a Seq[Vec3Int]; change magIter to handle Seq safely (e.g., match on Seq() and head +: tail or use .toList at the start) so it works for non-List Seq implementations, and ensure callers such as where dataLayer.resolutions are sorted (the call that invokes magIter) pass the expected collection type or explicitly convert with .toList; update magIter and its recursive call to use the Seq-compatible pattern (or convert once to List before recursion) and keep references to magIter and searchPositionIter consistent.
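The failure mode can be reproduced with a small standalone sketch. Here `firstPositive` is an illustrative stand-in for `magIter` (with plain `Int` in place of `Vec3Int`); the point is that `Seq()` and `+:` match any `Seq` implementation, whereas `List()` and `::` only match a `List`:

```scala
// Seq-compatible recursion: matches empty Vector and empty List alike.
// firstPositive is an illustrative stand-in, not the project's magIter.
def firstPositive(mags: Seq[Int]): Option[Int] =
  mags match {
    case Seq()        => None
    case head +: tail => if (head > 0) Some(head) else firstPositive(tail)
  }

// With List-specific patterns (`case List() =>`, `case head :: tail =>`),
// passing a Vector would throw scala.MatchError, since Vector is a Seq but not a List.
val found = firstPositive(Vector(0, -1, 3))
```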
🧹 Nitpick comments (1)
build.sbt (1)
28-30: Consider removing this dev-only change before merge.

The PR objectives mention "remove dev-only changes" as a TODO. This `cancelable` setting is a developer convenience and may fall into that category.

If this is intentional to keep, consider using the modern sbt 1.x syntax for consistency with the rest of the file:

```diff
 // Allow Ctrl+C in interactive sbt to cancel commands, not sbt itself
-cancelable in Global := true
+Global / cancelable := true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@build.sbt` around lines 28 - 30, The line "cancelable in Global := true" is a dev-only sbt setting that should be removed before merge (or converted to the modern sbt 1.x syntax if you intend to keep it); either delete that setting from build.sbt, or replace it with the 1.x form "Global / cancelable := true" and gate it behind a dev-only check (e.g., an environment or system property) so it does not affect CI/production builds—look for the cancelable setting in build.sbt to apply the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/models/dataset/ComposeService.scala`:
- Around line 168-169: The duplicate check on updatedLayer.mags is only using
_.mag.maxDim which allows different mag vectors with the same max to be treated
as duplicates; change the uniqueness check to compare the full mag vector (e.g.
use distinctBy(_.mag) or equivalent) so ComposeService's updatedLayer.mags are
deduplicated by the entire mag value rather than just maxDim, keeping the rest
of the Fox.fromBool(...).
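The difference described above can be sketched in isolation (tuples stand in for the project's `Vec3Int`; all names here are illustrative, not the actual ComposeService code):

```scala
// Mags represented as (x, y, z) tuples purely for illustration.
val mags = Seq((2, 2, 1), (1, 2, 2), (2, 2, 2))

def maxDim(m: (Int, Int, Int)): Int = m._1 max m._2 max m._3

// Deduplicating by maxDim conflates all three mags (maxDim == 2 for each):
val byMaxDim = mags.distinctBy(maxDim)

// Deduplicating by the full vector keeps the three distinct mags apart:
val byFullMag = mags.distinctBy(identity)
```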
In `@app/models/dataset/DatasetService.scala`:
- Around line 366-375: applyAttachmentRenamings currently builds a single
renamingMap and calls renamingMap(attachment.name) for every attachment across
all dataLayers which throws for unrelated names and ignores layerName; fix by
grouping attachmentRenamings by layerName into a Map[String, Map[String,String]]
and when mapping dataLayers use the specific layer's renaming map (e.g., inside
existingDataSource.dataLayers.map { dl => val layerRenamings =
grouped.getOrElse(dl.name, Map.empty); dl.mapped(attachment =>
layerRenamings.get(attachment.name).map(newName => attachment.copy(name =
newName)).getOrElse(attachment)) } ), so lookups are partial and renames are
applied only to the matching layer.
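The grouping step described above can be sketched like this (the `AttachmentRenaming` case class is a hypothetical stand-in for the real renaming payload type):

```scala
// Hypothetical stand-in for the real attachment renaming payload.
case class AttachmentRenaming(layerName: String, oldName: String, newName: String)

val renamings = Seq(
  AttachmentRenaming("color", "mesh_a", "mesh_b"),
  AttachmentRenaming("segmentation", "agglomerate_1", "agglomerate_2")
)

// layerName -> (oldName -> newName), so lookups stay partial per layer:
val grouped: Map[String, Map[String, String]] =
  renamings.groupBy(_.layerName).map { case (layer, rs) =>
    layer -> rs.map(r => r.oldName -> r.newName).toMap
  }

// Unrelated layers get an empty map instead of a key-not-found failure:
def renamingsFor(layer: String): Map[String, String] =
  grouped.getOrElse(layer, Map.empty)
```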
- Around line 453-459: The current applyAttachmentUpdates drops existing
attachments when attachmentUpdatesOpt is None (meaning the field was omitted),
causing unintended deletions; change the logic in applyAttachmentUpdates so that
if attachmentUpdatesOpt is None you simply return existingAttachmentsOpt
(preserve existing attachments), if both existingAttachmentsOpt and
attachmentUpdatesOpt are Some call
existingAttachments.dropMissing(attachmentUpdates) as now, and if existing is
None but attachmentUpdatesOpt is Some return that attachmentUpdates; update
references: applyAttachmentUpdates, existingAttachmentsOpt,
attachmentUpdatesOpt, DataLayerAttachments.dropMissing, and ensure downstream
removedPaths/deletePaths only act when attachments were explicitly updated.
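The three cases described above map onto a single match over the pair of `Option`s. A sketch with a placeholder `Attachments` type (here `dropMissing` is modeled as name intersection, which is an assumption, not the project's actual implementation):

```scala
// Placeholder attachment container; dropMissing modeled as name intersection.
case class Attachments(names: Set[String]) {
  def dropMissing(updates: Attachments): Attachments =
    Attachments(names intersect updates.names)
}

def applyAttachmentUpdates(existing: Option[Attachments],
                           updates: Option[Attachments]): Option[Attachments] =
  (existing, updates) match {
    case (_, None)             => existing                 // field omitted: preserve, don't delete
    case (Some(ex), Some(upd)) => Some(ex.dropMissing(upd))
    case (None, some)          => some                     // nothing existed: take updates as-is
  }
```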
---
Outside diff comments:
In
`@webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala`:
- Around line 126-137: magIter currently patterns on List-specific constructors
but its parameter is a Seq[Vec3Int]; change magIter to handle Seq safely (e.g.,
match on Seq() and head +: tail or use .toList at the start) so it works for
non-List Seq implementations, and ensure callers such as where
dataLayer.resolutions are sorted (the call that invokes magIter) pass the
expected collection type or explicitly convert with .toList; update magIter and
its recursive call to use the Seq-compatible pattern (or convert once to List
before recursion) and keep references to magIter and searchPositionIter
consistent.
---
Nitpick comments:
In `@build.sbt`:
- Around line 28-30: The line "cancelable in Global := true" is a dev-only sbt
setting that should be removed before merge (or converted to the modern sbt 1.x
syntax if you intend to keep it); either delete that setting from build.sbt, or
replace it with the 1.x form "Global / cancelable := true" and gate it behind a
dev-only check (e.g., an environment or system property) so it does not affect
CI/production builds—look for the cancelable setting in build.sbt to apply the
change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dcfbda34-256e-457c-9497-abbfd13d440b
📒 Files selected for processing (22)
- app/controllers/DatasetController.scala
- app/controllers/LegacyApiController.scala
- app/models/dataset/ComposeService.scala
- app/models/dataset/Dataset.scala
- app/models/dataset/DatasetService.scala
- build.sbt
- conf/messages
- conf/webknossos.latest.routes
- conf/webknossos.versioned.routes
- frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx
- frontend/javascripts/types/api_types.ts
- util/src/main/scala/com/scalableminds/util/mvc/ApiVersioning.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr/NgffMetadata.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr/NgffMetadataV0_5.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/WebknossosZarrExplorer.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala
- webknossos-datastore/conf/datastore.versioned.routes
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingLayer.scala
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingMags.scala
- webknossos-tracingstore/conf/tracingstore.versioned.routes
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala (1)
44-83: ⚠️ Potential issue | 🟠 Major

Keep these mutation helpers from creating ambiguous attachment state.

`withAdded` can append a second attachment with the same name or silently replace an existing `segmentIndex`/`cumsum`, and `renameByMap` can collapse two distinct names into one. After that, `getByTypeAndName` only sees the first match, so later operations become ambiguous or lossy. You already have `containsDuplicateNames`; using it here would keep that invariant local.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala` around lines 44 - 83, The helpers currently allow creating ambiguous duplicate-named attachments (breaking getByTypeAndName); fix by enforcing the no-duplicates invariant: in withAdded, check the target collection (meshes, agglomerates, connectomes) for an existing attachment with the same name and refuse (throw IllegalArgumentException or return unchanged) instead of appending, and for segmentIndex/cumsum require they are empty before setting; in mapped and renameByMap, apply the mapping/renaming then call containsDuplicateNames and if it reports duplicates, abort (throw) so you never return a DataLayerAttachments instance with collapsed/duplicate names; reference the methods withAdded, mapped, renameByMap and containsDuplicateNames to locate changes.
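An invariant-enforcing helper of the kind the prompt suggests can be sketched as follows (all names here are illustrative stand-ins, not the project's actual API):

```scala
case class Attachment(name: String)

// Illustrative version of the invariant check, not the project's containsDuplicateNames.
def containsDuplicateNames(attachments: Seq[Attachment]): Boolean =
  attachments.groupBy(_.name).exists(_._2.size > 1)

// A withAdded-style helper that refuses to create ambiguous state:
def withAdded(existing: Seq[Attachment], added: Attachment): Seq[Attachment] = {
  val result = existing :+ added
  if (containsDuplicateNames(result))
    throw new IllegalArgumentException(s"attachment '${added.name}' already exists")
  result
}
```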
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/DatasetController.scala`:
- Around line 476-479: DatasetController currently forwards
request.body.attachmentRenamings into
datasetService.updateDataSourceFromUserChanges without applying same-request
layer renames; update the controller to normalize attachment renames first:
build a Map from request.body.layerRenamings.getOrElse(Seq.empty) mapping
oldLayerName -> newLayerName, then transform each entry in
request.body.attachmentRenamings.getOrElse(Seq.empty) by replacing its layerName
with the mapped newLayerName when present (preserving other fields), and pass
that normalized sequence instead of the raw attachmentRenamings into
datasetService.updateDataSourceFromUserChanges; use the existing symbols
DatasetController, datasetService.updateDataSourceFromUserChanges,
request.body.layerRenamings and request.body.attachmentRenamings to locate and
change the call.
---
Outside diff comments:
In
`@webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala`:
- Around line 44-83: The helpers currently allow creating ambiguous
duplicate-named attachments (breaking getByTypeAndName); fix by enforcing the
no-duplicates invariant: in withAdded, check the target collection (meshes,
agglomerates, connectomes) for an existing attachment with the same name and
refuse (throw IllegalArgumentException or return unchanged) instead of
appending, and for segmentIndex/cumsum require they are empty before setting; in
mapped and renameByMap, apply the mapping/renaming then call
containsDuplicateNames and if it reports duplicates, abort (throw) so you never
return a DataLayerAttachments instance with collapsed/duplicate names; reference
the methods withAdded, mapped, renameByMap and containsDuplicateNames to locate
changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 02bf38b2-1271-4d30-93ad-2518734a03dd
📒 Files selected for processing (3)
- app/controllers/DatasetController.scala
- app/models/dataset/DatasetService.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala
🚧 Files skipped from review as they are similar to previous changes (1)
- app/models/dataset/DatasetService.scala
```diff
           datasetService.updateDataSourceFromUserChanges(dataset,
                                                          dataSourceUpdates,
-                                                         request.body.layerRenamings.getOrElse(Seq.empty)))
+                                                         request.body.layerRenamings.getOrElse(Seq.empty),
+                                                         request.body.attachmentRenamings.getOrElse(Seq.empty)))
```
Normalize attachment renames against layer renames in the same request.
Line 478 forwards attachmentRenamings unchanged even though the request already contains the old→new layer mapping. A payload that renames a layer and still references the pre-rename layerName for an attachment rename will only apply half of the change. Please rewrite or reject those mixed requests here before delegating.
🔧 Suggested normalization in the controller
```diff
+      layerRenamingMap = request.body.layerRenamings.getOrElse(Seq.empty).map(r => r.oldName -> r.newName).toMap
+      normalizedAttachmentRenamings = request.body.attachmentRenamings.getOrElse(Seq.empty).map { renaming =>
+        renaming.copy(layerName = layerRenamingMap.getOrElse(renaming.layerName, renaming.layerName))
+      }
       _ <- Fox.runOptional(request.body.dataSource)(
         dataSourceUpdates =>
           datasetService.updateDataSourceFromUserChanges(dataset,
                                                          dataSourceUpdates,
                                                          request.body.layerRenamings.getOrElse(Seq.empty),
-                                                         request.body.attachmentRenamings.getOrElse(Seq.empty)))
+                                                         normalizedAttachmentRenamings))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/DatasetController.scala` around lines 476 - 479,
DatasetController currently forwards request.body.attachmentRenamings into
datasetService.updateDataSourceFromUserChanges without applying same-request
layer renames; update the controller to normalize attachment renames first:
build a Map from request.body.layerRenamings.getOrElse(Seq.empty) mapping
oldLayerName -> newLayerName, then transform each entry in
request.body.attachmentRenamings.getOrElse(Seq.empty) by replacing its layerName
with the mapped newLayerName when present (preserving other fields), and pass
that normalized sequence instead of the raw attachmentRenamings into
datasetService.updateDataSourceFromUserChanges; use the existing symbols
DatasetController, datasetService.updateDataSourceFromUserChanges,
request.body.layerRenamings and request.body.attachmentRenamings to locate and
change the call.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/LegacyApiController.scala`:
- Around line 79-85: The map over dataSourceUpdates.dataLayers in
updatesWithUndoneAttachmentChanges currently only handles StaticColorLayer and
StaticSegmentationLayer and will throw a MatchError for other layer types;
modify the anonymous partial function in that map to be exhaustive by adding a
fallback case (e.g., case other => other) so non-matching layer types are passed
through unchanged, while still calling
findOriginalAttachments(existingDataSource, s.name) for the two handled layer
types.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3fbf5382-6faa-46f4-a5aa-a7e017d13f1b
📒 Files selected for processing (6)
- app/controllers/LegacyApiController.scala
- app/models/dataset/DatasetService.scala
- conf/messages
- conf/webknossos.latest.routes
- unreleased_changes/9375.md
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
✅ Files skipped from review due to trivial changes (2)
- conf/messages
- unreleased_changes/9375.md
🚧 Files skipped from review as they are similar to previous changes (1)
- app/models/dataset/DatasetService.scala
```scala
      updatesWithUndoneAttachmentChanges = dataSourceUpdates.copy(
        dataLayers = dataSourceUpdates.dataLayers.map {
          case s: StaticColorLayer => s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
          case s: StaticSegmentationLayer =>
            s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
        }
      )
```
Non-exhaustive pattern match will throw MatchError for other layer types.
The pattern match in .map only handles StaticColorLayer and StaticSegmentationLayer. If dataSourceUpdates.dataLayers contains any other layer type (e.g., remote layers, virtual layers), this will throw a runtime scala.MatchError.
🐛 Proposed fix: Add a fallback case to pass through other layer types unchanged
```diff
       updatesWithUndoneAttachmentChanges = dataSourceUpdates.copy(
         dataLayers = dataSourceUpdates.dataLayers.map {
           case s: StaticColorLayer => s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
           case s: StaticSegmentationLayer =>
             s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
+          case other => other // Pass through layer types without attachments
         }
       )
```

📝 Committable suggestion
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```scala
      updatesWithUndoneAttachmentChanges = dataSourceUpdates.copy(
        dataLayers = dataSourceUpdates.dataLayers.map {
          case s: StaticColorLayer => s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
          case s: StaticSegmentationLayer =>
            s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
          case other => other // Pass through layer types without attachments
        }
      )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/LegacyApiController.scala` around lines 79 - 85, The map over
dataSourceUpdates.dataLayers in updatesWithUndoneAttachmentChanges currently
only handles StaticColorLayer and StaticSegmentationLayer and will throw a
MatchError for other layer types; modify the anonymous partial function in that
map to be exhaustive by adding a fallback case (e.g., case other => other) so
non-matching layer types are passed through unchanged, while still calling
findOriginalAttachments(existingDataSource, s.name) for the two handled layer
types.
Allow more dataset mutation via the API, as used by the Python libs.
Steps to test:
TODOs:
Follow-up
Both will follow in #9402
Issues:
`$PR_NUMBER.md` file in `unreleased_changes` or use `./tools/create-changelog-entry.py`)