
Allow more dataset CRUD via API #9375

Open
fm3 wants to merge 19 commits into master from remote-dataset-more-crud

Conversation

@fm3
Member

@fm3 fm3 commented Mar 9, 2026

Allow more dataset mutation via API, as used by the python libs.

Steps to test:

  • Test CRUD via API
  • Test that the old compose via frontend still works (some API fields were renamed)
  • Corresponding libs PR: sorting projects #1438

TODOs:

  • RemoteAttachments.delete_attachment
    • Backend needs to respect missing attachments in updatePartial call
      • ensure frontend or older libs versions don’t already skip the attachments?
      • ensure libs don’t drop attachments on updates when opened via zarr streaming? → LIBS
      • bump API version to v13; ignore attachment changes on older versions (otherwise attachments would be lost unintentionally)
  • RemoteAttachments.rename_attachment
  • RemoteDataset.add_layer_as_ref (linked layer)
    • Needs new protocol similar to compose (ensure permission checks)
  • RemoteLayer.add_mag_as_ref (linked mag)
    • Same as above
  • RemoteAttachments.add_attachments_as_ref
    • Same as above
  • Determine which updates are allowed for non-virtual datasets, make sure those work, make sure others are rejected
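The "respect missing attachments in updatePartial" point above boils down to Option semantics: an omitted attachments field should preserve the existing attachments, while an explicitly sent value is applied. A minimal sketch of that intended merge behavior (hypothetical names, not the actual webKnossos API):

```scala
// Hypothetical sketch of the intended partial-update semantics for
// attachments; the types and names here are illustrative only.
case class Attachment(name: String)

def applyAttachmentUpdates(existing: Option[Seq[Attachment]],
                           updates: Option[Seq[Attachment]]): Option[Seq[Attachment]] =
  updates match {
    case None    => existing // field omitted by client: keep existing attachments
    case Some(u) => Some(u)  // field sent: apply update (explicit deletions included)
  }
```

Under this reading, only clients that actually send the attachments field (API v13+) can delete attachments, which is why attachment changes from older API versions must be ignored.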

Follow-up

  • RemoteLayer.add_mag_as_copy with transfer_mode=HTTP
    • Needs new upload protocol. Use resumable here too?
  • RemoteAttachments.add_attachment_as_copy with transfer_mode=HTTP
    • Needs new upload protocol (analog to mag)

Both will follow in #9402

Issues:


  • Added changelog entry (create a $PR_NUMBER.md file in unreleased_changes or use ./tools/create-changelog-entry.py)
  • Removed dev-only changes like prints and application.conf edits
  • Considered common edge cases
  • Needs datastore update after deployment

@fm3 fm3 self-assigned this Mar 9, 2026
@coderabbitai
Contributor

coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

Adds attachment renaming support and compose endpoints (layer/mag/attachment); threads attachment renamings through dataset update flows; renames LayerLink fields in frontend; bumps API version to v13; converts many List usages to Seq; extends DataLayerAttachments with helper operations; adds routes and minor build/message updates.

Changes

Cohort / File(s) Summary
Controller endpoints & API surface
app/controllers/DatasetController.scala, app/controllers/LegacyApiController.scala
Added AttachmentRenaming and attachmentRenamings to update DTOs; added composeAddLayer, composeAddMag, composeAddAttachment endpoints; introduced updatePartialV12 in LegacyApiController with analytics injection and attachment-aware datasource update handling.
Routing / API versioning
conf/webknossos.latest.routes, conf/webknossos.versioned.routes, webknossos-datastore/conf/datastore.versioned.routes, webknossos-tracingstore/conf/tracingstore.versioned.routes, util/src/main/scala/.../ApiVersioning.scala
Added compose POST routes; mapped LegacyApiController.updatePartialV12 across many API versioned routes; bumped CURRENT_API_VERSION to 13.
Dataset service & data-source updates
app/models/dataset/DatasetService.scala, app/models/dataset/Dataset.scala
Extended updateDataSourceFromUserChanges/applyDataSourceUpdates to accept and apply attachmentRenamings; added validation and collision checks; adjusted cache invalidation ordering.
Compose service logic & DTOs
app/models/dataset/ComposeService.scala
Renamed compose DTO fields to source/target naming; added ComposeAddMagRequest and ComposeAddAttachmentRequest; implemented addLayer, addMag, addAttachment flows with validation and duplicate checks.
Data-layer attachments API
webknossos-datastore/app/.../DataLayerAttachments.scala, webknossos-datastore/app/.../DataLayer.scala
Added dropMissing, withAdded, getByTypeAndName, renameByMap; added withAttachments helper; switched layer APIs to use Seq for mags/resolutions/transformations.
Datastore & tracing collection refactors
webknossos-datastore/app/.../NgffMetadata*.scala, webknossos-datastore/app/.../WebknossosZarrExplorer.scala, webknossos-datastore/app/.../FindDataService.scala, webknossos-tracingstore/app/.../VolumeTracing*.scala
Replaced many List types with Seq in mags/resolutions/transformations and updated callsites/constructors accordingly.
Frontend composition changes
frontend/javascripts/types/api_types.ts, frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx
Renamed LayerLink fields: datasetId→sourceDatasetId, datasetName→sourceDatasetName, sourceName→sourceLayerName, newName→targetLayerName; updated UI logic/validation and labels.
Build & localization
build.sbt, conf/messages
Added cancelable in Global := true to sbt settings; added dataset.composeInPlace.mustBeVirtual message key.
Docs
unreleased_changes/9375.md
Added unreleased entry documenting new API routes for dataset composition.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested reviewers

  • MichaelBuessemeyer
  • normanrz

Poem

🐰 I hopped through layers, renamed names with care,
I stitched mags and attachments — bits dancing in the air,
v13 now hums as routes and DTOs align,
Seq whispers where List once drew the line,
A tiny rabbit cheer for composition, short and fair! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: the title 'Allow more dataset CRUD via API' clearly summarizes the main objective of the PR, which is to extend the API for additional dataset mutation operations.
  • Description check — ✅ Passed: the description provides context about allowing more dataset mutation via API as used by the Python libs, includes test steps, and references related issues and follow-up PRs.


@fm3 fm3 changed the title WIP Allow more dataset CRUD via API Allow more dataset CRUD via API Mar 18, 2026
@fm3 fm3 marked this pull request as ready for review March 18, 2026 16:44
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala (1)

126-137: ⚠️ Potential issue | 🟠 Major

Pattern matching assumes List but parameter is now Seq.

The magIter function signature changed to accept Seq[Vec3Int], but the pattern matching on lines 128-129 uses List-specific syntax (List() and head :: tail). If the passed Seq is not a List (e.g., Vector), these patterns won't match and will fall through incorrectly.

Since dataLayer.resolutions (line 139) is now Seq[Vec3Int], and sortBy preserves the collection type, this could cause issues if the underlying implementation isn't a List.

🐛 Proposed fix using Seq-compatible pattern matching
-    def magIter(positions: List[Vec3Int], remainingMags: Seq[Vec3Int]): Fox[Option[(Vec3Int, Vec3Int)]] =
-      remainingMags match {
-        case List() => Fox.successful(None)
-        case head :: tail =>
+    def magIter(positions: List[Vec3Int], remainingMags: Seq[Vec3Int]): Fox[Option[(Vec3Int, Vec3Int)]] =
+      remainingMags match {
+        case Seq() => Fox.successful(None)
+        case head +: tail =>
           (for {
             foundPosition <- searchPositionIter(positions, head)
           } yield

Alternatively, convert to List explicitly at the call site on line 139:

-    magIter(createPositions(dataLayer).distinct, dataLayer.resolutions.sortBy(_.maxDim))
+    magIter(createPositions(dataLayer).distinct, dataLayer.resolutions.sortBy(_.maxDim).toList)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala`
around lines 126 - 137, magIter currently patterns on List-specific constructors
but its parameter is a Seq[Vec3Int]; change magIter to handle Seq safely (e.g.,
match on Seq() and head +: tail or use .toList at the start) so it works for
non-List Seq implementations, and ensure callers such as where
dataLayer.resolutions are sorted (the call that invokes magIter) pass the
expected collection type or explicitly convert with .toList; update magIter and
its recursive call to use the Seq-compatible pattern (or convert once to List
before recursion) and keep references to magIter and searchPositionIter
consistent.
🧹 Nitpick comments (1)
build.sbt (1)

28-30: Consider removing this dev-only change before merge.

The PR objectives mention "remove dev-only changes" as a TODO. This cancelable setting is a developer convenience and may fall into that category.

If this is intentional to keep, consider using the modern sbt 1.x syntax for consistency with the rest of the file:

 // Allow Ctrl+C in interactive sbt to cancel commands, not sbt itself
-cancelable in Global := true
+Global / cancelable := true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.sbt` around lines 28 - 30, The line "cancelable in Global := true" is a
dev-only sbt setting that should be removed before merge (or converted to the
modern sbt 1.x syntax if you intend to keep it); either delete that setting from
build.sbt, or replace it with the 1.x form "Global / cancelable := true" and
gate it behind a dev-only check (e.g., an environment or system property) so it
does not affect CI/production builds—look for the cancelable setting in
build.sbt to apply the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/models/dataset/ComposeService.scala`:
- Around line 168-169: The duplicate check on updatedLayer.mags is only using
_.mag.maxDim which allows different mag vectors with the same max to be treated
as duplicates; change the uniqueness check to compare the full mag vector (e.g.
use distinctBy(_.mag) or equivalent) so ComposeService's updatedLayer.mags are
deduplicated by the entire mag value rather than just maxDim, keeping the rest
of the Fox.fromBool(...).

In `@app/models/dataset/DatasetService.scala`:
- Around line 366-375: applyAttachmentRenamings currently builds a single
renamingMap and calls renamingMap(attachment.name) for every attachment across
all dataLayers which throws for unrelated names and ignores layerName; fix by
grouping attachmentRenamings by layerName into a Map[String, Map[String,String]]
and when mapping dataLayers use the specific layer's renaming map (e.g., inside
existingDataSource.dataLayers.map { dl => val layerRenamings =
grouped.getOrElse(dl.name, Map.empty); dl.mapped(attachment =>
layerRenamings.get(attachment.name).map(newName => attachment.copy(name =
newName)).getOrElse(attachment)) } ), so lookups are partial and renames are
applied only to the matching layer.
- Around line 453-459: The current applyAttachmentUpdates drops existing
attachments when attachmentUpdatesOpt is None (meaning the field was omitted),
causing unintended deletions; change the logic in applyAttachmentUpdates so that
if attachmentUpdatesOpt is None you simply return existingAttachmentsOpt
(preserve existing attachments), if both existingAttachmentsOpt and
attachmentUpdatesOpt are Some call
existingAttachments.dropMissing(attachmentUpdates) as now, and if existing is
None but attachmentUpdatesOpt is Some return that attachmentUpdates; update
references: applyAttachmentUpdates, existingAttachmentsOpt,
attachmentUpdatesOpt, DataLayerAttachments.dropMissing, and ensure downstream
removedPaths/deletePaths only act when attachments were explicitly updated.
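The ComposeService point above — deduplicating mags by the full mag vector rather than only its maximum dimension — can be sketched as follows (simplified stand-in types, not the actual codebase classes):

```scala
// Sketch of duplicate detection on the full mag vector. Two mags such as
// (2,2,1) and (2,2,2) share maxDim == 2 but are distinct mags, so a check
// on _.mag.maxDim would wrongly flag them as duplicates.
case class Vec3Int(x: Int, y: Int, z: Int) {
  def maxDim: Int = math.max(x, math.max(y, z))
}
case class MagLocator(mag: Vec3Int)

def hasDuplicateMags(mags: Seq[MagLocator]): Boolean =
  mags.distinctBy(_.mag).size < mags.size
```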


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: dcfbda34-256e-457c-9497-abbfd13d440b

📥 Commits

Reviewing files that changed from the base of the PR and between 9d3b0a1 and 4612ae6.

📒 Files selected for processing (22)
  • app/controllers/DatasetController.scala
  • app/controllers/LegacyApiController.scala
  • app/models/dataset/ComposeService.scala
  • app/models/dataset/Dataset.scala
  • app/models/dataset/DatasetService.scala
  • build.sbt
  • conf/messages
  • conf/webknossos.latest.routes
  • conf/webknossos.versioned.routes
  • frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx
  • frontend/javascripts/types/api_types.ts
  • util/src/main/scala/com/scalableminds/util/mvc/ApiVersioning.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr/NgffMetadata.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr/NgffMetadataV0_5.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/WebknossosZarrExplorer.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/FindDataService.scala
  • webknossos-datastore/conf/datastore.versioned.routes
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingLayer.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingMags.scala
  • webknossos-tracingstore/conf/tracingstore.versioned.routes

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala (1)

44-83: ⚠️ Potential issue | 🟠 Major

Keep these mutation helpers from creating ambiguous attachment state.

withAdded can append a second attachment with the same name or silently replace existing segmentIndex/cumsum, and renameByMap can collapse two distinct names into one. After that, getByTypeAndName only sees the first match, so later operations become ambiguous or lossy. You already have containsDuplicateNames; using it here would keep that invariant local.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala`
around lines 44 - 83, The helpers currently allow creating ambiguous
duplicate-named attachments (breaking getByTypeAndName); fix by enforcing the
no-duplicates invariant: in withAdded, check the target collection (meshes,
agglomerates, connectomes) for an existing attachment with the same name and
refuse (throw IllegalArgumentException or return unchanged) instead of
appending, and for segmentIndex/cumsum require they are empty before setting; in
mapped and renameByMap, apply the mapping/renaming then call
containsDuplicateNames and if it reports duplicates, abort (throw) so you never
return a DataLayerAttachments instance with collapsed/duplicate names; reference
the methods withAdded, mapped, renameByMap and containsDuplicateNames to locate
changes.
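The invariant flagged above — renaming must never collapse two distinct attachment names into one — could be guarded roughly like this (identifiers are assumptions based on the review comment, not the actual codebase API):

```scala
// Illustrative sketch: apply a renaming map, then reject the result if it
// would produce duplicate names, so getByTypeAndName stays unambiguous.
case class Attachment(name: String)

def renameByMap(attachments: Seq[Attachment],
                renamings: Map[String, String]): Either[String, Seq[Attachment]] = {
  val renamed = attachments.map(a => a.copy(name = renamings.getOrElse(a.name, a.name)))
  if (renamed.map(_.name).distinct.size < renamed.size)
    Left("attachment renaming would collapse two names into one")
  else Right(renamed)
}
```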
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/DatasetController.scala`:
- Around line 476-479: DatasetController currently forwards
request.body.attachmentRenamings into
datasetService.updateDataSourceFromUserChanges without applying same-request
layer renames; update the controller to normalize attachment renames first:
build a Map from request.body.layerRenamings.getOrElse(Seq.empty) mapping
oldLayerName -> newLayerName, then transform each entry in
request.body.attachmentRenamings.getOrElse(Seq.empty) by replacing its layerName
with the mapped newLayerName when present (preserving other fields), and pass
that normalized sequence instead of the raw attachmentRenamings into
datasetService.updateDataSourceFromUserChanges; use the existing symbols
DatasetController, datasetService.updateDataSourceFromUserChanges,
request.body.layerRenamings and request.body.attachmentRenamings to locate and
change the call.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 02bf38b2-1271-4d30-93ad-2518734a03dd

📥 Commits

Reviewing files that changed from the base of the PR and between 4612ae6 and 9428872.

📒 Files selected for processing (3)
  • app/controllers/DatasetController.scala
  • app/models/dataset/DatasetService.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayerAttachments.scala
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/models/dataset/DatasetService.scala

Comment on lines 476 to +479
 datasetService.updateDataSourceFromUserChanges(dataset,
                                                dataSourceUpdates,
-                                               request.body.layerRenamings.getOrElse(Seq.empty)))
+                                               request.body.layerRenamings.getOrElse(Seq.empty),
+                                               request.body.attachmentRenamings.getOrElse(Seq.empty)))
Contributor


⚠️ Potential issue | 🟠 Major

Normalize attachment renames against layer renames in the same request.

Line 478 forwards attachmentRenamings unchanged even though the request already contains the old→new layer mapping. A payload that renames a layer and still references the pre-rename layerName for an attachment rename will only apply half of the change. Please rewrite or reject those mixed requests here before delegating.

🔧 Suggested normalization in the controller
+        layerRenamingMap = request.body.layerRenamings.getOrElse(Seq.empty).map(r => r.oldName -> r.newName).toMap
+        normalizedAttachmentRenamings = request.body.attachmentRenamings.getOrElse(Seq.empty).map { renaming =>
+          renaming.copy(layerName = layerRenamingMap.getOrElse(renaming.layerName, renaming.layerName))
+        }
         _ <- Fox.runOptional(request.body.dataSource)(
           dataSourceUpdates =>
             datasetService.updateDataSourceFromUserChanges(dataset,
                                                            dataSourceUpdates,
                                                            request.body.layerRenamings.getOrElse(Seq.empty),
-                                                           request.body.attachmentRenamings.getOrElse(Seq.empty)))
+                                                           normalizedAttachmentRenamings))

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/LegacyApiController.scala`:
- Around line 79-85: The map over dataSourceUpdates.dataLayers in
updatesWithUndoneAttachmentChanges currently only handles StaticColorLayer and
StaticSegmentationLayer and will throw a MatchError for other layer types;
modify the anonymous partial function in that map to be exhaustive by adding a
fallback case (e.g., case other => other) so non-matching layer types are passed
through unchanged, while still calling
findOriginalAttachments(existingDataSource, s.name) for the two handled layer
types.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3fbf5382-6faa-46f4-a5aa-a7e017d13f1b

📥 Commits

Reviewing files that changed from the base of the PR and between 9428872 and 0717a78.

📒 Files selected for processing (6)
  • app/controllers/LegacyApiController.scala
  • app/models/dataset/DatasetService.scala
  • conf/messages
  • conf/webknossos.latest.routes
  • unreleased_changes/9375.md
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
✅ Files skipped from review due to trivial changes (2)
  • conf/messages
  • unreleased_changes/9375.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/models/dataset/DatasetService.scala

Comment on lines +79 to +85
updatesWithUndoneAttachmentChanges = dataSourceUpdates.copy(
  dataLayers = dataSourceUpdates.dataLayers.map {
    case s: StaticColorLayer => s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
    case s: StaticSegmentationLayer =>
      s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
  }
)
Contributor


⚠️ Potential issue | 🔴 Critical

Non-exhaustive pattern match will throw MatchError for other layer types.

The pattern match in .map only handles StaticColorLayer and StaticSegmentationLayer. If dataSourceUpdates.dataLayers contains any other layer type (e.g., remote layers, virtual layers), this will throw a runtime scala.MatchError.

🐛 Proposed fix: Add a fallback case to pass through other layer types unchanged
 updatesWithUndoneAttachmentChanges = dataSourceUpdates.copy(
   dataLayers = dataSourceUpdates.dataLayers.map {
     case s: StaticColorLayer => s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
     case s: StaticSegmentationLayer =>
       s.copy(attachments = findOriginalAttachments(existingDataSource, s.name))
+    case other => other // Pass through layer types without attachments
   }
 )

@fm3 fm3 requested a review from normanrz March 19, 2026 10:57
Member

@normanrz normanrz left a comment


Lgtm!

