!!!info "Target users"
    - **Curators** - users responsible for adding new data to ODM and ensuring data harmonization and curation. While curators typically do not have permission to define, create, or edit metadata templates, they can select existing templates to use for their data. Their responsibilities include mapping metadata, associating data with templates, and performing data updates.
    - **Researchers** - users who access ODM to identify and retrieve data suitable for further research and analysis. Their activities include searching, browsing, and exporting data.
    - **Advanced** - users with access to advanced API functionalities, such as user management and data management operations.
    - **Admins** - users who manage the organization in ODM, including overseeing users, groups, and permissions. They are also responsible for creating, defining, and editing metadata templates as needed.
- New Cell entity: ODM now supports storing and managing per-cell metadata and expression for single-cell datasets.
- Cell Groups: Each Cell record belongs to a Cell Group, representing a single-cell table/group.
- Cell metadata import (TSV only): Import cell metadata via the Jobs API (`/api/v1/jobs/import/cells` or `/api/v1/jobs/import/cells/multipart`) or the `odm-import-data` script (`--cells`/`-c`) and link it to Samples/Libraries/Preparations.
- Cell expression import: Import expression via the Jobs API (`/api/v1/jobs/import/expression` or `/api/v1/jobs/import/expression/multipart`); `.br`/`.lz4` archives are recommended. Link expression to a Cell metadata group (1:1).
- Validation: Enforces required fields (e.g., `barcode`, `batch`) and rejects duplicate barcodes within a group; invalid types are ignored with warnings.
- [BETA] Analytics: New endpoints for Cell Ratio, Gene Summary, and Differential Expression calculations over filtered single-cell populations.
- Deletion: Remove Cell metadata or expression groups via the manage-data/data endpoint.

Read more on the Working with Single Cell Data page.
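The validation rules above (required fields, no duplicate barcodes within a group) can be sketched in Python. This is a minimal illustration of the described behavior, not ODM's actual implementation; the function name and warning texts are our own:

```python
def validate_cell_group(rows, required=("barcode", "batch")):
    """Validate cell metadata rows for one Cell Group.

    Skips rows missing a required field and rows whose barcode
    duplicates one already seen in the group; returns the accepted
    rows plus a list of warnings.
    """
    accepted, warnings, seen = [], [], set()
    for i, row in enumerate(rows):
        missing = [f for f in required if not row.get(f)]
        if missing:
            warnings.append(f"row {i}: missing required field(s) {missing}")
            continue
        if row["barcode"] in seen:
            warnings.append(f"row {i}: duplicate barcode {row['barcode']!r}")
            continue
        seen.add(row["barcode"])
        accepted.append(row)
    return accepted, warnings
```

In this sketch, a duplicate barcode rejects only the later row; the first occurrence is kept.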
We have added multipart form-data upload endpoints to the Jobs import API, enabling direct file uploads (no `dataLink` required) for common ODM import flows:

- `POST /api/v1/jobs/import/samples/multipart`: upload Sample metadata (TSV)
- `POST /api/v1/jobs/import/libraries/multipart`: upload Library metadata (TSV)
- `POST /api/v1/jobs/import/preparations/multipart`: upload Preparation metadata (TSV)
- `POST /api/v1/jobs/import/cells/multipart`: upload Cell metadata (TSV)
- `POST /api/v1/jobs/import/expression/multipart`: upload tabular expression data (TSV, GCT)
- `POST /api/v1/jobs/import/variant/multipart`: upload variation data/metadata (VCF, TSV)
- `POST /api/v1/jobs/import/flow-cytometry/multipart`: upload flow cytometry data/metadata (FACS, TSV)
- `POST /api/v1/jobs/import/file/multipart`: upload a file attachment via Jobs import
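All of these endpoints follow one URL pattern, `/api/v1/jobs/import/{entity}/multipart`. A small helper (hypothetical, not part of ODM or its client libraries) can build the URL for any of them:

```python
# Entity segments taken from the endpoint list above.
IMPORT_ENTITIES = {"samples", "libraries", "preparations", "cells",
                   "expression", "variant", "flow-cytometry", "file"}

def multipart_import_url(base_url: str, entity: str) -> str:
    """Build the multipart Jobs-import URL for a given entity type."""
    if entity not in IMPORT_ENTITIES:
        raise ValueError(f"unknown import entity: {entity}")
    return f"{base_url.rstrip('/')}/api/v1/jobs/import/{entity}/multipart"
```

The file itself would then be sent as a multipart form field, e.g. with `requests.post(url, files={"file": open("samples.tsv", "rb")})`.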
ODM now supports an attachment transformation workflow that converts uploaded attachments into ODM-indexable formats and can automatically kick off the relevant Jobs import flow.

- Transform attachments into supported formats: Use transformations to convert files ODM can’t ingest directly (e.g., CSV) into formats it can index (e.g., TSV for metadata imports).
- Image + configuration model: Each transformation uses:
    - a transformation image (script) defining how to convert input to output, and
    - a configuration defining the destination (e.g., Samples, Libraries, Preparations, Cells, Expression, Variants).
- Discover available transformation images: A new API lists available images via the Processors Controller: `GET /transformations/images`.
- Create and manage transformation configurations: Define reusable configs via `POST /transformations/configurations` (and retrieve them via `GET /transformations/configurations`). Example: a config to route CSV to Samples.
- Run transformations as jobs: Trigger a transformation run using an attachment accession, configuration, and image reference: `POST /transformations/jobs`. Track progress via `GET /transformations/jobs/{id}`.
- Automatic import after conversion: On successful transformation, ODM automatically uploads the converted file and triggers the corresponding Jobs import multipart endpoint (e.g., CSV → TSV → `POST /api/v1/jobs/import/samples/multipart`), creating a new metadata group in the same Study where the original attachment was stored.

Read more on the Attachments transformation page.
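The end-to-end flow can be summarized as an ordered sequence of API calls. The endpoint paths come from this section; the payload field names (`image`, `destination`, `attachment`) are illustrative guesses, not the documented request schema:

```python
def transformation_workflow(attachment_accession, image_ref, destination):
    """Describe the attachment-transformation flow as ordered
    (method, path, payload) steps."""
    config = {"image": image_ref, "destination": destination}  # e.g. csv -> samples
    return [
        ("GET", "/transformations/images", None),              # discover images
        ("POST", "/transformations/configurations", config),   # reusable config
        ("POST", "/transformations/jobs",                      # run the job
         {"attachment": attachment_accession, **config}),
        # Progress is polled via GET /transformations/jobs/{id}; on success,
        # ODM itself triggers the matching Jobs import multipart endpoint.
    ]
```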
- Updated search behavior across all endpoints: Metadata search endpoints now query against the latest committed metadata, ensuring results reflect the most up-to-date committed changes.
- Filters and queries remain supported: All existing filter and query parameters continue to work as before, but are now applied to the latest committed dataset.
New Data classes were introduced:

- Spatial transcriptomics (SPT)
- Phenomics (PHE)
- Copy number alterations (CNA)
- Microbiome / Metagenomics (MIC)
- Immune repertoire (IMR)
- Genetic screens (CRISPR / RNAi) (GSC)
- Cell imaging (CIM)
- The Nanopore data class was renamed to Long-read sequencing (Nanopore, PacBio) (LRS)
- The `MTX` label for Metabolomics was renamed to `MTB`
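For scripts that map data class names to their short labels, the additions and renames above collapse into a lookup table (values taken directly from the list; the constant name is ours):

```python
# New and renamed data class labels in this release.
DATA_CLASS_LABELS = {
    "Spatial transcriptomics": "SPT",
    "Phenomics": "PHE",
    "Copy number alterations": "CNA",
    "Microbiome / Metagenomics": "MIC",
    "Immune repertoire": "IMR",
    "Genetic screens (CRISPR / RNAi)": "GSC",
    "Cell imaging": "CIM",
    "Long-read sequencing (Nanopore, PacBio)": "LRS",  # was: Nanopore
    "Metabolomics": "MTB",  # label renamed from MTX
}
```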
- Starting with this release, creating a Study no longer automatically creates four empty Sample entries. Instead, use the Add Samples button to upload a Samples metadata file.
- Sample deletion is no longer available via the GUI. Instead, a new option is available to delete a single Sample object via the manageData endpoint.
Starting with this release, the Template Editor includes a new feature to view the list of Studies that use the template you’re viewing. In the new Study Browser window, you’ll see the list of Studies available to you.
- Preview PDFs directly from ME: Attachments with filenames ending in `.pdf` can now be previewed from the attachment list.
- One-click preview: A Preview action/button is available for PDF attachments.
- Opens in a new browser tab: Clicking Preview opens the file in a new tab using the browser’s built-in PDF viewer.
- Instant token display: When creating an ODM personal access token from the Profile page, the token value is now shown immediately after the user confirms creation (and enters their password, if prompted).
- Email step removed: Token creation no longer requires an email with a secure link/code exchange.
- The user identifier for Azure token authentication was changed from `subject` to `oid`, enabling login without defining an Azure app scope.
- As a result, all users registered in ODM via Azure SSO will need to log in via the UI once again to have their records updated in the database.
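For reference, the `oid` claim lives in the JWT payload segment of an Azure access token. A minimal decoder sketch follows; it is illustrative only and skips signature verification, which production code must do with a proper JWT library:

```python
import base64
import json

def jwt_claim(token: str, claim: str):
    """Extract a claim (e.g. 'oid' or 'sub') from a JWT's payload
    segment WITHOUT verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get(claim)
```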
A new default facet has been added to filter Studies by valid or invalid metadata, validated against the applied template. Metadata is considered invalid if at least one field is invalid in any Study entity.
- Removing the email-based token delivery flow to eliminate a potential attack vector.
- A Spring Boot upgrade to v4, with updated transitive dependencies, resolved numerous security vulnerabilities.
- Three vulnerabilities (SQL injection, path traversal, log injection) were identified by code analysis tools and fixed.
Release 1.61 introduces powerful enhancements focused on improving data governance and preventing common user errors. With the new ability to transfer study ownership directly through the GUI and safeguards around technical metadata fields, this release helps organizations maintain cleaner data, clearer responsibilities, and greater platform stability.
Historically, changing the owner of a Study required a manual MySQL script. The new GUI workflow enables safe, auditable ownership transfer directly in the application.
- Transfer Study ownership from within the UI.
- Bulk propagation: ownership cascades to all Study‑scoped objects (Study, Samples, Libraries, Preparations, and all associated data objects).
- Automatic permission realignment (new owner gets full access; previous owner’s access reevaluated).
- Full audit trail (Audit Logs + Version History entry; see Section 2).
Ownership transfer can be initiated under the following rules:
| Initiator | Conditions | Allowed? | Notes |
|---|---|---|---|
| Current Study Owner | Always | Yes | Owner can transfer regardless of other permissions. |
| User with Manage organisation and Access all data | Study is not shared with user | Yes | Designed for super‑admin use cases. |
| User with Manage organisation (with access to the Study) | Study is shared with user | Yes | Covers org‑level admins who can see the Study. |
| Any other user | — | No | Not permitted. |
Additional rules:
- Target must be another active user (cannot transfer to deactivated accounts).
- Transfers are one‑Study‑at‑a‑time in this release (no bulk multi‑Study transfer UI).
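The table and the additional rules reduce to a small predicate. The sketch below is our reading of those rules, not ODM's code; the dictionary field and permission names are invented for illustration:

```python
def can_transfer_ownership(user, study, target_user):
    """Mirror the transfer rules: the current owner always may; org
    managers may per the table; the target must be another active user."""
    # Target must be an active user other than the current owner.
    if not target_user.get("active") or target_user["id"] == study["owner_id"]:
        return False
    if user["id"] == study["owner_id"]:
        return True  # current Study owner: always allowed
    manages_org = "manage_organisation" in user["permissions"]
    access_all = "access_all_data" in user["permissions"]
    shared = user["id"] in study["shared_with"]
    if manages_org and access_all and not shared:
        return True  # super-admin case: Study not shared with the user
    if manages_org and shared:
        return True  # org-level admin who can see the Study
    return False     # any other user: not permitted
```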
1. Navigate to the Study menu.
2. Hover over the Owner option in the Study menu.
3. Select Transfer ownership (visible if you meet the transfer criteria above).
4. Choose the new owner from the active‑user list (searchable dropdown).
5. Review the confirmation dialog showing:
    - Current owner
    - New owner
    - Access impact summary (new owner gains full access; previous owner loses access unless the Study is shared)
    - Cascade notice: applies to all Study objects & versions.
6. Confirm to execute the transfer.
- Owner field updates immediately.
- New owner receives full access to the Study and all related objects (all versions).
- Previous owner loses access unless the Study is explicitly shared back to them.
Each transfer generates a Changing Ownership historical entry, similar to entries in Study’s metadata Version History:
Captured: Previous Owner • New Owner • Initiating User • Date & Time.
Display examples:

- Ownership transferred — From `User A` to `User B` by `User C` on 2025‑07‑18 14:32 UTC.
- Ownership transferred — `User A` transferred ownership to `User B` on 2025‑07‑18 14:32 UTC. (transfer by current owner)
!!! note
    Ownership Version History entries are informational only; you cannot roll back to a prior owner from the Version History window. To change the owner again, perform a new transfer.
- Renamed to `genestack:accession` across the system.
- Editing and deletion of this field is disabled in the Template Editor (TE).
- The field is automatically added by script for the following entities: `study`, `genestack:sampleObject`, `genestack:libraryObject`, `genestack:preparationObject`, `genestack:facsParent`, `genestack:genomicsParent`, `genestack:transcriptomicsParent`.
If a user attempts to manually add or edit this field, the script responds accordingly:
- If added manually: `Template field "genestack:accession" of type "{data_type}" is predefined and can be omitted.` (Script continues)
- If edited: `Template field "genestack:accession" of type "{data_type}" is predefined and cannot be edited.` (Script stops)
The following fields are now predefined, read-only, and cannot be modified or added manually:
- Features (string)
- Features (numeric)
- Values (numeric)
- Data Class
Script responses for these fields:
- If added manually: `Template field "{field_name}" of type "{data_type}" is predefined and can be omitted.` (Script continues)
- If edited: `Template field "{field_name}" of type "{data_type}" is predefined and cannot be edited.` (Script stops)
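The two script responses can be modeled as one small check. This is a sketch of the behavior described above, not the actual import script; the function name and return convention are ours:

```python
# Predefined, read-only template fields listed in this section.
PREDEFINED_FIELDS = {"genestack:accession", "Features (string)",
                     "Features (numeric)", "Values (numeric)", "Data Class"}

def check_template_field(field_name, data_type, action):
    """Return (message, should_continue) for an attempt to add or
    edit a template field; predefined fields trigger the documented
    responses, anything else passes through silently."""
    if field_name not in PREDEFINED_FIELDS:
        return None, True
    if action == "add":
        return (f'Template field "{field_name}" of type "{data_type}" '
                f'is predefined and can be omitted.', True)   # script continues
    return (f'Template field "{field_name}" of type "{data_type}" '
            f'is predefined and cannot be edited.', False)    # script stops
```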
Exported files now reflect the updated field name: the field previously named `Accession` is now `genestack:accession`.
These updates not only reduce the risk of accidental metadata issues but also streamline collaboration and oversight across teams.
This release focuses on supporting structure (contents) parsing for HDF5 files.
- HDF5 File Parsing and Search Capabilities:
    - Added support to identify and parse HDF5 files during the import process (detected by `.h5` and `.h5ad` extensions).
    - File contents (groups and datasets) are now stored in ODM and displayed in the GUI.
    - Users can search for HDF5 files based on:
        - Full path (e.g., `/obs/__categories/integrated_snn_res.0.5`)
        - Partial path (e.g., `__categories/integrated_snn_res.0.5`)
        - Dataset name (e.g., `seurat_clusters`)
        - Group name (e.g., `obsm`)
- Enhanced GUI for File Contents Visualization:
    - Added a new Metadata Editor panel to display HDF5 and archive contents:
        - Shown in the sidebar with an interactive tree structure.
        - Groups are collapsed by default and can be expanded to reveal datasets and nested groups.
        - "Expand all" and "Collapse all" buttons are available.
        - Users can copy dataset or group paths via a "Copy path" icon on hover.
    - If content parsing fails, the "Contents" button is disabled and a hover message displays: "The contents could not be parsed."
- API Support for HDF5 and Archive Content Search:
    - Introduced new and updated API endpoints:
        - Find HDF5 files by structure (contents): `GET /api/v1/as-curator/files`, `GET /api/v1/as-user/files`
        - Find Studies by the structure (contents) of linked HDF5 files: `GET /api/v1/as-curator/integration/link/studies/by/files`, `GET /api/v1/as-user/integration/link/studies/by/files`
    - Supported query parameters: `full path`, `partial path`, `group`, `dataset`.
    - A new parameter, `includeContents`, includes file contents in the response.
    - If parsing fails, the response includes: `"contents": {"error": "The contents could not be parsed."}`
    - By default the "Contents" section is excluded unless `includeContents=true` is specified.
    - The response structure includes metadata and parsed contents (group, dataset, file, folder, h5 pairs).
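The four search modes amount to different ways of matching a query against a stored content path. A pure-Python sketch of the matching semantics follows; it is our interpretation of the examples above, not ODM's implementation:

```python
def matches_hdf5_query(path: str, query: str, mode: str) -> bool:
    """Match a stored HDF5 content path such as
    '/obs/__categories/integrated_snn_res.0.5' against a query."""
    parts = path.strip("/").split("/")
    if mode == "full_path":
        return path == query
    if mode == "partial_path":
        return query.strip("/") in "/".join(parts)
    if mode == "dataset":   # leaf name of the path
        return parts[-1] == query
    if mode == "group":     # any non-leaf component
        return query in parts[:-1]
    raise ValueError(f"unknown mode: {mode}")
```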
- Attach Files to Studies Without Samples

    Users can now attach files to a Study even if no Samples have been added. This allows curators to store and manage files related to an experiment during its early stages, such as experiment design documents.

    Key changes:

    - The "Add data" button is now available even when a Study has no Samples.
    - Upon clicking "Add data," a loading window opens with the focus set to "Attach a file."
- Support for Compressed Files and Archives

    Users can now attach compressed files and archives without decompressing them manually. ODM recognizes and processes the contents of these archives.

    Supported formats: `.zip`, `.gz`

    Behavior:

    - Single-file archive: the file is loaded as an attachment.
    - Multi-file archive:
        - The archive itself is loaded as an attachment.
        - Its structure (contents) is parsed, stored, and displayed in the original file structure.
        - The structure of any HDF5 file inside the archive is also parsed, stored, and displayed.
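The single- vs. multi-file distinction can be sketched with the standard `zipfile` module. This is illustrative only; ODM's actual ingestion logic is not public, and the return shape here is invented:

```python
import io
import zipfile

def classify_zip(data: bytes) -> dict:
    """Decide how an uploaded .zip would be handled: a single-file
    archive yields its one member; a multi-file archive is kept whole
    and its structure (member paths) is recorded."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        # Ignore directory entries; only real files count as members.
        names = [n for n in zf.namelist() if not n.endswith("/")]
    if len(names) == 1:
        return {"attach": names[0], "contents": None}
    return {"attach": "<archive>", "contents": sorted(names)}
```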
- Attach Files via External Links

    Users can now attach files by providing a link to an external storage location.

    Key changes:

    - Users can switch between `Local computer` and `External link` when selecting Attach a file.
    - When an external link is provided, the file is automatically fetched and uploaded to the configured S3 bucket.
    - An attached file is created in ODM from the provided external link.
- Known limitations for Attached files:
    - An S3 bucket is mandatory to upload and work with the Attached files functionality in ODM.
    - Export: if an attachment's metadata was updated and received a new version, the attached file itself cannot be exported from ODM. Workaround: export the whole Study instead. We are working on improvements to this functionality in the `1.61` release.
Previously, the Study Browser and API displayed draft (staging) metadata. This could result in discrepancies and display unverified information. With this update, only the latest published metadata will be visible, improving data consistency and trustworthiness.
Key Updates:
- Study Browser Display:
    - Metadata shown under study titles (study descriptions) now reflects only the latest published version.
- Advanced Search Functionality:
    - All advanced search queries return results based on published metadata, including:
        - Full-text search
        - Search by wildcards
        - Search with logical operators
        - Ontology-based search
        - Facet search
- User and Curator API Endpoints:
    - Both User and Curator APIs now return metadata exclusively from the latest published version, ensuring external integrations access validated information.
- Restrict default template deletion
    - Updated the `wipeStudy` method to prevent deletion of a template marked as the default.
    - If an attempt is made to delete the default template, the following error message is shown: "Default template could not be deleted from the system. Please, set another template as default and try again."
- Save Template
    - Starting from this version, after editing template fields, users must click either the "Save changes" or "Cancel" button. This prevents unwanted changes.
When a data file imported from a local computer is deleted, the corresponding file on S3 is now automatically removed.
This applies to direct deletions and to deletions resulting from removing a file from related entities (a Study, Sample group, Library group, or Preparation group).
Existing attachments have been migrated to ensure accurate linking between data files and their S3 counterparts.








