@rroelke commented Nov 5, 2025

This pull request resolves CORE-290 on the release-2.29 branch for backport.

It combines three pull requests to main which were all used in the resolution of CORE-290:
#5670
#5675
#5655

Merge conflicts were all trivial and/or in test code.

Report

In CORE-290 a customer reported issues with corrupt arrays after running consolidation. The symptom was memory allocation errors when opening an array. The root cause turned out to be that consolidation was writing new fragments whose fragment domain did not match the number of tiles in the fragment.

Root cause analysis

Stepping through a minimal reproducer revealed that consolidation is not at fault at all; the culprit is the global order writer's max_fragment_size_ field. This field, presumably meant only for consolidation but valid for other writes as well, instructs the writer to split its output into multiple fragments, each under the requested size, as necessary. When a write splits its data into multiple fragments this way, nothing needs to be done for sparse arrays. But dense fragment metadata relies on the domain to determine the number of tiles, and the global order writer did not update the fragment metadata domain when splitting its write into multiple fragments.
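
To make the mismatch concrete (a hedged sketch with made-up numbers, not the customer's actual schema): consider a 1D dense fragment whose metadata domain spans 100 cells with tile extent 10. Readers derive a tile count of 10 from that domain; if the writer actually flushed only 6 tiles into the fragment before starting a new one, the metadata and the on-disk data disagree.

```cpp
// Hedged illustration with made-up numbers: a dense fragment's metadata
// domain implies a tile count, which must match what was actually written.
#include <cstdint>
#include <iostream>

int main() {
  const int64_t domain_lo = 1, domain_hi = 100;  // fragment metadata domain
  const int64_t tile_extent = 10;

  // Dense readers derive the number of tiles from the domain alone.
  const int64_t tiles_implied_by_domain =
      (domain_hi - domain_lo + 1 + tile_extent - 1) / tile_extent;  // 10

  // Suppose the writer hit its maximum fragment size and flushed only 6
  // tiles, but left the metadata domain covering all 100 cells.
  const int64_t tiles_actually_written = 6;

  std::cout << "implied: " << tiles_implied_by_domain
            << ", written: " << tiles_actually_written << "\n";
  // This is the kind of mismatch that surfaced as allocation errors when
  // opening the array.
  return 0;
}
```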

This pull request fixes that issue by ensuring that the global order writer updates the fragment metadata domain to reflect what was actually written into that fragment.

This is easier said than done. In CORE-290 I offered four ways we could fix this; the chosen solution is described below.

Design

The fragment metadata domain is a bounding rectangle. This means that the global order writer must split the tiles of its input into fragments at tile boundaries which bisect the bounding rectangle into two smaller bounding rectangles.

To do so, we add a first pass, identify_fragment_tile_boundaries, which returns a list of tile offsets where new fragments will begin. Upon finishing a fragment, we use that tile offset to determine which rectangle within the target subarray the fragment actually represents, and update the fragment metadata accordingly. We add new functions: is_rectangular_domain, which determines whether a (start_tile, num_tiles) pair identifies a rectangle, and domain_tile_offset, which computes that rectangle.
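
A minimal sketch of the idea behind these helpers, reduced to a 2D grid of tiles in row-major order (the real functions operate on N-dimensional domains; the names and signatures below are illustrative, not the production code): a contiguous run of tiles forms a rectangle only if it stays within a single tile row, or starts at a row boundary and covers whole rows.

```cpp
// Illustrative 2D reduction of the rectangle checks; not the actual
// implementation or its signatures.
#include <cstdint>
#include <optional>

struct TileRect {
  int64_t row_lo, row_hi;  // inclusive tile-row bounds
  int64_t col_lo, col_hi;  // inclusive tile-column bounds
};

// Does the contiguous row-major run [start_tile, start_tile + num_tiles)
// cover a rectangle of tiles in a grid with `cols` tiles per row?
bool is_rectangular_2d(int64_t cols, int64_t start_tile, int64_t num_tiles) {
  const int64_t start_col = start_tile % cols;
  if (start_col + num_tiles <= cols) {
    return true;  // fits within a single tile row
  }
  return start_col == 0 && num_tiles % cols == 0;  // spans whole rows
}

// Compute that rectangle (in tile coordinates) when it exists.
std::optional<TileRect> tile_run_rectangle_2d(
    int64_t cols, int64_t start_tile, int64_t num_tiles) {
  if (!is_rectangular_2d(cols, start_tile, num_tiles)) {
    return std::nullopt;
  }
  const int64_t start_row = start_tile / cols;
  const int64_t start_col = start_tile % cols;
  if (start_col + num_tiles <= cols) {
    // A (possibly partial) single row.
    return TileRect{start_row, start_row, start_col, start_col + num_tiles - 1};
  }
  // Whole rows starting at column zero.
  return TileRect{start_row, start_row + num_tiles / cols - 1, 0, cols - 1};
}
```

In this 2D picture, the bisection requirement above means a usable fragment boundary is a tile offset that leaves both the finished prefix and the remaining suffix rectangular.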

Much of the complexity comes from the way consolidation actually uses the global order writer: multi-part writes. A user (or a consolidation operation) can set a domain D which it intends to write into, and then fill in all of the cells over multiple submit calls that stream in the data to write. These cells are not required to be tile-aligned. Because of that, and the need to write rectangular fragments, a single submit cannot always determine whether a tail of tiles belongs to its current fragment or must be deferred to the next. To get around this we keep those tiles in memory in the global_write_state_ and prepend them to the user input in the next submit.
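
For context, a multi-part global order write through the TileDB C++ API looks roughly like the sketch below (the array URI, attribute name, dimension type, and buffer sizes are all made up; the max_fragment_size_ splitting described above happens internally and is not visible here):

```cpp
// Sketch of a multi-part dense write in global order with the TileDB C++
// API; identifiers and sizes are illustrative only.
#include <tiledb/tiledb>
#include <vector>

void multi_part_global_write(tiledb::Context& ctx) {
  tiledb::Array array(ctx, "example_dense_array", TILEDB_WRITE);
  tiledb::Query query(ctx, array, TILEDB_WRITE);
  query.set_layout(TILEDB_GLOBAL_ORDER);

  // The caller declares up front the domain D it intends to fill in.
  // (For a dense global order write this range must coincide with tile
  // boundaries.)
  tiledb::Subarray subarray(ctx, array);
  subarray.add_range<int64_t>(0, 1, 400);
  query.set_subarray(subarray);

  // The cells are then streamed in across multiple submits; the parts
  // need not be tile-aligned.
  std::vector<int32_t> part1(150), part2(250);
  query.set_data_buffer("a", part1);
  query.submit();

  query.set_data_buffer("a", part2);
  query.submit();

  // finalize() flushes the remaining buffered tiles and commits the write.
  query.finalize();
}
```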

Testing

We add unit tests for is_rectangular_domain and domain_tile_offset, including both example tests and rapidcheck properties that assert general claims.
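
To illustrate the property-based style (a sketch only, using the illustrative 2D helper from the design section rather than the production functions), such a property might state that any run of whole tile rows is rectangular:

```cpp
// Sketch of a rapidcheck property in the spirit of the new unit tests;
// is_rectangular_2d is the illustrative 2D helper, not the production code.
#include <rapidcheck.h>

#include <cstdint>

static bool is_rectangular_2d(int64_t cols, int64_t start, int64_t num) {
  const int64_t start_col = start % cols;
  if (start_col + num <= cols) {
    return true;
  }
  return start_col == 0 && num % cols == 0;
}

int main() {
  rc::check("whole tile rows always form a rectangle", []() {
    const int64_t cols = *rc::gen::inRange<int64_t>(1, 16);
    const int64_t start_row = *rc::gen::inRange<int64_t>(0, 16);
    const int64_t num_rows = *rc::gen::inRange<int64_t>(1, 16);
    RC_ASSERT(is_rectangular_2d(cols, start_row * cols, num_rows * cols));
  });
  return 0;
}
```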

We add tests for the global order writer to make broad claims about what the writer is supposed to do with respect to the max_fragment_size parameter, with both example tests and a rapidcheck test to exercise these claims (a sketch of the first two claims follows the list). In particular:

  • no fragment should be created larger than this size
  • the combined size of any two adjacent fragments must exceed that size (otherwise they could have been a single fragment)
  • fragments correctly populate the domain of the write query
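
A sketch of how the first two claims can be stated over the sizes of the resulting fragments (plain checks over made-up numbers, not the actual test harness):

```cpp
// Illustrative checks over the sequence of fragment sizes produced by a
// write with a maximum fragment size; not the actual test code.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

bool no_fragment_exceeds_max(const std::vector<uint64_t>& sizes, uint64_t max) {
  for (auto s : sizes) {
    if (s > max) {
      return false;
    }
  }
  return true;
}

bool adjacent_pairs_exceed_max(const std::vector<uint64_t>& sizes, uint64_t max) {
  // If two neighbors fit within `max` together, they should have been
  // written as a single fragment.
  for (size_t i = 0; i + 1 < sizes.size(); i++) {
    if (sizes[i] + sizes[i + 1] <= max) {
      return false;
    }
  }
  return true;
}

int main() {
  const std::vector<uint64_t> sizes = {90, 80, 95};  // made-up fragment sizes
  const uint64_t max_fragment_size = 100;
  assert(no_fragment_exceeds_max(sizes, max_fragment_size));
  assert(adjacent_pairs_exceed_max(sizes, max_fragment_size));
  return 0;
}
```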

Since the original report came from consolidation, we also add some consolidation tests mimicking the customer input.


TYPE: BUG
DESC: Backport CORE-290 to 2.29

@rroelke requested a review from a team as a code owner November 5, 2025 15:23
@rroelke changed the base branch from main to release-2.29 November 5, 2025 15:24
@rroelke requested review from ihnorton and ypatia November 5, 2025 15:24
rroelke and others added 3 commits November 5, 2025 14:20
This pull request is made in support of #5655, which remains in draft; the intent is to pull these pieces out of that PR so that both pull requests get a more focused code review.

We have a container class `IndexedList` which is used to store
non-movable data (and thus not usable in `std::vector`) with indexing
(and thus not usable in `std::list`). The implementation uses both
`std::list` and `std::vector` underneath.
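
A hypothetical minimal sketch of that combination (illustrative only; the real `IndexedList` differs in interface and details): the `std::list` gives each element a stable address so it never has to be moved, and a parallel `std::vector` of iterators provides constant-time indexing.

```cpp
// Hypothetical sketch of the std::list + std::vector combination; not the
// real IndexedList interface.
#include <cstddef>
#include <iterator>
#include <list>
#include <utility>
#include <vector>

template <typename T>
class IndexedListSketch {
 public:
  template <typename... Args>
  T& emplace_back(Args&&... args) {
    items_.emplace_back(std::forward<Args>(args)...);
    index_.push_back(std::prev(items_.end()));
    return *index_.back();
  }

  T& operator[](size_t i) {
    return *index_[i];
  }

  size_t size() const {
    return index_.size();
  }

 private:
  std::list<T> items_;  // stable addresses; elements are never moved
  std::vector<typename std::list<T>::iterator> index_;  // O(1) indexing
};
```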

This pull request makes some improvements to this class:
- we add the `splice` method whose semantics are the same as those of
the `std::list` `splice` method. This method transfers elements from one
`IndexedList` to another. In support of this we add new `iterator` and
`const_iterator` types.
- we provide the constructor definition in the header so that
implementations do not have to declare their own specialization.
- we update the `resize` method to accept variadic copyable arguments
which will be used to initialize each of the new elements of the
container.

We also add unit tests for `splice` correctness and for `resize` avoiding the move constructor.
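
Since the new method mirrors `std::list::splice`, its semantics can be shown with the standard container directly (this is only the `std::list` behavior that `IndexedList::splice` adopts, not the `IndexedList` API itself):

```cpp
// std::list::splice semantics mirrored by IndexedList::splice: elements are
// transferred between containers by relinking, without copies or moves.
#include <cassert>
#include <list>

int main() {
  std::list<int> src = {1, 2, 3};
  std::list<int> dst = {10, 20};

  // Transfer all of src to the end of dst; src is left empty and no
  // element is copied or moved.
  dst.splice(dst.end(), src);

  assert(src.empty());
  assert((dst == std::list<int>{10, 20, 1, 2, 3}));
  return 0;
}
```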

---
TYPE: NO_HISTORY
DESC: Add `IndexedList::splice`

This pull request refactors and enhances our test data structures `struct Fragment1D` and `struct Fragment2D`, which are used as containers for inputs to write queries and outputs of read queries.
1) we move the methods which generically create arrays and write fragments using these structures into a header file instead of `sparse_global_order_reader.cc`
2) we refactor functionality into a `template <typename DimensionTuple, typename AttributeTuple> struct Fragment` which is a common base class for `Fragment1D`, `Fragment2D`, and a new `Fragment3D`, and can be used with an empty dimension tuple for dense fragments (a rough sketch of this shape follows the list)
3) we implement various fixes to enable variable-length dimensions
4) some other incidentals
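
A rough sketch of the shape of such a type using plain standard-library tuples (the real test `struct Fragment` is richer; this only illustrates how an empty dimension tuple covers the dense case):

```cpp
// Illustrative shape of a tuple-parameterized fragment container; not the
// actual test struct Fragment.
#include <cstdint>
#include <tuple>
#include <vector>

template <typename DimensionTuple, typename AttributeTuple>
struct FragmentSketch {
  DimensionTuple dims_;  // one coordinate buffer per dimension (empty for dense)
  AttributeTuple atts_;  // one buffer per attribute
};

// A 1D sparse fragment: one int64 dimension buffer, one int32 attribute buffer.
using Sparse1D = FragmentSketch<
    std::tuple<std::vector<int64_t>>,
    std::tuple<std::vector<int32_t>>>;

// A dense fragment: no coordinate buffers, only attribute data.
using Dense = FragmentSketch<std::tuple<>, std::tuple<std::vector<int32_t>>>;

int main() {
  Sparse1D s;
  std::get<0>(s.dims_) = {1, 2, 3};
  std::get<0>(s.atts_) = {10, 20, 30};

  Dense d;
  std::get<0>(d.atts_) = {10, 20, 30, 40};
  return 0;
}
```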

This pull request has been refactored out of #5646 for two reasons:
1) it is all enhancements to test code which may be distracting from the functional components of #5646;
2) I anticipate using it to add additional tests to #5655. Specifically, using the `Fragment` with an empty dimension tuple will be very nice for writing tests on dense arrays.

There is no direct testing of this, but since the code was factored out of #5646, all of it was exercised in testing there. Many components are also exercised by the refactor of the existing tests.

---
TYPE: NO_HISTORY
DESC: refactors and enhancements to test data structure `struct Fragment`
…imum fragment size (#5655)

@rroelke force-pushed the rr/release-2.29-core-290 branch from 51291de to abef3a4 November 5, 2025 19:20
@ihnorton merged commit f9cb375 into release-2.29 Nov 6, 2025
55 of 56 checks passed
@ihnorton deleted the rr/release-2.29-core-290 branch November 6, 2025 03:35