Improving Storage Migration with Changes to map_new_from_slices, map_unpack_to_slice, and contracttype
#1877
This makes sense to me. The behavior might be slightly surprising if someone tried to inspect the map manually, but we don't expect this to be a common pattern (even though LLMs suggest this currently). But I wonder if we should name this more appropriately to the use case, like …
@khomiakmaxim posted at stellar/stellar-docs#2228 (comment) highlighting that the …
Data migration in Soroban contracts is currently more difficult than it needs to be. When a contract's data structures evolve—such as adding new optional fields—developers face significant friction.
Consider a simple upgrade scenario: a contract stores `DataV1` with fields `{a, b}`, and needs to upgrade to a new wasm where that data gains a new field `c` and becomes `{a, b, c}`.
Examples
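The two data shapes in this scenario can be sketched as plain Rust structs. This is an illustrative assumption: in a real contract both would carry soroban-sdk's `#[contracttype]` attribute, and the field types here are placeholders.

```rust
// Illustrative sketch only: in a real Soroban contract both structs would be
// annotated with soroban_sdk's #[contracttype]. Field types are assumptions.

#[derive(Debug, Clone, PartialEq)]
pub struct DataV1 {
    pub a: u32,
    pub b: u32,
}

#[derive(Debug, Clone, PartialEq)]
pub struct DataV2 {
    pub a: u32,
    pub b: u32,
    // The newly added field; ideally old ledger data would load as c = None.
    pub c: Option<u32>,
}

fn main() {
    let old = DataV1 { a: 1, b: 2 };
    // The desired migration outcome: the missing field defaults to None.
    let new = DataV2 { a: old.a, b: old.b, c: None };
    println!("{:?}", new);
}
```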
Example 1: Adding an Optional Field
A developer's natural assumption would be to try an `Option<T>` field. Based on the behaviour of many other APIs, it's easy to assume that data without the field would deserialize with `c = Option::None`. This fails because the host validates field count and names before the SDK or the developer can handle any mismatch.
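The failure mode can be illustrated with a plain-Rust simulation of the host's strict unpack. The map and `unpack_strict` function below are stand-ins for host behaviour, not the actual host implementation:

```rust
use std::collections::BTreeMap;

// Stand-in for the host's map_unpack_to_slice: it requires the stored map's
// key set to match the requested keys exactly, and errors on any mismatch,
// mirroring the Error(Object, UnexpectedSize) trap described below.
fn unpack_strict(map: &BTreeMap<&str, i64>, keys: &[&str]) -> Result<Vec<i64>, String> {
    if map.len() != keys.len() {
        return Err("UnexpectedSize: field count mismatch".to_string());
    }
    keys.iter()
        .map(|k| map.get(k).copied().ok_or_else(|| format!("missing key {k}")))
        .collect()
}

fn main() {
    // Old data on the ledger: {a, b}.
    let stored: BTreeMap<&str, i64> = BTreeMap::from([("a", 1), ("b", 2)]);

    // New code asks for {a, b, c}, hoping c defaults to None. It cannot:
    // the size check fails before any per-field handling runs.
    let result = unpack_strict(&stored, &["a", "b", "c"]);
    assert!(result.is_err());
    println!("{result:?}");
}
```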
Example 2: Explicit Fallback Logic
Knowing the simple approach doesn't work, a developer might try explicit fallback with `try_from_val`. The name suggests you can try to convert from the val, and in the vast majority of implementations `try_from_val` results in a guest-side error if it fails. However, that's not the case with `contracttype` structs. The host traps with `Error(Object, UnexpectedSize)` before `try_from_val` can return an `Err`.
Current Workarounds
Workaround 1: Version Marker
The best workaround I'm aware of today is to store an explicit version number and check it before reading.
This works, but the pattern is non-obvious to discover.
It can be retrofitted onto any existing storage, because the program can assume that data without a version key is the first version of the data.
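A minimal sketch of the version-marker pattern, again in plain Rust with a map standing in for contract storage. The key names (`VERSION`, `c`) and the value types are assumptions for illustration:

```rust
use std::collections::BTreeMap;

// Simulated contract storage: a key-value map standing in for instance storage.
type Storage = BTreeMap<String, i64>;

// Read the new field, branching on an explicit version key. A missing version
// key is assumed to mean the first version of the data.
fn read_c(storage: &Storage) -> Option<i64> {
    match storage.get("VERSION").copied().unwrap_or(1) {
        1 => None, // DataV1 had no field c.
        _ => storage.get("c").copied(),
    }
}

// Upgrade: write the new field and bump the version marker together.
fn migrate_to_v2(storage: &mut Storage, c: i64) {
    storage.insert("c".to_string(), c);
    storage.insert("VERSION".to_string(), 2);
}

fn main() {
    // Pre-existing storage written by the old contract: no version key.
    let mut storage = Storage::from([("a".to_string(), 1), ("b".to_string(), 2)]);
    assert_eq!(read_c(&storage), None);

    migrate_to_v2(&mut storage, 3);
    assert_eq!(read_c(&storage), Some(3));
    println!("migrated: {storage:?}");
}
```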
Workaround 2: Map Inspection
I have also seen LLMs suggest workarounds that inspect the map directly, checking either its length or its keys. Both approaches are verbose and error prone.
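A sketch of the length-inspection variant, simulated in plain Rust (in a real contract this would inspect a `soroban_sdk::Map`; the fragility is the point, not the exact API):

```rust
use std::collections::BTreeMap;

// Stand-in for stored map data; in a real contract this would be a
// soroban_sdk::Map read from storage.
fn read_c(stored: &BTreeMap<&str, i64>) -> Option<i64> {
    // Length-based inspection: assume a 2-entry map is DataV1 and a
    // 3-entry map is DataV2. This is fragile: a map where a field was
    // renamed or removed can still have the "right" length.
    match stored.len() {
        2 => None,
        3 => stored.get("c").copied(),
        _ => panic!("unexpected schema"),
    }
}

fn main() {
    let v1 = BTreeMap::from([("a", 1), ("b", 2)]);
    let v2 = BTreeMap::from([("a", 1), ("b", 2), ("c", 3)]);
    assert_eq!(read_c(&v1), None);
    assert_eq!(read_c(&v2), Some(3));

    // The error-prone case: same length, different keys, silently "works"
    // instead of detecting the schema mismatch.
    let renamed = BTreeMap::from([("a", 1), ("b", 2), ("d", 9)]);
    assert_eq!(read_c(&renamed), None);
}
```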
Proposal
Introduce v2 versions of the map packing/unpacking host functions with the following semantics:
`map_unpack_to_slice_v2`: When reading maps:
a. Unpack missing keys as `Val::VOID` values (which convert to `None` for `Option<T>` fields).
b. Ignore keys present in the map being unpacked but not requested.
c. Return an `Error` instead of trapping on schema mismatches.
`map_new_from_slices_v2`: When writing maps, omit key-value pairs whose values are `Val::VOID`. This is mainly for symmetry with the unpack host fn, though it is also an optimisation as a bonus (originally proposed in https://github.com/orgs/stellar/discussions/1750).
These changes would let a developer migrate by simply reading old data into a new struct whose added fields are `Option<T>`.
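The proposed unpack semantics can be sketched in plain Rust, with `Option<i64>` standing in for `Val` and `Val::VOID`. This is a simulation of the proposal's behaviour, not the host function itself:

```rust
use std::collections::BTreeMap;

// Sketch of the proposed map_unpack_to_slice_v2 semantics:
// a. Missing keys unpack as VOID (None here).
// b. Keys in the map that were not requested are simply ignored.
fn unpack_v2(map: &BTreeMap<&str, i64>, keys: &[&str]) -> Vec<Option<i64>> {
    keys.iter().map(|k| map.get(k).copied()).collect()
}

fn main() {
    // Old DataV1 on the ledger: {a, b}. New code requests {a, b, c}.
    let stored = BTreeMap::from([("a", 1), ("b", 2)]);
    let fields = unpack_v2(&stored, &["a", "b", "c"]);

    // The missing field comes back as VOID/None, so the SDK could map it
    // straight onto an Option<T> struct field with no custom migration code.
    assert_eq!(fields, vec![Some(1), Some(2), None]);
    println!("{fields:?}");
}
```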
Why This Must Be Solved at the Host Level
Handling schema mismatches guest-side is impractical: validating all the fields would require iterating over the map in full, nullifying any benefit of using the map unpack host fn. A developer might optimise by checking only the field count, but that check is incomplete when fields have been removed or renamed.
Alternatives
1. Improve Documentation
Very little documentation exists on how to do migrations on Soroban, so I've opened an issue to improve the documentation about migrations. If everyone needing to do migrations read that documentation, that might suffice:
Related