
Commit 49a0679

Revert "[nrf fromlist] fs: zms: multiple fixes from previous PR review"
This reverts commit 53f2704.

Signed-off-by: Riadh Ghaddab <[email protected]>
1 parent 015b317 commit 49a0679


7 files changed (+181, -273 lines changed)

doc/services/storage/zms/zms.rst

Lines changed: 24 additions & 60 deletions
@@ -201,9 +201,9 @@ An entry has 16 bytes divided between these variables :
 
     struct zms_ate {
         uint8_t crc8; /* crc8 check of the entry */
-        uint8_t cycle_cnt; /* cycle counter for non-erasable devices */
-        uint16_t len; /* data len within sector */
+        uint8_t cycle_cnt; /* cycle counter for non erasable devices */
         uint32_t id; /* data id */
+        uint16_t len; /* data len within sector */
         union {
             uint8_t data[8]; /* used to store small size data */
             struct {
@@ -218,31 +218,30 @@ An entry has 16 bytes divided between these variables :
         };
     } __packed;
 
-.. note:: The CRC of the data is checked only when the whole the element is read.
-   The CRC of the data is not checked for a partial read, as it is computed for the whole element.
+.. note:: The data CRC is checked only when the whole data of the element is read.
+   The data CRC is not checked for a partial read, as it is computed for the complete set of data.
 
-.. note:: Enabling the CRC feature on previously existing ZMS content without CRC enabled
-   will make all existing data invalid.
+.. note:: Enabling the data CRC feature on a previously existing ZMS content without
+   data CRC will make all existing data invalid.
 
 .. _free-space:
 
 Available space for user data (key-value pairs)
 ***********************************************
 
-For both scenarios ZMS should always have an empty sector to be able to perform the
-garbage collection (GC).
-So, if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store
-Key-value pairs and keep one sector empty to be able to launch GC.
-The empty sector will rotate between the 4 sectors in the partition.
+For both scenarios ZMS should have always an empty sector to be able to perform the garbage
+collection.
+So if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store
+Key-value pairs and keep always one (rotating sector) empty to be able to launch GC.
 
 .. note:: The maximum single data length that could be written at once in a sector is 64K
    (This could change in future versions of ZMS)
 
 Small data values
 =================
 
-Values smaller than 8 bytes will be stored within the entry (ATE) itself, without writing data
-at the top of the sector.
+For small data values (<= 8 bytes), the data is stored within the entry (ATE) itself and no data
+is written at the top of the sector.
 ZMS has an entry size of 16 bytes which means that the maximum available space in a partition to
 store data is computed in this scenario as :
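
To put rough numbers on the small-data case above, the sketch below estimates how many key-value pairs fit in a 4-sector partition. SECTOR_SIZE, SECTOR_COUNT and especially RESERVED_ATES are illustrative assumptions (the exact number of bookkeeping entries ZMS reserves per sector is not shown in this diff), so the result is an estimate rather than the exact formula from the documentation.

#include <stdio.h>

#define ATE_SIZE      16u   /* one ZMS entry (ATE) is 16 bytes */
#define SECTOR_SIZE   1024u /* example sector size */
#define SECTOR_COUNT  4u    /* example sector count */
/* Assumed number of ATEs reserved per sector for internal bookkeeping
 * (sector header, close marker, ...); the real ZMS accounting may differ. */
#define RESERVED_ATES 3u

int main(void)
{
    /* One sector always stays empty so garbage collection can run. */
    unsigned int usable_sectors = SECTOR_COUNT - 1u;
    unsigned int ates_per_sector = SECTOR_SIZE / ATE_SIZE - RESERVED_ATES;

    /* For values <= 8 bytes the data lives inside the ATE itself, so each
     * key-value pair costs exactly one 16-byte entry. */
    unsigned int max_pairs = usable_sectors * ates_per_sector;

    printf("max key-value pairs (<= 8 bytes each): %u\n", max_pairs);
    printf("max user data: %u bytes\n", max_pairs * 8u);
    return 0;
}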

@@ -266,7 +265,7 @@ Large data values
 =================
 
 Large data values ( > 8 bytes) are stored separately at the top of the sector.
-In this case, it is hard to estimate the free available space, as this depends on the size of
+In this case it is hard to estimate the free available space as this depends on the size of
 the data. But we can take into account that for N bytes of data (N > 8 bytes) an additional
 16 bytes of ATE must be added at the bottom of the sector.
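
For the large-data case, the per-value cost can be sketched as below. Rounding the data up to WRITE_BLOCK_SIZE is an assumption of this sketch (the text only states that an extra 16-byte ATE is needed per value); the constants are arbitrary examples.

#include <stdio.h>

#define ATE_SIZE         16u /* entry appended at the bottom of the sector */
/* Assumed alignment of the data written at the top of the sector; the real
 * rule depends on the device write-block-size as handled by ZMS. */
#define WRITE_BLOCK_SIZE 16u

/* Approximate sector space used by one value of n bytes (n > 8): the data,
 * rounded up to the write block size, plus one 16-byte ATE. */
static unsigned int large_value_cost(unsigned int n)
{
    unsigned int padded =
        (n + WRITE_BLOCK_SIZE - 1u) / WRITE_BLOCK_SIZE * WRITE_BLOCK_SIZE;

    return padded + ATE_SIZE;
}

int main(void)
{
    printf("a 100-byte value uses about %u bytes of sector space\n",
           large_value_cost(100u));
    return 0;
}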

@@ -287,17 +286,17 @@ This storage system is optimized for devices that do not require an erase.
 Using storage systems that rely on an erase-value (NVS as an example) will need to emulate the
 erase with write operations. This will cause a significant decrease in the life expectancy of
 these devices and will cause more delays for write operations and for initialization.
-ZMS uses a cycle count mechanism that avoids emulating erase operation for these devices.
+ZMS introduces a cycle count mechanism that avoids emulating erase operation for these devices.
 It also guarantees that every memory location is written only once for each cycle of sector write.
 
-As an example, to erase a 4096 bytes sector on a non-erasable device using NVS, 256 flash writes
+As an example, to erase a 4096 bytes sector on a non erasable device using NVS, 256 flash writes
 must be performed (supposing that write-block-size=16 bytes), while using ZMS only 1 write of
 16 bytes is needed. This operation is 256 times faster in this case.
 
 Garbage collection operation is also adding some writes to the memory cell life expectancy as it
 is moving some blocks from one sector to another.
 To make the garbage collector not affect the life expectancy of the device it is recommended
-to correctly dimension the partition size. Its size should be the double of the maximum size of
+to dimension correctly the partition size. Its size should be the double of the maximum size of
 data (including extra headers) that could be written in the storage.
 
 See :ref:`free-space`.
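
The 256-to-1 figure quoted above is plain arithmetic on the numbers given in the text; a quick C check:

#include <stdio.h>

int main(void)
{
    const unsigned int sector_size = 4096u;    /* bytes, from the example */
    const unsigned int write_block_size = 16u; /* bytes, from the example */

    /* Emulated erase on a non-erasable device (NVS-style): the whole sector
     * must be overwritten block by block. */
    unsigned int emulated_erase_writes = sector_size / write_block_size;

    /* ZMS cycle-count approach: a single 16-byte write marks the new erase
     * cycle of the sector. */
    unsigned int zms_writes = 1u;

    printf("emulated erase: %u writes, ZMS: %u write (%ux fewer)\n",
           emulated_erase_writes, zms_writes,
           emulated_erase_writes / zms_writes);
    return 0;
}
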
@@ -308,10 +307,10 @@ Device lifetime calculation
 Storage devices whether they are classical Flash or new technologies like RRAM/MRAM has a limited
 life expectancy which is determined by the number of times memory cells can be erased/written.
 Flash devices are erased one page at a time as part of their functional behavior (otherwise
-memory cells cannot be overwritten) and for non-erasable storage devices memory cells can be
+memory cells cannot be overwritten) and for non erasable storage devices memory cells can be
 overwritten directly.
 
-A typical scenario is shown here to calculate the life expectancy of a device:
+A typical scenario is shown here to calculate the life expectancy of a device.
 Let's suppose that we store an 8 bytes variable using the same ID but its content changes every
 minute. The partition has 4 sectors with 1024 bytes each.
 Each write of the variable requires 16 bytes of storage.
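
A rough lifetime estimate for that scenario can be sketched as follows. The model ignores the few bookkeeping entries per sector, assumes exactly one 16-byte write per minute, and uses a hypothetical endurance of 20,000 write cycles per memory cell, so the output is only an order-of-magnitude figure, not the calculation from the full documentation.

#include <stdio.h>

int main(void)
{
    /* Scenario from the text: 8-byte value updated every minute,
     * 4 sectors of 1024 bytes, 16 bytes of storage per write. */
    const unsigned int sector_size = 1024u;
    const unsigned int sector_count = 4u;
    const unsigned int bytes_per_write = 16u;

    /* Hypothetical endurance of one memory cell; real devices differ. */
    const unsigned int cell_write_cycles = 20000u;

    /* Rough model: a sector holds sector_size / bytes_per_write entries and,
     * with wear spread over the whole partition, a given cell is rewritten
     * about once per full rotation over all sectors. */
    unsigned int entries_per_sector = sector_size / bytes_per_write; /* 64 */
    unsigned int minutes_per_rotation = entries_per_sector * sector_count;

    unsigned long long lifetime_minutes =
        (unsigned long long)cell_write_cycles * minutes_per_rotation;

    printf("approx. lifetime: %llu minutes (~%llu years)\n",
           lifetime_minutes, lifetime_minutes / (60ull * 24u * 365u));
    return 0;
}
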
@@ -362,9 +361,9 @@ Existing features
 =================
 Version1
 --------
-- Supports non-erasable devices (only one write operation to erase a sector)
+- Supports non erasable devices (only one write operation to erase a sector)
 - Supports large partition size and sector size (64 bits address space)
-- Supports 32-bit IDs to store ID/Value pairs
+- Supports large IDs width (32 bits) to store ID/Value pairs
 - Small sized data ( <= 8 bytes) are stored in the ATE itself
 - Built-in Data CRC32 (included in the ATE)
 - Versionning of ZMS (to handle future evolution)
@@ -376,7 +375,7 @@ Future features
 - Add multiple format ATE support to be able to use ZMS with different ATE formats that satisfies
   requirements from application
 - Add the possibility to skip garbage collector for some application usage where ID/value pairs
-  are written periodically and do not exceed half of the partition size (there is always an old
+  are written periodically and do not exceed half of the partition size (ther is always an old
   entry with the same ID).
 - Divide IDs into namespaces and allocate IDs on demand from application to handle collisions
   between IDs used by different subsystems or samples.
@@ -395,9 +394,9 @@ functionality: :ref:`NVS <nvs_api>` and :ref:`FCB <fcb_api>`.
 Which one to use in your application will depend on your needs and the hardware you are using,
 and this section provides information to help make a choice.
 
-- If you are using a non-erasable technology device like RRAM or MRAM, :ref:`ZMS <zms_api>` is definitely the
-  best fit for your storage subsystem as it is designed to avoid emulating erase operation using
-  large block writes for these devices and replaces it with a single write call.
+- If you are using a non erasable technology device like RRAM or MRAM, :ref:`ZMS <zms_api>` is definitely the
+  best fit for your storage subsystem as it is designed very well to avoid emulating erase for
+  these devices and replace it by a single write call.
 - For devices with large write_block_size and/or needs a sector size that is different than the
   classical flash page size (equal to erase_block_size), :ref:`ZMS <zms_api>` is also the best fit as there is
   the possibility to customize these parameters and add the support of these devices in ZMS.
@@ -415,41 +414,6 @@ verified to make sure that the application could work with one subsystem or the
 both solutions could be implemented, the best choice should be based on the calculations of the
 life expectancy of the device described in this section: :ref:`wear-leveling`.
 
-Recommendations to increase performance
-***************************************
-
-Sector size and count
-=====================
-
-- The total size of the storage partition should be well dimensioned to achieve the best
-  performance for ZMS.
-  All the information regarding the effectively available free space in ZMS can be found
-  in the documentation. See :ref:`free-space`.
-  We recommend choosing a storage partition that can hold double the size of the key-value pairs
-  that will be written in the storage.
-- The size of a sector needs to be dimensioned to hold the maximum data length that will be stored.
-  Increasing the size of a sector will slow down the garbage collection operation which will
-  occur less frequently.
-  Decreasing its size, in the opposite, will make the garbage collection operation faster
-  which will occur more frequently.
-- For some subsystems like :ref:`Settings <settings_api>`, all path-value pairs are split into two ZMS entries (ATEs).
-  The header needed by the two entries should be accounted when computing the needed storage space.
-- Using small data to store in the ZMS entries can increase the performance, as this data is
-  written within the entry header.
-  For example, for the :ref:`Settings <settings_api>` subsystem, choosing a path name that is
-  less than or equal to 8 bytes can make reads and writes faster.
-
-Dimensioning cache
-==================
-
-- When using ZMS API directly, the recommended cache size should be, at least, equal to
-  the number of different entries that will be written in the storage.
-- Each additional cache entry will add 8 bytes to your RAM usage. Cache size should be carefully
-  chosen.
-- If you use ZMS through :ref:`Settings <settings_api>`, you have to take into account that each Settings entry is
-  divided into two ZMS entries. The recommended cache size should be, at least, twice the number
-  of Settings entries.
-
 Sample
 ******
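
The cache-dimensioning advice removed in this hunk reduces to a small calculation. In the sketch below APP_SETTINGS_ENTRIES is a made-up application figure; the factor of two and the 8 bytes of RAM per cache entry come from the removed text.

#include <stdio.h>

/* Hypothetical number of distinct Settings path-value pairs the
 * application stores. */
#define APP_SETTINGS_ENTRIES   12u

/* Each Settings path-value pair is split into two ZMS entries, so the
 * recommended cache size is at least twice the number of Settings entries. */
#define RECOMMENDED_CACHE_SIZE (2u * APP_SETTINGS_ENTRIES)

/* Each cache entry costs 8 bytes of RAM. */
#define CACHE_RAM_COST_BYTES   (8u * RECOMMENDED_CACHE_SIZE)

int main(void)
{
    printf("recommended cache size: %u entries (%u bytes of RAM)\n",
           RECOMMENDED_CACHE_SIZE, CACHE_RAM_COST_BYTES);
    return 0;
}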
