Commit e323324

rghaddab authored and github-actions[bot] committed

[nrf fromlist] fs: zms: multiple fixes from previous PR review

This resolves comments raised in PR zephyrproject-rtos/zephyr#77930.
It also adds a section to the documentation with recommendations to
increase ZMS performance.

Upstream PR #: 80407

Signed-off-by: Riadh Ghaddab <[email protected]>
(cherry picked from commit a40ba482bd65ba75540454468cbce4cfcbcc4bf2)
(cherry picked from commit 53f2704)

1 parent 549a4a5

File tree

7 files changed: +273 additions, -181 deletions


doc/services/storage/zms/zms.rst

Lines changed: 60 additions & 24 deletions
@@ -201,9 +201,9 @@ An entry has 16 bytes divided between these variables :
 
     struct zms_ate {
        uint8_t crc8; /* crc8 check of the entry */
-       uint8_t cycle_cnt; /* cycle counter for non erasable devices */
-       uint32_t id; /* data id */
+       uint8_t cycle_cnt; /* cycle counter for non-erasable devices */
        uint16_t len; /* data len within sector */
+       uint32_t id; /* data id */
        union {
            uint8_t data[8]; /* used to store small size data */
            struct {
@@ -218,30 +218,31 @@ An entry has 16 bytes divided between these variables :
        };
     } __packed;
 
-.. note:: The data CRC is checked only when the whole data of the element is read.
-   The data CRC is not checked for a partial read, as it is computed for the complete set of data.
+.. note:: The CRC of the data is checked only when the whole element is read.
+   The CRC of the data is not checked for a partial read, as it is computed for the whole element.
 
-.. note:: Enabling the data CRC feature on a previously existing ZMS content without
-   data CRC will make all existing data invalid.
+.. note:: Enabling the CRC feature on previously existing ZMS content without CRC enabled
+   will make all existing data invalid.
 
 .. _free-space:
 
 Available space for user data (key-value pairs)
 ***********************************************
 
-For both scenarios ZMS should have always an empty sector to be able to perform the garbage
-collection.
-So if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store
-Key-value pairs and keep always one (rotating sector) empty to be able to launch GC.
+For both scenarios ZMS should always have an empty sector to be able to perform the
+garbage collection (GC).
+So, if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store
+key-value pairs and keep one sector empty to be able to launch GC.
+The empty sector will rotate between the 4 sectors in the partition.
 
 .. note:: The maximum single data length that could be written at once in a sector is 64K
    (This could change in future versions of ZMS)
 
 Small data values
 =================
 
-For small data values (<= 8 bytes), the data is stored within the entry (ATE) itself and no data
-is written at the top of the sector.
+Values smaller than or equal to 8 bytes are stored within the entry (ATE) itself, without
+writing data at the top of the sector.
 ZMS has an entry size of 16 bytes, which means that the maximum available space in a partition to
 store data is computed in this scenario as :
 
@@ -265,7 +266,7 @@ Large data values
 =================
 
 Large data values ( > 8 bytes) are stored separately at the top of the sector.
-In this case it is hard to estimate the free available space as this depends on the size of
+In this case, it is hard to estimate the free available space, as this depends on the size of
 the data. But we can take into account that for N bytes of data (N > 8 bytes) an additional
 16 bytes of ATE must be added at the bottom of the sector.
 
@@ -286,17 +287,17 @@ This storage system is optimized for devices that do not require an erase.
 Using storage systems that rely on an erase-value (NVS as an example) will need to emulate the
 erase with write operations. This will cause a significant decrease in the life expectancy of
 these devices and will cause more delays for write operations and for initialization.
-ZMS introduces a cycle count mechanism that avoids emulating erase operation for these devices.
+ZMS uses a cycle count mechanism that avoids emulating the erase operation for these devices.
 It also guarantees that every memory location is written only once for each cycle of sector write.
 
-As an example, to erase a 4096 bytes sector on a non erasable device using NVS, 256 flash writes
+As an example, to erase a 4096 bytes sector on a non-erasable device using NVS, 256 flash writes
 must be performed (supposing that write-block-size=16 bytes), while using ZMS only 1 write of
 16 bytes is needed. This operation is 256 times faster in this case.
 
 The garbage collection operation also adds some writes to the memory cell life expectancy, as it
 moves some blocks from one sector to another.
 To make the garbage collector not affect the life expectancy of the device, it is recommended
-to dimension correctly the partition size. Its size should be the double of the maximum size of
+to correctly dimension the partition size. Its size should be double the maximum size of
 data (including extra headers) that could be written in the storage.
 
 See :ref:`free-space`.
@@ -307,10 +308,10 @@ Device lifetime calculation
 Storage devices, whether they are classical flash or newer technologies like RRAM/MRAM, have a
 limited life expectancy, which is determined by the number of times memory cells can be erased/written.
 Flash devices are erased one page at a time as part of their functional behavior (otherwise
-memory cells cannot be overwritten) and for non erasable storage devices memory cells can be
+memory cells cannot be overwritten) and for non-erasable storage devices memory cells can be
 overwritten directly.
 
-A typical scenario is shown here to calculate the life expectancy of a device.
+A typical scenario is shown here to calculate the life expectancy of a device:
 Let's suppose that we store an 8 bytes variable using the same ID but its content changes every
 minute. The partition has 4 sectors with 1024 bytes each.
 Each write of the variable requires 16 bytes of storage.
@@ -361,9 +362,9 @@ Existing features
 =================
 Version1
 --------
-- Supports non erasable devices (only one write operation to erase a sector)
+- Supports non-erasable devices (only one write operation to erase a sector)
 - Supports large partition size and sector size (64 bits address space)
-- Supports large IDs width (32 bits) to store ID/Value pairs
+- Supports 32-bit IDs to store ID/Value pairs
 - Small sized data ( <= 8 bytes) are stored in the ATE itself
 - Built-in Data CRC32 (included in the ATE)
 - Versioning of ZMS (to handle future evolution)
@@ -375,7 +376,7 @@ Future features
 - Add multiple format ATE support to be able to use ZMS with different ATE formats that satisfy
   requirements from the application
 - Add the possibility to skip the garbage collector for some application usages where ID/value pairs
-  are written periodically and do not exceed half of the partition size (ther is always an old
+  are written periodically and do not exceed half of the partition size (there is always an old
   entry with the same ID).
 - Divide IDs into namespaces and allocate IDs on demand from application to handle collisions
   between IDs used by different subsystems or samples.
@@ -394,9 +395,9 @@ functionality: :ref:`NVS <nvs_api>` and :ref:`FCB <fcb_api>`.
 Which one to use in your application will depend on your needs and the hardware you are using,
 and this section provides information to help make a choice.
 
-- If you are using a non erasable technology device like RRAM or MRAM, :ref:`ZMS <zms_api>` is definitely the
-  best fit for your storage subsystem as it is designed very well to avoid emulating erase for
-  these devices and replace it by a single write call.
+- If you are using a non-erasable technology device like RRAM or MRAM, :ref:`ZMS <zms_api>` is definitely the
+  best fit for your storage subsystem, as it is designed to avoid emulating the erase operation
+  using large block writes for these devices and replaces it with a single write call.
 - For devices with a large write_block_size and/or a sector size that differs from the
   classical flash page size (equal to erase_block_size), :ref:`ZMS <zms_api>` is also the best fit, as there is
   the possibility to customize these parameters and add support for these devices in ZMS.
@@ -414,6 +415,41 @@ verified to make sure that the application could work with one subsystem or the
 both solutions could be implemented, the best choice should be based on the calculations of the
 life expectancy of the device described in this section: :ref:`wear-leveling`.
 
+Recommendations to increase performance
+***************************************
+
+Sector size and count
+=====================
+
+- The total size of the storage partition should be well dimensioned to achieve the best
+  performance for ZMS.
+  All the information regarding the effectively available free space in ZMS can be found
+  in the documentation. See :ref:`free-space`.
+  We recommend choosing a storage partition that can hold double the size of the key-value pairs
+  that will be written in the storage.
+- The size of a sector needs to be dimensioned to hold the maximum data length that will be stored.
+  Increasing the size of a sector will slow down the garbage collection operation, which will
+  occur less frequently.
+  Decreasing its size, on the contrary, will make the garbage collection operation faster, but it
+  will occur more frequently.
+- For some subsystems like :ref:`Settings <settings_api>`, all path-value pairs are split into two ZMS entries (ATEs).
+  The headers needed by the two entries should be accounted for when computing the needed storage space.
+- Storing small data in the ZMS entries can increase the performance, as this data is
+  written within the entry header.
+  For example, for the :ref:`Settings <settings_api>` subsystem, choosing a path name that is
+  less than or equal to 8 bytes can make reads and writes faster.
+
+Dimensioning cache
+==================
+
+- When using the ZMS API directly, the recommended cache size should be, at least, equal to
+  the number of different entries that will be written in the storage.
+- Each additional cache entry will add 8 bytes to your RAM usage, so the cache size should be
+  carefully chosen.
+- If you use ZMS through :ref:`Settings <settings_api>`, you have to take into account that each Settings entry is
+  divided into two ZMS entries. The recommended cache size should be, at least, twice the number
+  of Settings entries.
+
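The cache guidance above can be condensed into two small helpers. These are illustrative only (hypothetical names, not part of the ZMS API); the 8-bytes-per-entry RAM cost and the two-ZMS-entries-per-Settings-entry rule come from the text above.

```c
#include <stdint.h>

/* Recommended lookup-cache slots: one per distinct ID written directly,
 * plus two per Settings entry (each path-value pair becomes two ATEs). */
static inline uint32_t recommended_cache_entries(uint32_t direct_ids,
						 uint32_t settings_entries)
{
	return direct_ids + 2u * settings_entries;
}

/* RAM cost of the cache: 8 bytes per cache entry. */
static inline uint32_t cache_ram_bytes(uint32_t cache_entries)
{
	return 8u * cache_entries;
}
```

For example, an application storing 10 Settings entries and nothing else would want at least 20 cache entries, costing 160 bytes of RAM.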
 Sample
 ******
 