diff --git a/doc/services/settings/index.rst b/doc/services/settings/index.rst index 66c99d050a3..68b1faed548 100644 --- a/doc/services/settings/index.rst +++ b/doc/services/settings/index.rst @@ -5,9 +5,9 @@ Settings The settings subsystem gives modules a way to store persistent per-device configuration and runtime state. A variety of storage implementations are -provided behind a common API using FCB, NVS, or a file system. These different -implementations give the application developer flexibility to select an -appropriate storage medium, and even change it later as needs change. This +provided behind a common API using FCB, NVS, ZMS or a file system. These +different implementations give the application developer flexibility to select +an appropriate storage medium, and even change it later as needs change. This subsystem is used by various Zephyr components and can be used simultaneously by user applications. @@ -23,8 +23,8 @@ For an example of the settings subsystem refer to :zephyr:code-sample:`settings` .. note:: - As of Zephyr release 2.1 the recommended backend for non-filesystem - storage is :ref:`NVS `. + As of Zephyr release 4.1 the recommended backends for non-filesystem + storage are :ref:`NVS ` and :ref:`ZMS `. Handlers ******** @@ -39,7 +39,7 @@ for static handlers. :c:func:`settings_runtime_get()` from the runtime backend. **h_set** - This gets called when the value is loaded from persisted storage with + This gets called when the value is loaded from persistent storage with :c:func:`settings_load()`, or when using :c:func:`settings_runtime_set()` from the runtime backend. @@ -78,6 +78,14 @@ backend. This gets called when loading values from persistent storage using :c:func:`settings_load()`. +**csi_load_one** + This gets called when loading only one item from persistent storage using + :c:func:`settings_load_one()`. + +**csi_get_val_len** + This gets called when getting a value's length from persistent storage using + :c:func:`settings_get_val_len()`. + **csi_save** This gets called when saving a single setting to persistent storage using :c:func:`settings_save_one()`. @@ -93,10 +101,12 @@ backend. Zephyr Storage Backends *********************** -Zephyr has three storage backends: a Flash Circular Buffer -(:kconfig:option:`CONFIG_SETTINGS_FCB`), a file in the filesystem -(:kconfig:option:`CONFIG_SETTINGS_FILE`), or non-volatile storage -(:kconfig:option:`CONFIG_SETTINGS_NVS`). +Zephyr offers the following storage backends: + +* Flash Circular Buffer (:kconfig:option:`CONFIG_SETTINGS_FCB`). +* A file in the filesystem (:kconfig:option:`CONFIG_SETTINGS_FILE`). +* Non-Volatile Storage (:kconfig:option:`CONFIG_SETTINGS_NVS`). +* Zephyr Memory Storage (:kconfig:option:`CONFIG_SETTINGS_ZMS`). You can declare multiple sources for settings; settings from all of these are restored when :c:func:`settings_load()` is called. @@ -109,14 +119,27 @@ using :c:func:`settings_fcb_dst()`. As a side-effect, :c:func:`settings_fcb_src initializes the FCB area, so it must be called before calling :c:func:`settings_fcb_dst()`. File read target is registered using :c:func:`settings_file_src()`, and write target by using :c:func:`settings_file_dst()`. + Non-volatile storage read target is registered using :c:func:`settings_nvs_src()`, and write target by using :c:func:`settings_nvs_dst()`. +Zephyr Memory Storage (ZMS) read target is registered using :c:func:`settings_zms_src()`, +and write target is registered using :c:func:`settings_zms_dst()`. 
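+
+As an illustration of the single-entry load calls described later under
+`Loading data from persistent storage`_, here is a minimal sketch (the key name
+``alpha/beta`` and the buffer size are illustrative assumptions, not fixed by the API):
+
+.. code-block:: c
+
+   uint8_t buf[32];
+
+   /* Query the stored value's length first; this returns 0 if the key does
+    * not exist and a negative errno code on failure.
+    */
+   ssize_t val_len = settings_get_val_len("alpha/beta");
+
+   if (val_len > 0 && (size_t)val_len <= sizeof(buf)) {
+           /* Load this single entry without replaying the whole settings tree. */
+           val_len = settings_load_one("alpha/beta", buf, sizeof(buf));
+   }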
+ +The ZMS backend hashes the settings key before storing it to persistent +storage. This implies that collisions between key hashes can occur when a +large number of different keys is stored. This number depends on the selected +hash function. + +The ZMS backend can handle at most :math:`2^n` collisions, where n is defined by +:kconfig:option:`CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS`. + + Storage Location **************** -The FCB and non-volatile storage (NVS) backends both look for a fixed +The FCB, non-volatile storage (NVS), and ZMS backends look for a fixed partition with label "storage" by default. A different partition can be selected by setting the ``zephyr,settings-partition`` property of the chosen node in the devicetree. @@ -124,8 +147,8 @@ chosen node in the devicetree. The file path used by the file backend to store settings is selected via the option :kconfig:option:`CONFIG_SETTINGS_FILE_PATH`. -Loading data from persisted storage -*********************************** +Loading data from persistent storage +************************************ A call to :c:func:`settings_load()` uses an ``h_set`` implementation to load settings data from storage to volatile memory. @@ -133,6 +156,12 @@ After all data is loaded, the ``h_commit`` handler is issued, signalling the application that the settings were successfully retrieved. +Alternatively, a call to :c:func:`settings_load_one()` will load only one +Settings entry and store it in the provided buffer. + +To get the value's length associated with the Settings entry, a call to +:c:func:`settings_get_val_len()` should be performed. + Technically FCB and file backends may store some history of the entities. This means that the newest data entity is stored after any older existing data entities. @@ -146,7 +175,7 @@ A call to :c:func:`settings_save_one()` uses a backend implementation to store settings data to the storage medium. A call to :c:func:`settings_save()` uses an ``h_export`` implementation to store different data in one operation using :c:func:`settings_save_one()`. -A key need to be covered by a ``h_export`` only if it is supposed to be stored +A key needs to be covered by an ``h_export`` only if it is supposed to be stored by :c:func:`settings_save()` call. For both FCB and file back-end only storage requests with data which @@ -227,7 +256,7 @@ Example: Persist Runtime State This is a simple example showing how to persist runtime state. In this example, only ``h_set`` is defined, which is used when restoring value from -persisted storage. +persistent storage. In this example, the ``main`` function increments ``foo_val``, and then persists the latest number. When the system restarts, the application calls diff --git a/doc/services/storage/zms/zms.rst b/doc/services/storage/zms/zms.rst index 3523bfc5eae..8a94e6d5992 100644 --- a/doc/services/storage/zms/zms.rst +++ b/doc/services/storage/zms/zms.rst @@ -15,15 +15,15 @@ pairs until it is full. The key-value pair is divided into two parts: - The key part is written in an ATE (Allocation Table Entry) called "ID-ATE" which is stored - starting from the bottom of the sector -- The value part is defined as "DATA" and is stored raw starting from the top of the sector + starting from the bottom of the sector. +- The value part is defined as "data" and is stored raw starting from the top of the sector. 
-Additionally, for each sector we store at the last positions Header-ATEs which are ATEs that +Additionally, at the last positions of each sector we store header ATEs, which are ATEs that are needed for the sector to describe its status (closed, open) and the current version of ZMS. When the current sector is full we verify first that the following sector is empty, we garbage -collect the N+2 sector (where N is the current sector number) by moving the valid ATEs to the -N+1 empty sector, we erase the garbage collected sector and then we close the current sector by +collect sector N+2 (where N is the current sector number) by moving the valid ATEs to the +empty sector N+1, we erase the garbage-collected sector, and then we close the current sector by writing a garbage_collect_done ATE and the close ATE (one of the header entries). Afterwards we move forward to the next sector and start writing entries again. @@ -60,50 +60,50 @@ A sector is organized in this form (example with 3 sectors): - . - . * - . - - ATE_b2 - - ATE_c2 - * - ATE_a2 - - ATE_b1 - - ATE_c1 - * - ATE_a1 - - ATE_b0 - - ATE_c0 - * - ATE_a0 - - GC_done - - GC_done - * - Close (cyc=1) - - Close (cyc=1) - - Close (cyc=1) - * - Empty (cyc=1) - - Empty (cyc=2) - - Empty (cyc=2) + - ID ATE_b2 + - ID ATE_c2 + * - ID ATE_a2 + - ID ATE_b1 + - ID ATE_c1 + * - ID ATE_a1 + - ID ATE_b0 + - ID ATE_c0 + * - ID ATE_a0 + - GC_done ATE + - GC_done ATE + * - Close ATE (cyc=1) + - Close ATE (cyc=1) + - Close ATE (cyc=1) + * - Empty ATE (cyc=1) + - Empty ATE (cyc=2) + - Empty ATE (cyc=2) Definition of each element in the sector ======================================== -``Empty ATE:`` is written when erasing a sector (last position of the sector). +``Empty ATE`` is written when erasing a sector (last position of the sector). -``Close ATE:`` is written when closing a sector (second to last position of the sector). +``Close ATE`` is written when closing a sector (second to last position of the sector). -``GC_done ATE:`` is written to indicate that the next sector has been already garbage -collected. This ATE could be in any position of the sector. +``GC_done ATE`` is written to indicate that the next sector has already been garbage-collected. +This ATE could be at any position of the sector. -``ID-ATE:`` are entries that contain a 32 bits Key and describe where the data is stored, its -size and its crc32 +``ID ATE`` entries contain a 32-bit key and describe where the data is stored, its +size, and its CRC32. -``Data:`` is the actual value associated to the ID-ATE +``Data`` is the actual value associated with the ID ATE. How does ZMS work? ****************** -Mounting the Storage system +Mounting the storage system =========================== -Mounting the storage starts by getting the flash parameters, checking that the file system +Mounting the storage system starts by getting the flash parameters, checking that the file system properties are correct (sector_size, sector_count ...) then calling the zms_init function to make the storage ready. -To mount the filesystem some elements in the zms_fs structure must be initialized. +To mount the filesystem, the following elements in the ``zms_fs`` structure must be initialized: ..
code-block:: c @@ -125,43 +125,44 @@ To mount the filesystem some elements in the zms_fs structure must be initialize Initialization ============== -As ZMS has a fast-forward write mechanism, we must find the last sector and the last pointer of +As ZMS has a fast-forward write mechanism, it must find the last sector and the last pointer of the entry where it stopped the last time. It must look for a closed sector followed by an open one, then within the open sector, it finds -(recover) the last written ATE (Allocation Table Entry). +(recovers) the last written ATE. After that, it checks that the sector after this one is empty, or it will erase it. -ZMS ID-Data write =================== -To avoid rewriting the same data with the same ID again, it must look in all the sectors if the -same ID exist then compares its data, if the data is identical no write is performed. -If we must perform a write, then an ATE and Data (if not a delete) are written in the sector. -If the sector is full (cannot hold the current data + ATE) we have to move to the next sector, +ZMS ID/data write =================== +To avoid rewriting the same data with the same ID again, ZMS looks in all the sectors for the +same ID and then compares its data. If the data is identical, no write is performed. +If it must perform a write, then an ATE and the data (if the operation is not a delete) are written +in the sector. +If the sector is full (cannot hold the current data + ATE), ZMS has to move to the next sector, garbage collect the sector after the newly opened one then erase it. -Data size that is smaller or equal to 8 bytes are written within the ATE. +Data whose size is smaller than or equal to 8 bytes is written within the ATE. ZMS ID/data read (with history) =============================== -By default it looks for the last data with the same ID by browsing through all stored ATEs from +By default, ZMS looks for the last data with the same ID by browsing through all stored ATEs from the most recent ones to the oldest ones. If it finds a valid ATE with a matching ID it retrieves its data and returns the number of bytes that were read. -If history count is provided that is different than 0, older data with same ID is retrieved. +If a history count is provided and different from 0, older data with the same ID is retrieved. ZMS free space calculation ========================== ZMS can also return the free space remaining in the partition. -However, this operation is very time consuming and needs to browse all valid ATEs in all sectors -of the partition and for each valid ATE try to find if an older one exist. -It is not recommended for application to use this function often, as it is time consuming and +However, this operation is very time-consuming as it needs to browse through all valid ATEs +in all sectors of the partition and for each valid ATE try to find if an older one exists. +It is not recommended for applications to use this function often, as it is time-consuming and could slow down the calling thread. The cycle counter ================= -Each sector has a lead cycle counter which is a uin8_t that is used to validate all the other +Each sector has a lead cycle counter which is a ``uint8_t`` that is used to validate all the other ATEs. The lead cycle counter is stored in the empty ATE. To become valid, an ATE must have the same cycle counter as the one stored in the empty ATE. @@ -179,88 +180,68 @@ counter as the empty ATE. 
When closing a sector, all the remaining space that has not been used is filled with garbage data to avoid having old ATEs with a valid cycle counter. -Triggering Garbage collection +Triggering garbage collection ============================= Some applications need to make sure that storage writes have a maximum defined latency. -When calling a ZMS write, the current sector could be almost full and we need to trigger the GC -to switch to the next sector. -This operation is time consuming and it will cause some applications to not meet their real time +When ZMS is asked to perform a write, the current sector could be almost full, in which case ZMS +needs to trigger the GC to switch to the next sector. +This operation is time-consuming and will cause some applications to not meet their real-time constraints. ZMS adds an API for the application to get the current remaining free space in a sector. -The application could then decide when needed to switch to the next sector if the current one is -almost full and of course it will trigger the garbage collection on the next sector. +The application could then decide when to switch to the next sector if the current one is almost +full. This will of course trigger the garbage collection operation on the next sector. This will guarantee the application that the next write won't trigger the garbage collection. ATE (Allocation Table Entry) structure ====================================== -An entry has 16 bytes divided between these variables : +An entry has 16 bytes divided between several fields; -.. code-block:: c +see the :c:struct:`zms_ate` structure. - struct zms_ate { - uint8_t crc8; /* crc8 check of the entry */ - uint8_t cycle_cnt; /* cycle counter for non-erasable devices */ - uint16_t len; /* data len within sector */ - uint32_t id; /* data id */ - union { - uint8_t data[8]; /* used to store small size data */ - struct { - uint32_t offset; /* data offset within sector */ - union { - uint32_t data_crc; /* crc for data */ - uint32_t metadata; /* Used to store metadata information - * such as storage version. - */ - }; - }; - }; - } __packed; - -.. note:: The CRC of the data is checked only when the whole the element is read. +.. note:: The CRC of the data is checked only when a full read of the data is made. The CRC of the data is not checked for a partial read, as it is computed for the whole element. -.. note:: Enabling the CRC feature on previously existing ZMS content without CRC enabled - will make all existing data invalid. - -.. _free-space: +.. warning:: Enabling the CRC feature on previously existing ZMS content that did not have it + enabled will make all existing data invalid. Available space for user data (key-value pairs) *********************************************** -For both scenarios ZMS should always have an empty sector to be able to perform the -garbage collection (GC). +ZMS always needs an empty sector to be able to perform the garbage collection (GC). So, if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store -Key-value pairs and keep one sector empty to be able to launch GC. +key-value pairs and keep one sector empty to be able to perform GC. The empty sector will rotate between the 4 sectors in the partition. -.. note:: The maximum single data length that could be written at once in a sector is 64K - (This could change in future versions of ZMS) +.. note:: The maximum single data length that can be written at once in a sector is 64K + (this could change in future versions of ZMS). 
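+
+As an illustration of the latency-control API described under `Triggering garbage collection
+<#triggering-garbage-collection>`_, here is a minimal sketch (``fs`` is assumed to be already
+mounted, and the entry ID, payload, and free-space threshold are illustrative assumptions):
+
+.. code-block:: c
+
+   uint32_t payload = 0xBADCAFE;
+
+   /* If the active sector cannot hold one more 16-byte ATE plus the data,
+    * rotate now so that a later time-critical write does not trigger the
+    * garbage collection.
+    */
+   if (zms_active_sector_free_space(&fs) < 16 + sizeof(payload)) {
+           (void)zms_sector_use_next(&fs);
+   }
+
+   /* This write is now guaranteed not to trigger the garbage collection. */
+   ssize_t rc = zms_write(&fs, 1U, &payload, sizeof(payload));
+
+   if (rc < 0) {
+           /* Handle the write error. */
+   }
+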
Small data values ================= -Values smaller than 8 bytes will be stored within the entry (ATE) itself, without writing data -at the top of the sector. +Values smaller than or equal to 8 bytes will be stored within the entry (ATE) itself, without +writing data at the top of the sector. ZMS has an entry size of 16 bytes which means that the maximum available space in a partition to -store data is computed in this scenario as : +store data is computed in this scenario as: .. math:: - \small\frac{(NUM\_SECTORS - 1) \times (SECTOR\_SIZE - (5 \times ATE\_SIZE))}{2} + \small\frac{(NUM\_SECTORS - 1) \times (SECTOR\_SIZE - (5 \times ATE\_SIZE)) \times (DATA\_SIZE)}{ATE\_SIZE} Where: -``NUM_SECTOR:`` Total number of sectors +``NUM_SECTORS``: Total number of sectors + +``SECTOR_SIZE``: Size of the sector -``SECTOR_SIZE:`` Size of the sector +``ATE_SIZE``: 16 bytes -``ATE_SIZE:`` 16 bytes +``(5 * ATE_SIZE)``: Reserved ATEs for header and delete items -``(5 * ATE_SIZE):`` Reserved ATEs for header and delete items +``DATA_SIZE``: Size of the small data values (range from 1 to 8) -For example for 4 sectors of 1024 bytes, free space for data is :math:`\frac{3 \times 944}{2} = 1416 \, \text{ bytes}`. +For example, for 4 sectors of 1024 bytes, the free space for 8-byte data is :math:`\frac{3 \times 944 \times 8}{16} = 1416 \, \text{ bytes}`. Large data values ================= @@ -274,67 +255,66 @@ Let's take an example: For a partition that has 4 sectors of 1024 bytes and for data size of 64 bytes. Only 3 sectors are available for writes with a capacity of 944 bytes each. -Each Key-value pair needs an extra 16 bytes for ATE which makes it possible to store 11 pairs -in each sectors (:math:`\frac{944}{80}`). -Total data that could be stored in this partition for this case is :math:`11 \times 3 \times 64 = 2112 \text{ bytes}` - -.. _wear-leveling: +Each key-value pair needs an extra 16 bytes for the ATE, which makes it possible to store 11 pairs +in each sector (:math:`\frac{944}{80}`). +Total data that could be stored in this partition for this case is :math:`11 \times 3 \times 64 = 2112 \text{ bytes}`. Wear leveling ************* This storage system is optimized for devices that do not require an erase. -Using storage systems that rely on an erase-value (NVS as an example) will need to emulate the -erase with write operations. This will cause a significant decrease in the life expectancy of -these devices and will cause more delays for write operations and for initialization. -ZMS uses a cycle count mechanism that avoids emulating erase operation for these devices. +Storage systems that rely on an erase value (NVS as an example) need to emulate the erase with +write operations. This causes a significant decrease in the life expectancy of these devices +as well as more delays for write operations and initialization of the device when it is empty. +ZMS uses a cycle count mechanism that avoids emulating erase operations for these devices. It also guarantees that every memory location is written only once for each cycle of sector write. -As an example, to erase a 4096 bytes sector on a non-erasable device using NVS, 256 flash writes -must be performed (supposing that write-block-size=16 bytes), while using ZMS only 1 write of -16 bytes is needed. This operation is 256 times faster in this case. 
+As an example, to erase a 4096-byte sector on devices that do not require an erase operation +using NVS, 256 flash writes must be performed (supposing that ``write-block-size`` = 16 bytes), while +using ZMS, only 1 write of 16 bytes is needed. This operation is 256 times faster in this case. -Garbage collection operation is also adding some writes to the memory cell life expectancy as it -is moving some blocks from one sector to another. +The garbage collection operation also reduces the memory cell life expectancy as it performs write +operations when moving blocks from one sector to another. To make the garbage collector not affect the life expectancy of the device it is recommended -to correctly dimension the partition size. Its size should be the double of the maximum size of -data (including extra headers) that could be written in the storage. +to dimension the partition appropriately. Its size should be double the maximum size of +data (including headers) that could be written in the storage. -See :ref:`free-space`. +See `Available space for user data <#available-space-for-user-data-key-value-pairs>`_. Device lifetime calculation =========================== -Storage devices whether they are classical Flash or new technologies like RRAM/MRAM has a limited -life expectancy which is determined by the number of times memory cells can be erased/written. +Storage devices, whether they are classical flash or new technologies like RRAM/MRAM, have a +limited life expectancy which is determined by the number of times memory cells can be +erased/written. Flash devices are erased one page at a time as part of their functional behavior (otherwise -memory cells cannot be overwritten) and for non-erasable storage devices memory cells can be -overwritten directly. +memory cells cannot be overwritten), and for storage devices that do not require an erase +operation, memory cells can be overwritten directly. A typical scenario is shown here to calculate the life expectancy of a device: -Let's suppose that we store an 8 bytes variable using the same ID but its content changes every +Let's suppose that we store an 8-byte variable using the same ID but its content changes every minute. The partition has 4 sectors with 1024 bytes each. Each write of the variable requires 16 bytes of storage. As we have 944 bytes available for ATEs for each sector, and because ZMS is a fast-forward storage system, we are going to rewrite the first location of the first sector after :math:`\frac{(944 \times 4)}{16} = 236 \text{ minutes}`. -In addition to the normal writes, garbage collector will move the still valid data from old -sectors to new ones. +In addition to the normal writes, the garbage collector will move the data that is still valid +from old sectors to new ones. As we are using the same ID and a big partition size, no data will be moved by the garbage collector in this case. -For storage devices that could be written 20000 times, the storage will last about -4.720.000 minutes (~9 years). +For storage devices that can be written 20,000 times, the storage will last about +4,720,000 minutes (~9 years). To make a more general formula we must first compute the effective used size in ZMS by our typical set of data. 
-For id/data pair with data <= 8 bytes, effective_size is 16 bytes -For id/data pair with data > 8 bytes, effective_size is 16 bytes + sizeof(data) -Let's suppose that total_effective_size is the total size of the set of data that is written in -the storage and that the partition is well dimensioned (double of the effective size) to avoid +For ID/data pairs with data <= 8 bytes, ``effective_size`` is 16 bytes. +For ID/data pairs with data > 8 bytes, ``effective_size`` is ``16 + sizeof(data)`` bytes. +Let's suppose that ``total_effective_size`` is the total size of the data that is written in +the storage and that the partition is sized appropriately (double of the effective size) to avoid having the garbage collector moving blocks all the time. -The expected life of the device in minutes is computed as : +The expected lifetime of the device in minutes is computed as: .. math:: @@ -342,11 +322,11 @@ The expected life of the device in minutes is computed as : Where: -``SECTOR_EFFECTIVE_SIZE``: is the size sector - header_size(80 bytes) +``SECTOR_EFFECTIVE_SIZE``: The sector size - header size (80 bytes) -``SECTOR_NUMBER``: is the number of sectors +``SECTOR_NUMBER``: The number of sectors -``MAX_NUM_WRITES``: is the life expectancy of the storage device in number of writes +``MAX_NUM_WRITES``: The life expectancy of the storage device in number of writes ``TOTAL_EFFECTIVE_SIZE``: Total effective size of the set of written data @@ -360,15 +340,16 @@ such as low latency and bigger storage space. Existing features ================= -Version1 -------- -- Supports non-erasable devices (only one write operation to erase a sector) -- Supports large partition size and sector size (64 bits address space) -- Supports 32-bit IDs to store ID/Value pairs -- Small sized data ( <= 8 bytes) are stored in the ATE itself -- Built-in Data CRC32 (included in the ATE) -- Versioning of ZMS (to handle future evolution) -- Supports large write-block-size (Only for platforms that need this) +Version 1 +--------- +- Supports storage devices that do not require an erase operation (only one write operation + to invalidate a sector) +- Supports large partition and sector sizes (64-bit address space) +- Supports 32-bit IDs +- Small-sized data (<= 8 bytes) is stored in the ATE itself +- Built-in data CRC32 (included in the ATE) +- Versioning of ZMS (to handle future evolutions) +- Supports large ``write-block-size`` (only for platforms that need it) Future features =============== @@ -395,10 +376,10 @@ functionality: :ref:`NVS ` and :ref:`FCB `. Which one to use in your application will depend on your needs and the hardware you are using, and this section provides information to help make a choice. -- If you are using a non-erasable technology device like RRAM or MRAM, :ref:`ZMS ` is definitely the +- If you are using devices that do not require an erase operation like RRAM or MRAM, :ref:`ZMS ` is definitely the best fit for your storage subsystem as it is designed to avoid emulating erase operation using large block writes for these devices and replaces it with a single write call. -- For devices with large write_block_size and/or needs a sector size that is different than the +- For devices that have a large ``write_block_size`` and/or need a sector size that is different from the classical flash page size (equal to erase_block_size), :ref:`ZMS ` is also the best fit as there is the possibility to customize these parameters and add the support of these devices in ZMS. 
- For classical flash technology devices, :ref:`NVS ` is recommended as it has low footprint (smaller @@ -413,7 +394,7 @@ and this section provides information to help make a choice. More generally to make the right choice between NVS and ZMS, all the blockers should be first verified to make sure that the application could work with one subsystem or the other, then if both solutions could be implemented, the best choice should be based on the calculations of the -life expectancy of the device described in this section: :ref:`wear-leveling`. +life expectancy of the device described in this section: `Wear leveling <#wear-leveling>`_. Recommendations to increase performance *************************************** @@ -421,44 +402,41 @@ Recommendations to increase performance Sector size and count ===================== -- The total size of the storage partition should be well dimensioned to achieve the best - performance for ZMS. +- The total size of the storage partition should be set appropriately to achieve the best + performance with ZMS. All the information regarding the effectively available free space in ZMS can be found - in the documentation. See :ref:`free-space`. - We recommend choosing a storage partition that can hold double the size of the key-value pairs + in the documentation. See `Available space for user data <#available-space-for-user-data-key-value-pairs>`_. + It's recommended to choose a storage partition size that is double the size of the key-value pairs that will be written in the storage. -- The size of a sector needs to be dimensioned to hold the maximum data length that will be stored. - Increasing the size of a sector will slow down the garbage collection operation which will - occur less frequently. - Decreasing its size, in the opposite, will make the garbage collection operation faster - which will occur more frequently. +- The sector size needs to be set such that a sector can fit the maximum data size that will be + stored. + Increasing the sector size will slow down the garbage collection operation and make it occur + less frequently. + Decreasing its size, conversely, will make the garbage collection operation faster but also + make it occur more frequently. - For some subsystems like :ref:`Settings `, all path-value pairs are split into two ZMS entries (ATEs). - The header needed by the two entries should be accounted when computing the needed storage space. -- Using small data to store in the ZMS entries can increase the performance, as this data is - written within the entry header. + The headers needed by the two entries should be accounted for when computing the needed storage + space. +- Storing small data (<= 8 bytes) in ZMS entries can increase the performance, as this data is + written within the entry. For example, for the :ref:`Settings ` subsystem, choosing a path name that is less than or equal to 8 bytes can make reads and writes faster. -Dimensioning cache -================== +Cache size +========== -- When using ZMS API directly, the recommended cache size should be, at least, equal to - the number of different entries that will be written in the storage. +- When using the ZMS API directly, the recommendation for the cache size is to make it at least + equal to the number of different entries that will be written in the storage. - Each additional cache entry will add 8 bytes to your RAM usage. Cache size should be carefully chosen. 
- If you use ZMS through :ref:`Settings `, you have to take into account that each Settings entry is - divided into two ZMS entries. The recommended cache size should be, at least, twice the number - of Settings entries. - -Sample -****** - -A sample of how ZMS can be used is supplied in :zephyr:code-sample:`zms`. + divided into two ZMS entries. The recommendation for the cache size is to make it at least + twice the number of Settings entries. API Reference ************* -The ZMS subsystem APIs are provided by ``zms.h``: +The ZMS API is provided by ``zms.h``: .. doxygengroup:: zms_data_structures diff --git a/doc/zephyr.doxyfile.in b/doc/zephyr.doxyfile.in index 036bef82c66..dfd9251bb7b 100644 --- a/doc/zephyr.doxyfile.in +++ b/doc/zephyr.doxyfile.in @@ -980,6 +980,7 @@ INPUT = @ZEPHYR_BASE@/doc/_doxygen/mainpage.md \ @ZEPHYR_BASE@/subsys/testsuite/include/ \ @ZEPHYR_BASE@/subsys/testsuite/ztest/include/ \ @ZEPHYR_BASE@/subsys/secure_storage/include/ \ + @ZEPHYR_BASE@/subsys/fs/zms/zms_priv.h \ # This tag can be used to specify the character encoding of the source files # that Doxygen parses. Internally Doxygen uses the UTF-8 encoding. Doxygen uses diff --git a/include/zephyr/fs/zms.h b/include/zephyr/fs/zms.h index 0f0fbb82cc9..9a514d65818 100644 --- a/include/zephyr/fs/zms.h +++ b/include/zephyr/fs/zms.h @@ -80,8 +80,13 @@ struct zms_fs { * @brief Mount a ZMS file system onto the device specified in `fs`. * * @param fs Pointer to the file system. - * @retval 0 Success - * @retval -ERRNO Negative errno code on error + * + * @retval 0 on success. + * @retval -ENOTSUP if the detected file system is not ZMS. + * @retval -EPROTONOSUPPORT if the ZMS version is not supported. + * @retval -EINVAL if any of the flash parameters or the sector layout is invalid. + * @retval -ENXIO if there is a device error. + * @retval -EIO if there is a memory read/write error. */ int zms_mount(struct zms_fs *fs); @@ -89,8 +94,11 @@ int zms_mount(struct zms_fs *fs); * @brief Clear the ZMS file system from device. * * @param fs Pointer to the file system. - * @retval 0 Success - * @retval -ERRNO Negative errno code on error + * + * @retval 0 on success. + * @retval -EACCES if `fs` is not mounted. + * @retval -ENXIO if there is a device error. + * @retval -EIO if there is a memory read/write error. */ int zms_clear(struct zms_fs *fs); @@ -102,14 +110,20 @@ int zms_clear(struct zms_fs *fs); * entry and an entry with data of length 0. * * @param fs Pointer to the file system. - * @param id ID of the entry to be written - * @param data Pointer to the data to be written - * @param len Number of bytes to be written (maximum 64 KiB) + * @param id ID of the entry to be written. + * @param data Pointer to the data to be written. + * @param len Number of bytes to be written (maximum 64 KiB). * * @return Number of bytes written. On success, it will be equal to the number of bytes requested * to be written or 0. * When a rewrite of the same data already stored is attempted, nothing is written to flash, * thus 0 is returned. On error, returns negative value of error codes defined in `errno.h`. + * @retval Number of bytes written (`len` or 0) on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -ENXIO if there is a device error. + * @retval -EIO if there is a memory read/write error. + * @retval -EINVAL if `len` is invalid. + * @retval -ENOSPC if no space is left on the device. 
*/ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len); @@ -117,9 +131,12 @@ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len); * @brief Delete an entry from the file system * * @param fs Pointer to the file system. - * @param id ID of the entry to be deleted - * @retval 0 Success - * @retval -ERRNO Negative errno code on error + * @param id ID of the entry to be deleted. + * + * @retval 0 on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -ENXIO if there is a device error. + * @retval -EIO if there is a memory read/write error. */ int zms_delete(struct zms_fs *fs, uint32_t id); @@ -127,13 +144,17 @@ int zms_delete(struct zms_fs *fs, uint32_t id); * @brief Read an entry from the file system. * * @param fs Pointer to the file system. - * @param id ID of the entry to be read - * @param data Pointer to data buffer - * @param len Number of bytes to read at most + * @param id ID of the entry to be read. + * @param data Pointer to data buffer. + * @param len Number of bytes to read at most. * * @return Number of bytes read. On success, it will be equal to the number of bytes requested * to be read or less than that if the stored data has a smaller size than the requested one. * On error, returns negative value of error codes defined in `errno.h`. + * @retval Number of bytes read (> 0) on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -EIO if there is a memory read/write error. + * @retval -ENOENT if there is no entry with the given `id`. */ ssize_t zms_read(struct zms_fs *fs, uint32_t id, void *data, size_t len); @@ -141,26 +162,34 @@ ssize_t zms_read(struct zms_fs *fs, uint32_t id, void *data, size_t len); * @brief Read a history entry from the file system. * * @param fs Pointer to the file system. - * @param id ID of the entry to be read - * @param data Pointer to data buffer - * @param len Number of bytes to be read + * @param id ID of the entry to be read. + * @param data Pointer to data buffer. + * @param len Number of bytes to be read. * @param cnt History counter: 0: latest entry, 1: one before latest ... * * @return Number of bytes read. On success, it will be equal to the number of bytes requested * to be read. When the return value is larger than the number of bytes requested to read this * indicates not all bytes were read, and more data is available. On error, returns negative * value of error codes defined in `errno.h`. + * @retval Number of bytes read (> 0) on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -EIO if there is a memory read/write error. + * @retval -ENOENT if there is no entry with the given `id` and history counter. */ ssize_t zms_read_hist(struct zms_fs *fs, uint32_t id, void *data, size_t len, uint32_t cnt); /** - * @brief Gets the length of the data that is stored in an entry with a given ID + * @brief Gets the length of the data that is stored in an entry with a given `id` * * @param fs Pointer to the file system. * @param id ID of the entry whose data length to retrieve. * * @return Data length contained in the ATE. On success, it will be equal to the number of bytes * in the ATE. On error, returns negative value of error codes defined in `errno.h`. + * @retval Length of the entry with the given `id` (> 0) on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -EIO if there is a memory read/write error. + * @retval -ENOENT if there is no entry with the given `id`. 
*/ ssize_t zms_get_data_length(struct zms_fs *fs, uint32_t id); @@ -173,6 +202,9 @@ ssize_t zms_get_data_length(struct zms_fs *fs, uint32_t id); * still be written to the file system. * Calculating the free space is a time-consuming operation, especially on SPI flash. * On error, returns negative value of error codes defined in `errno.h`. + * @retval Number of free bytes (>= 0) on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -EIO if there is a memory read/write error. */ ssize_t zms_calc_free_space(struct zms_fs *fs); @@ -181,7 +213,8 @@ ssize_t zms_calc_free_space(struct zms_fs *fs); * * @param fs Pointer to the file system. * - * @return Number of free bytes. + * @retval >=0 Number of free bytes in the currently active sector + * @retval -EACCES if ZMS is still not initialized. */ size_t zms_active_sector_free_space(struct zms_fs *fs); @@ -196,7 +229,9 @@ size_t zms_active_sector_free_space(struct zms_fs *fs); * * @param fs Pointer to the file system. * - * @return 0 on success. On error, returns negative value of error codes defined in `errno.h`. + * @retval 0 on success. + * @retval -EACCES if ZMS is still not initialized. + * @retval -EIO if there is a memory read/write error. */ int zms_sector_use_next(struct zms_fs *fs); diff --git a/include/zephyr/settings/settings.h b/include/zephyr/settings/settings.h index f22f1aba118..b5b89fbe199 100644 --- a/include/zephyr/settings/settings.h +++ b/include/zephyr/settings/settings.h @@ -45,6 +45,9 @@ extern "C" { */ #define SETTINGS_EXTRA_LEN ((SETTINGS_MAX_DIR_DEPTH - 1) + 2) +/* Maximum Settings name length including separators */ +#define SETTINGS_FULL_NAME_LEN (SETTINGS_MAX_NAME_LEN + SETTINGS_EXTRA_LEN + 1) + /** * Function used to read the data from the settings storage in * h_set handler implementations. @@ -278,6 +281,25 @@ int settings_load(void); */ int settings_load_subtree(const char *subtree); +/** + * Load one serialized item from registered persistence sources. + * + * @param[in] name Name/key of the settings item. + * @param[out] buf Pointer to the buffer where the data is going to be loaded. + * @param[in] buf_len Length of the allocated buffer. + * @return actual size of value that corresponds to name on success, negative + * value on failure. + */ +ssize_t settings_load_one(const char *name, void *buf, size_t buf_len); + +/** + * Get the data length of the value associated with the key + * + * @param[in] key Name/key of the settings item. + * @return length of value if item exists, 0 if not, and negative value on failure. + */ +ssize_t settings_get_val_len(const char *key); + /** * Callback function used for direct loading. * Used by @ref settings_load_subtree_direct function. @@ -457,6 +479,26 @@ struct settings_store_itf { * load callback only on the final entity. */ + ssize_t (*csi_load_one)(struct settings_store *cs, const char *name, + char *buf, size_t buf_len); + /**< Loads one value from storage that corresponds to the key defined by name. + * + * Parameters: + * - cs - Corresponding backend handler node. + * - name - Key in string format. + * - buf - Buffer where data should be copied. + * - buf_len - Length of buf. + */ + + ssize_t (*csi_get_val_len)(struct settings_store *cs, const char *name); + /**< Gets the value's length associated with the key defined by name. + * It returns 0 if the Key/Value doesn't exist. + * + * Parameters: + * - cs - Corresponding backend handler node. + * - name - Key in string format. 
+ */ + int (*csi_save_start)(struct settings_store *cs); /**< Handler called before an export operation. * diff --git a/samples/subsys/fs/zms/Kconfig b/samples/subsys/fs/zms/Kconfig new file mode 100644 index 00000000000..0184fbdc4ef --- /dev/null +++ b/samples/subsys/fs/zms/Kconfig @@ -0,0 +1,16 @@ +# Copyright 2025 NXP +# SPDX-License-Identifier: Apache-2.0 + +mainmenu "ZMS sample configuration" + +config MAX_ITERATIONS + int "Number of iterations for which the sample writes the whole set of data" + default 300 + range 1 300 + +config DELETE_ITERATION + int "Number of iterations after which the sample deletes the whole set of data and verifies the deletion" + default 10 + range 1 MAX_ITERATIONS + +source "Kconfig.zephyr" diff --git a/samples/subsys/fs/zms/README.rst b/samples/subsys/fs/zms/README.rst index f05d1fa0838..98deead06d8 100644 --- a/samples/subsys/fs/zms/README.rst +++ b/samples/subsys/fs/zms/README.rst @@ -20,7 +20,7 @@ Overview A loop is executed where we mount the storage system, and then write all set of data. - Each DELETE_ITERATION period, we delete all set of data and verify that it has been deleted. + Every CONFIG_DELETE_ITERATION iterations, we delete the whole set of data and verify that it has been deleted. We generate as well incremented ID/value pairs, we store them until storage is full, then we delete them and verify that storage is empty. diff --git a/samples/subsys/fs/zms/prj.conf b/samples/subsys/fs/zms/prj.conf index 343c5021899..195027ea287 100644 --- a/samples/subsys/fs/zms/prj.conf +++ b/samples/subsys/fs/zms/prj.conf @@ -3,3 +3,4 @@ CONFIG_FLASH_MAP=y CONFIG_ZMS=y CONFIG_LOG=y +CONFIG_LOG_BLOCK_IN_THREAD=y diff --git a/samples/subsys/fs/zms/sample.yaml b/samples/subsys/fs/zms/sample.yaml index 802dabcf0f1..c28770ec78a 100644 --- a/samples/subsys/fs/zms/sample.yaml +++ b/samples/subsys/fs/zms/sample.yaml @@ -4,7 +4,11 @@ sample: tests: sample.zms.basic: tags: zms - depends_on: zms platform_allow: - qemu_x86 - - native_posix + - native_sim + harness: console + harness_config: + type: one_line + regex: + - "Sample code finished Successfully" diff --git a/samples/subsys/fs/zms/src/main.c b/samples/subsys/fs/zms/src/main.c index 959d5ac5f3e..a9615823608 100644 --- a/samples/subsys/fs/zms/src/main.c +++ b/samples/subsys/fs/zms/src/main.c @@ -26,9 +26,6 @@ static struct zms_fs fs; #define CNT_ID 2 #define LONG_DATA_ID 3 -#define MAX_ITERATIONS 300 -#define DELETE_ITERATION 10 - static int delete_and_verify_items(struct zms_fs *fs, uint32_t id) { int rc = 0; @@ -112,7 +109,7 @@ int main(void) fs.sector_size = info.size; fs.sector_count = 3U; - for (i = 0; i < MAX_ITERATIONS; i++) { + for (i = 0; i < CONFIG_MAX_ITERATIONS; i++) { rc = zms_mount(&fs); if (rc) { printk("Storage Init failed, rc=%d\n", rc); @@ -164,7 +161,8 @@ int main(void) rc = zms_read(&fs, CNT_ID, &i_cnt, sizeof(i_cnt)); if (rc > 0) { /* item was found, show it */ printk("Id: %d, loop_cnt: %u\n", CNT_ID, i_cnt); - if (i_cnt != (i - 1)) { + if ((i > 0) && (i_cnt != (i - 1))) { + printk("Error: loop_cnt %u must be %d\n", i_cnt, i - 1); break; } } @@ -195,8 +193,8 @@ int main(void) break; } - /* Each DELETE_ITERATION delete all basic items */ - if (!(i % DELETE_ITERATION) && (i)) { + /* Every CONFIG_DELETE_ITERATION iterations, delete all basic items */ + if (!(i % CONFIG_DELETE_ITERATION) && (i)) { rc = delete_basic_items(&fs); if (rc) { break; @@ -204,7 +202,7 @@ int main(void) } } - if (i != MAX_ITERATIONS) { + if (i != CONFIG_MAX_ITERATIONS) { printk("Error: Something went wrong at iteration %u rc=%d\n", i, rc); return
0; } diff --git a/subsys/fs/zms/CMakeLists.txt b/subsys/fs/zms/CMakeLists.txt index b6db8a3f57f..91e4651c3f6 100644 --- a/subsys/fs/zms/CMakeLists.txt +++ b/subsys/fs/zms/CMakeLists.txt @@ -1,3 +1,3 @@ -#SPDX-License-Identifier: Apache-2.0 +# SPDX-License-Identifier: Apache-2.0 zephyr_sources(zms.c) diff --git a/subsys/fs/zms/Kconfig b/subsys/fs/zms/Kconfig index e1312c57fd8..c2b1d6f0fef 100644 --- a/subsys/fs/zms/Kconfig +++ b/subsys/fs/zms/Kconfig @@ -1,14 +1,16 @@ -#Copyright (c) 2024 BayLibre SAS +# Copyright (c) 2024 BayLibre SAS -#SPDX-License-Identifier: Apache-2.0 +# SPDX-License-Identifier: Apache-2.0 -#Zephyr Memory Storage ZMS +# Zephyr Memory Storage ZMS config ZMS bool "Zephyr Memory Storage" select CRC help - Enable support of Zephyr Memory Storage. + Enable Zephyr Memory Storage, which is a key-value storage system designed to work with + all types of non-volatile storage technologies. + It supports classical on-chip NOR flash as well as new technologies like RRAM and MRAM. if ZMS @@ -20,19 +22,16 @@ config ZMS_LOOKUP_CACHE table entry (ATE) for all ZMS IDs that fall into that cache position. config ZMS_LOOKUP_CACHE_SIZE - int "ZMS Storage lookup cache size" + int "ZMS lookup cache size" default 128 range 1 65536 depends on ZMS_LOOKUP_CACHE help - Number of entries in ZMS lookup cache. - It is recommended that it should be a power of 2. - Every additional entry in cache will add 8 bytes in RAM + Number of entries in the ZMS lookup cache. + Every additional entry in cache will use 8 bytes of RAM. config ZMS_DATA_CRC - bool "ZMS DATA CRC" - help - Enables DATA CRC + bool "ZMS data CRC" config ZMS_CUSTOMIZE_BLOCK_SIZE bool "Customize the size of the buffer used internally for reads and writes" @@ -40,8 +39,8 @@ config ZMS_CUSTOMIZE_BLOCK_SIZE ZMS uses an internal buffer to read/write and compare stored data. Increasing the size of this buffer should be done carefully in order to not overflow the stack. - Increasing this buffer means as well that ZMS could work with storage devices - that have larger write-block-size which decreases ZMS performance + Increasing it makes ZMS able to work with storage devices + that have a larger `write-block-size` (which decreases the performance of ZMS). config ZMS_CUSTOM_BLOCK_SIZE int "ZMS internal buffer size" @@ -52,7 +51,7 @@ config ZMS_CUSTOM_BLOCK_SIZE config ZMS_LOOKUP_CACHE_FOR_SETTINGS bool "ZMS Storage lookup cache optimized for settings" - depends on ZMS_LOOKUP_CACHE + depends on ZMS_LOOKUP_CACHE && SETTINGS_ZMS help Use the lookup cache hash function that results in the least number of collissions and, in turn, the best ZMS performance provided that the ZMS diff --git a/subsys/fs/zms/zms.c b/subsys/fs/zms/zms.c index 99096ab0c17..4336d90805b 100644 --- a/subsys/fs/zms/zms.c +++ b/subsys/fs/zms/zms.c @@ -12,7 +12,6 @@ #include #include "zms_priv.h" #ifdef CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS -#include #include #endif @@ -29,62 +28,47 @@ static int zms_ate_valid_different_sector(struct zms_fs *fs, const struct zms_at #ifdef CONFIG_ZMS_LOOKUP_CACHE -#ifdef CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS - static inline size_t zms_lookup_cache_pos(uint32_t id) { + uint32_t hash = id; + +#ifdef CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS /* - * 1. The ZMS settings backend uses up to (ZMS_NAME_ID_OFFSET - 1) ZMS IDs to - store keys and equal number of ZMS IDs to store values. - * 2. For each key-value pair, the value is stored at ZMS ID greater by exactly - * ZMS_NAME_ID_OFFSET than ZMS ID that holds the key. - * 3. 
The backend tries to minimize the range of ZMS IDs used to store keys. - * That is, ZMS IDs are allocated sequentially, and freed ZMS IDs are reused - * before allocating new ones. + * 1. The Settings subsystem stores the name ID and the linked list node ID + * with only one bit of difference, at BIT(0). + * 2. The Settings subsystem also stores the name ID and the data ID in two + * different ZMS entries at an exact offset of ZMS_DATA_ID_OFFSET. * * Therefore, to assure the least number of collisions in the lookup cache, - * the least significant bit of the hash indicates whether the given ZMS ID - * represents a key or a value, and remaining bits of the hash are set to - * the ordinal number of the key-value pair. Consequently, the hash function - * provides the following mapping: - * - * 1st settings key => hash 0 - * 1st settings value => hash 1 - * 2nd settings key => hash 2 - * 2nd settings value => hash 3 - * ... + * BIT(0) of the hash indicates whether the given ZMS ID represents a + * linked list entry or not, BIT(1) indicates whether the ZMS ID is a name + * or data, and the remaining bits of the hash are set to a truncated part of the + * original hash generated by Settings. */ - BUILD_ASSERT(IS_POWER_OF_TWO(ZMS_NAMECNT_ID), "ZMS_NAMECNT_ID is not power of 2"); - BUILD_ASSERT(IS_POWER_OF_TWO(ZMS_NAME_ID_OFFSET), "ZMS_NAME_ID_OFFSET is not power of 2"); - - uint32_t key_value_bit; - uint32_t key_value_ord; - key_value_bit = (id >> LOG2(ZMS_NAME_ID_OFFSET)) & 1; - key_value_ord = id & (ZMS_NAME_ID_OFFSET - 1); + BUILD_ASSERT(IS_POWER_OF_TWO(ZMS_DATA_ID_OFFSET), "ZMS_DATA_ID_OFFSET is not power of 2"); - return ((key_value_ord << 1) | key_value_bit) % CONFIG_ZMS_LOOKUP_CACHE_SIZE; -} + uint32_t key_value_bit; + uint32_t key_value_hash; + uint32_t key_value_ll; -#else /* CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS */ - -static inline size_t zms_lookup_cache_pos(uint32_t id) -{ - uint32_t hash; + key_value_bit = (id >> LOG2(ZMS_DATA_ID_OFFSET)) & 1; + key_value_hash = (id & ZMS_HASH_MASK) >> (CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS + 1); + key_value_ll = id & BIT(0); + hash = (key_value_hash << 2) | (key_value_bit << 1) | key_value_ll; +#else /* 32-bit integer hash function found by https://github.com/skeeto/hash-prospector. */ - hash = id; hash ^= hash >> 16; hash *= 0x7feb352dU; hash ^= hash >> 15; hash *= 0x846ca68bU; hash ^= hash >> 16; +#endif /* CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS */ return hash % CONFIG_ZMS_LOOKUP_CACHE_SIZE; } -#endif /* CONFIG_ZMS_LOOKUP_CACHE_FOR_SETTINGS */ - static int zms_lookup_cache_rebuild(struct zms_fs *fs) { int rc; @@ -1146,7 +1130,7 @@ static int zms_init(struct zms_fs *fs) /* Let's check that we support this ZMS version */ if (ZMS_GET_VERSION(empty_ate.metadata) != ZMS_DEFAULT_VERSION) { LOG_ERR("ZMS Version is not supported"); - rc = -ENOEXEC; + rc = -EPROTONOSUPPORT; goto end; } } @@ -1170,7 +1154,7 @@ static int zms_init(struct zms_fs *fs) } /* all sectors are closed, and zms magic number not found. 
This is not a zms fs */ if ((closed_sectors == fs->sector_count) && !zms_magic_exist) { - rc = -EDEADLK; + rc = -ENOTSUP; goto end; } /* TODO: add a recovery mechanism here if the ZMS magic number exist but all @@ -1202,7 +1186,7 @@ static int zms_init(struct zms_fs *fs) /* Let's check the version */ if (ZMS_GET_VERSION(empty_ate.metadata) != ZMS_DEFAULT_VERSION) { LOG_ERR("ZMS Version is not supported"); - rc = -ENOEXEC; + rc = -EPROTONOSUPPORT; goto end; } } @@ -1437,8 +1421,6 @@ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len) { int rc; size_t data_size; - uint64_t wlk_addr; - uint64_t rd_addr; uint32_t gc_count; uint32_t required_space = 0U; /* no space, appropriate for delete ate */ @@ -1459,19 +1441,19 @@ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len) return -EINVAL; } +#ifdef CONFIG_ZMS_NO_DOUBLE_WRITE /* find latest entry with same id */ #ifdef CONFIG_ZMS_LOOKUP_CACHE - wlk_addr = fs->lookup_cache[zms_lookup_cache_pos(id)]; + uint64_t wlk_addr = fs->lookup_cache[zms_lookup_cache_pos(id)]; if (wlk_addr == ZMS_LOOKUP_CACHE_NO_ADDR) { goto no_cached_entry; } #else - wlk_addr = fs->ate_wra; + uint64_t wlk_addr = fs->ate_wra; -#endif - rd_addr = wlk_addr; +#endif /* CONFIG_ZMS_LOOKUP_CACHE */ + uint64_t rd_addr = wlk_addr; -#ifdef CONFIG_ZMS_NO_DOUBLE_WRITE /* Search for a previous valid ATE with the same ID */ struct zms_ate wlk_ate; int prev_found = zms_find_ate_with_id(fs, id, wlk_addr, fs->ate_wra, &wlk_ate, &rd_addr); @@ -1515,11 +1497,11 @@ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len) return 0; } } -#endif - #ifdef CONFIG_ZMS_LOOKUP_CACHE no_cached_entry: -#endif +#endif /* CONFIG_ZMS_LOOKUP_CACHE */ +#endif /* CONFIG_ZMS_NO_DOUBLE_WRITE */ + /* calculate required space if the entry contains data */ if (data_size) { /* Leave space for delete ate */ diff --git a/subsys/fs/zms/zms_priv.h b/subsys/fs/zms/zms_priv.h index 428ff6babca..e2cbf5f08bb 100644 --- a/subsys/fs/zms/zms_priv.h +++ b/subsys/fs/zms/zms_priv.h @@ -8,15 +8,11 @@ #ifndef __ZMS_PRIV_H_ #define __ZMS_PRIV_H_ -#ifdef __cplusplus -extern "C" { -#endif - /* - * MASKS AND SHIFT FOR ADDRESSES - * an address in zms is an uint64_t where: - * high 4 bytes represent the sector number - * low 4 bytes represent the offset in a sector + * MASKS AND SHIFT FOR ADDRESSES. + * An address in ZMS is a uint64_t where: + * - high 4 bytes represent the sector number + * - low 4 bytes represent the offset in a sector */ #define ADDR_SECT_MASK GENMASK64(63, 32) #define ADDR_SECT_SHIFT 32 @@ -44,34 +40,40 @@ extern "C" { #define ZMS_INVALID_SECTOR_NUM -1 #define ZMS_DATA_IN_ATE_SIZE 8 +/** + * @ingroup zms_data_structures + * ZMS Allocation Table Entry (ATE) structure + */ struct zms_ate { - uint8_t crc8; /* crc8 check of the entry */ - uint8_t cycle_cnt; /* cycle counter for non erasable devices */ - uint16_t len; /* data len within sector */ - uint32_t id; /* data id */ + /** crc8 check of the entry */ + uint8_t crc8; + /** cycle counter for non erasable devices */ + uint8_t cycle_cnt; + /** data len within sector */ + uint16_t len; + /** data id */ + uint32_t id; union { - uint8_t data[8]; /* used to store small size data */ + /** data field used to store small-sized data */ + uint8_t data[8]; struct { - uint32_t offset; /* data offset within sector */ + /** data offset within sector */ + uint32_t offset; union { - uint32_t data_crc; /* - * crc for data: The data CRC is checked only - * when the whole data of the element is read. 
- * The data CRC is not checked for a partial - * read, as it is computed for the complete - * set of data. - */ - uint32_t metadata; /* - * Used to store metadata information - * such as storage version. - */ + /** + * crc for data: The data CRC is checked only when the whole data + * of the element is read. + * The data CRC is not checked for a partial read, as it is computed + * for the complete set of data. + */ + uint32_t data_crc; + /** + * Used to store metadata information such as storage version. + */ + uint32_t metadata; }; }; }; } __packed; -#ifdef __cplusplus -} -#endif - #endif /* __ZMS_PRIV_H_ */ diff --git a/subsys/settings/Kconfig b/subsys/settings/Kconfig index 48eacf82a8c..fdb3c4bb3ce 100644 --- a/subsys/settings/Kconfig +++ b/subsys/settings/Kconfig @@ -43,25 +43,26 @@ choice SETTINGS_BACKEND config SETTINGS_ZMS bool "ZMS (Zephyr Memory Storage)" depends on ZMS + select SYS_HASH_FUNC32 help Use ZMS as settings storage backend. if SETTINGS_ZMS -config SETTINGS_ZMS_NAME_CACHE - bool "ZMS name lookup cache" - select SYS_HASH_FUNC32 +config SETTINGS_ZMS_LL_CACHE + bool "ZMS linked list lookup cache" help - Enable ZMS name lookup cache, used to reduce the Settings name - lookup time. + Enable the ZMS linked list lookup cache, used to reduce the + Settings load time by having most linked list elements already + in cache. -config SETTINGS_ZMS_NAME_CACHE_SIZE - int "ZMS name lookup cache size" +config SETTINGS_ZMS_LL_CACHE_SIZE + int "ZMS linked list lookup cache size" default 128 range 1 $(UINT32_MAX) - depends on SETTINGS_ZMS_NAME_CACHE + depends on SETTINGS_ZMS_LL_CACHE help - Number of entries in Settings ZMS name cache. + Number of entries in the Settings ZMS linked list cache. endif # SETTINGS_ZMS @@ -166,13 +167,49 @@ config SETTINGS_ZMS_SECTOR_SIZE_MULT The sector size to use for the ZMS settings area as a multiple of FLASH_ERASE_BLOCK_SIZE. +config SETTINGS_ZMS_CUSTOM_SECTOR_COUNT + bool "Customize the sector count of the ZMS settings partition" + depends on SETTINGS_ZMS + help + The number of sectors used by default is the maximum value that can + fit in the settings storage partition. + Enabling this option allows customizing the number of used sectors. + config SETTINGS_ZMS_SECTOR_COUNT int "Sector count of the ZMS settings area" default 8 - depends on SETTINGS_ZMS + depends on SETTINGS_ZMS && SETTINGS_ZMS_CUSTOM_SECTOR_COUNT help Number of sectors used for the ZMS settings area +config SETTINGS_ZMS_MAX_COLLISIONS_BITS + int "Number of bits reserved for handling hash collisions" + default 4 + depends on SETTINGS_ZMS + help + The maximum number of hash collisions needs to be sized appropriately + depending on the data that is going to be stored in ZMS and its hash + values. + +config SETTINGS_ZMS_NO_LL_DELETE + bool "Disable deletion of linked list hashes" + help + For some applications, the Settings delete operation takes too long + with ZMS because of the linked list update. + As a tradeoff for performance, the linked list is not updated. As a + result, some nodes will be unused and will occupy some space in the + storage. + These nodes will be used again when the same Settings element that has + been deleted is created again. + +config SETTINGS_ZMS_LOAD_SUBTREE_PATH + bool "Load only subtree path if provided" + help + First loads the key defined by the subtree path. + If the callback handler returns a zero value, loading continues + with all the keys under that subtree path. + If the callback handler returns a non-zero value, loading + returns immediately. 
+ config SETTINGS_SHELL bool "Settings shell" depends on SHELL diff --git a/subsys/settings/include/settings/settings_zms.h b/subsys/settings/include/settings/settings_zms.h index dd9fb3aba12..76ca92f6d11 100644 --- a/subsys/settings/include/settings/settings_zms.h +++ b/subsys/settings/include/settings/settings_zms.h @@ -1,5 +1,4 @@ -/* - * Copyright (c) 2024 BayLibre SAS +/* Copyright (c) 2024 BayLibre SAS * * SPDX-License-Identifier: Apache-2.0 */ @@ -23,40 +22,75 @@ extern "C" { * difference between name and value ID is constant and equal to * ZMS_NAME_ID_OFFSET. * - * Setting's name entries start from ZMS_NAMECNT_ID + 1. - * The entry with ID == ZMS_NAMECNT_ID is used to store the largest name ID in use. + * A setting's name is hashed into 29 bits minus hash_collision_bits. + * The 2 MSB_bits always have the value 10, the LL_bit for a name's hash is 0, + * and the width of hash_collision_bits is configurable through CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS. + * The resulting 32 bits are the ZMS_ID of the setting's name. + * If we detect a collision between ZMS_IDs, we increment the value within hash_collision_bits + * until we find a free ZMS_ID. + * Separately, we store a linked list using the setting's name ZMS_ID but with the LSB set to 1. * - * Deleted records will not be found, only the last record will be read. + * The linked list is used to maintain a relation between all ZMS_IDs. This is necessary to load + * all settings at initialization. + * The linked list contains at least a header followed by multiple linked list elements that + * we can refer to as LL_x (where x is the order of that element in that list). + * This is a representation of the linked list as stored in the storage: + * LL_header <--> LL_0 <--> LL_1 <--> LL_2. + * The "next_hash" pointer of each LL element refers to the next element in the linked list. + * The "previous_hash" pointer refers to the previous element in the linked list. + * + * The bit representation of the 32-bit ZMS_ID is the following: + * -------------------------------------------------------------- + * | MSB_bits | hash (truncated) | hash_collision_bits | LL_bit | + * -------------------------------------------------------------- + * Where: + * MSB_bits (2 bits width) : = 10 for Name IDs + * = 11 for Data IDs + * hash (29 bits - hash_collision_bits) : truncated hash obtained from sys_hash32 + * hash_collision_bits (configurable width) : used to handle hash collisions + * LL_bit : = 0 when this is a name's ZMS_ID + * = 1 when this is the linked list ZMS_ID corresponding to the name + * + * If a settings element is deleted, it won't be found. */
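To make the ID layout documented above concrete, here is a standalone sketch that derives a name ZMS_ID, its linked list node ID, and its data ID from a 32-bit hash, assuming the default CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS of 4 (the backend itself uses sys_hash32() together with the macros defined just below; the hash value here is a made-up stand-in):

    /* Sketch only: mirrors the documented bit layout, assuming 4 collision bits. */
    #include <stdint.h>
    #include <stdio.h>

    #define COLLISION_BITS 4
    /* The truncated hash occupies bits (COLLISION_BITS + 1)..29. */
    #define HASH_FIELD_MASK \
        ((((uint32_t)1 << (29 - COLLISION_BITS)) - 1) << (COLLISION_BITS + 1))

    static uint32_t name_id(uint32_t hash32, uint32_t collision)
    {
        uint32_t id = hash32 & HASH_FIELD_MASK; /* truncated hash field */

        id |= collision << 1;    /* hash_collision_bits field */
        id |= (uint32_t)1 << 31; /* MSB_bits = 10: bit 31 set, bit 30 clear */
        return id;               /* LL_bit (bit 0) stays 0 for a name ID */
    }

    int main(void)
    {
        uint32_t hash32 = 0x12345678;      /* stand-in for sys_hash32(name, len) */
        uint32_t name = name_id(hash32, 0);
        uint32_t ll_node = name | 1;       /* LL_bit = 1: linked list node ID */
        uint32_t data = name + 0x40000000; /* + ZMS_DATA_ID_OFFSET: MSB_bits = 11 */

        printf("name 0x%08x ll 0x%08x data 0x%08x\n",
               (unsigned int)name, (unsigned int)ll_node, (unsigned int)data);
        return 0;
    }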
-#define ZMS_NAMECNT_ID 0x80000000 -#define ZMS_NAME_ID_OFFSET 0x40000000 + +#define ZMS_LL_HEAD_HASH_ID 0x80000000 +#define ZMS_DATA_ID_OFFSET 0x40000000 +#define ZMS_HASH_MASK GENMASK(29, CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS + 1) +#define ZMS_COLLISIONS_MASK GENMASK(CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS, 1) +#define ZMS_HASH_TOTAL_MASK GENMASK(29, 1) +#define ZMS_MAX_COLLISIONS (BIT(CONFIG_SETTINGS_ZMS_MAX_COLLISIONS_BITS) - 1) + +/* Some useful helper macros */ +#define ZMS_NAME_ID_FROM_LL_NODE(x) ((x) & ~BIT(0)) +#define ZMS_LL_NODE_FROM_NAME_ID(x) ((x) | BIT(0)) +#define ZMS_UPDATE_COLLISION_NUM(x, y) \ + (((x) & ~ZMS_COLLISIONS_MASK) | (((y) << 1) & ZMS_COLLISIONS_MASK)) +#define ZMS_COLLISION_NUM(x) (((x) & ZMS_COLLISIONS_MASK) >> 1) +#define ZMS_NAME_ID_FROM_HASH(x) (((x) & ZMS_HASH_TOTAL_MASK) | BIT(31)) +#define ZMS_DATA_ID_FROM_HASH(x) (ZMS_NAME_ID_FROM_HASH(x) + ZMS_DATA_ID_OFFSET) +#define ZMS_DATA_ID_FROM_NAME(x) ((x) + ZMS_DATA_ID_OFFSET) +#define ZMS_DATA_ID_FROM_LL_NODE(x) (ZMS_NAME_ID_FROM_LL_NODE(x) + ZMS_DATA_ID_OFFSET) + +struct settings_hash_linked_list { + uint32_t previous_hash; + uint32_t next_hash; +}; struct settings_zms { struct settings_store cf_store; struct zms_fs cf_zms; - uint32_t last_name_id; const struct device *flash_dev; -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - struct { - uint32_t name_hash; - uint32_t name_id; - } cache[CONFIG_SETTINGS_ZMS_NAME_CACHE_SIZE]; - - uint32_t cache_next; - uint32_t cache_total; - bool loaded; -#endif +#if CONFIG_SETTINGS_ZMS_LL_CACHE + struct settings_hash_linked_list ll_cache[CONFIG_SETTINGS_ZMS_LL_CACHE_SIZE]; + uint32_t ll_cache_next; + bool ll_has_changed; +#endif /* CONFIG_SETTINGS_ZMS_LL_CACHE */ + uint32_t last_hash_id; + uint32_t second_to_last_hash_id; + uint8_t hash_collision_num; }; -/* register zms to be a source of settings */ -int settings_zms_src(struct settings_zms *cf); - -/* register zms to be the destination of settings */ -int settings_zms_dst(struct settings_zms *cf); - -/* Initialize a zms backend. */ -int settings_zms_backend_init(struct settings_zms *cf); - #ifdef __cplusplus } #endif diff --git a/subsys/settings/src/settings_store.c b/subsys/settings/src/settings_store.c index b697f993d94..b8f8d847541 100644 --- a/subsys/settings/src/settings_store.c +++ b/subsys/settings/src/settings_store.c @@ -88,6 +88,122 @@ int settings_load_subtree_direct( return 0; } +struct default_param { + void *buf; + size_t buf_len; + size_t *val_len; +}; + +/* Default callback to set a Key/Value pair */ +static int settings_set_default_cb(const char *name, size_t len, settings_read_cb read_cb, + void *cb_arg, void *param) +{ + int rc = 0; + const char *next; + size_t name_len; + struct default_param *dest = (struct default_param *)param; + + name_len = settings_name_next(name, &next); + if (name_len == 0) { + rc = read_cb(cb_arg, dest->buf, MIN(dest->buf_len, len)); + *dest->val_len = len; + } + + return rc; +} + +/* Default callback to get the value's length for the key defined by name. Returns 0 if the + * key doesn't exist. + */ +static int settings_get_val_len_default_cb(const char *name, size_t len, + [[maybe_unused]] settings_read_cb read_cb, + [[maybe_unused]] void *cb_arg, void *param) +{ + const char *next; + size_t name_len; + size_t *val_len = (size_t *)param; + + name_len = settings_name_next(name, &next); + if (name_len == 0) { + *val_len = len; + } + + return 0; +} + +/* Gets the value's size if the key defined by name is in the persistent storage; + * if the key is not found, returns 0.
+ */ +ssize_t settings_get_val_len(const char *name) +{ + struct settings_store *cs; + int rc = 0; + size_t val_len = 0; + + /* + * For every config store that supports this function, + * get the value's length. + */ + k_mutex_lock(&settings_lock, K_FOREVER); + SYS_SLIST_FOR_EACH_CONTAINER(&settings_load_srcs, cs, cs_next) { + if (cs->cs_itf->csi_get_val_len) { + val_len = cs->cs_itf->csi_get_val_len(cs, name); + } else { + const struct settings_load_arg arg = { + .subtree = name, + .cb = &settings_get_val_len_default_cb, + .param = &val_len + }; + rc = cs->cs_itf->csi_load(cs, &arg); + } + } + k_mutex_unlock(&settings_lock); + + if (rc >= 0) { + return val_len; + } + + return rc; +} + +/* Load a single key/value from persistent storage */ +ssize_t settings_load_one(const char *name, void *buf, size_t buf_len) +{ + struct settings_store *cs; + size_t val_len = 0; + int rc = 0; + + /* + * For every config store that supports this function, + * load the config value. + */ + k_mutex_lock(&settings_lock, K_FOREVER); + SYS_SLIST_FOR_EACH_CONTAINER(&settings_load_srcs, cs, cs_next) { + if (cs->cs_itf->csi_load_one) { + rc = cs->cs_itf->csi_load_one(cs, name, (char *)buf, buf_len); + val_len = (rc >= 0) ? rc : 0; + } else { + struct default_param param = { + .buf = buf, + .buf_len = buf_len, + .val_len = &val_len + }; + const struct settings_load_arg arg = { + .subtree = name, + .cb = &settings_set_default_cb, + .param = &param + }; + rc = cs->cs_itf->csi_load(cs, &arg); + } + } + k_mutex_unlock(&settings_lock); + + if (rc >= 0) { + return val_len; + } + return rc; +} + /* * Append a single value to persisted config. Don't store duplicate value. */ diff --git a/subsys/settings/src/settings_zms.c b/subsys/settings/src/settings_zms.c index 0f1164e4ea1..b3250074486 100644 --- a/subsys/settings/src/settings_zms.c +++ b/subsys/settings/src/settings_zms.c @@ -1,9 +1,11 @@ -/* - * Copyright (c) 2024 BayLibre SAS +/* Copyright (c) 2024 BayLibre SAS * * SPDX-License-Identifier: Apache-2.0 */ +#undef _POSIX_C_SOURCE +#define _POSIX_C_SOURCE 200809L /* for strnlen() */ + #include #include @@ -28,13 +30,19 @@ struct settings_zms_read_fn_arg { }; static int settings_zms_load(struct settings_store *cs, const struct settings_load_arg *arg); +static ssize_t settings_zms_load_one(struct settings_store *cs, const char *name, char *buf, + size_t buf_len); static int settings_zms_save(struct settings_store *cs, const char *name, const char *value, size_t val_len); static void *settings_zms_storage_get(struct settings_store *cs); +static int settings_zms_get_last_hash_ids(struct settings_zms *cf); +static ssize_t settings_zms_get_val_len(struct settings_store *cs, const char *name); static struct settings_store_itf settings_zms_itf = {.csi_load = settings_zms_load, + .csi_load_one = settings_zms_load_one, .csi_save = settings_zms_save, - .csi_storage_get = settings_zms_storage_get}; + .csi_storage_get = settings_zms_storage_get, + .csi_get_val_len = settings_zms_get_val_len}; static ssize_t settings_zms_read_fn(void *back_end, void *data, size_t len) { @@ -45,7 +53,7 @@ static ssize_t settings_zms_read_fn(void *back_end, void *data, size_t len) return zms_read(rd_fn_arg->fs, rd_fn_arg->id, data, len); } -int settings_zms_src(struct settings_zms *cf) +static int settings_zms_src(struct settings_zms *cf) { cf->cf_store.cs_itf = &settings_zms_itf; settings_src_register(&cf->cf_store); return 0; } @@ -53,7 +61,7 @@ -int settings_zms_dst(struct settings_zms *cf) +static int
settings_zms_dst(struct settings_zms *cf) { cf->cf_store.cs_itf = &settings_zms_itf; settings_dst_register(&cf->cf_store); @@ -61,137 +69,328 @@ int settings_zms_dst(struct settings_zms *cf) return 0; } -#if CONFIG_SETTINGS_ZMS_NAME_CACHE -#define SETTINGS_ZMS_CACHE_OVFL(cf) ((cf)->cache_total > ARRAY_SIZE((cf)->cache)) - -static void settings_zms_cache_add(struct settings_zms *cf, const char *name, uint32_t name_id) +#ifndef CONFIG_SETTINGS_ZMS_NO_LL_DELETE +static int settings_zms_unlink_ll_node(struct settings_zms *cf, uint32_t name_hash) { - uint32_t name_hash = sys_hash32(name, strlen(name)); + int rc = 0; + struct settings_hash_linked_list settings_element; + struct settings_hash_linked_list settings_update_element; + + /* let's update the linked list */ + rc = zms_read(&cf->cf_zms, ZMS_LL_NODE_FROM_NAME_ID(name_hash), &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + + /* update the previous element */ + if (settings_element.previous_hash) { + rc = zms_read(&cf->cf_zms, settings_element.previous_hash, &settings_update_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + if (!settings_element.next_hash) { + /* we are deleting the last element of the linked list, + * let's update the second_to_last_hash_id + */ + cf->second_to_last_hash_id = settings_update_element.previous_hash; + } + settings_update_element.next_hash = settings_element.next_hash; + rc = zms_write(&cf->cf_zms, settings_element.previous_hash, + &settings_update_element, sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + } + + /* Now delete the current linked list element */ + rc = zms_delete(&cf->cf_zms, ZMS_LL_NODE_FROM_NAME_ID(name_hash)); + if (rc < 0) { + return rc; + } - cf->cache[cf->cache_next].name_hash = name_hash; - cf->cache[cf->cache_next++].name_id = name_id; + /* update the next element */ + if (settings_element.next_hash) { + rc = zms_read(&cf->cf_zms, settings_element.next_hash, &settings_update_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + settings_update_element.previous_hash = settings_element.previous_hash; + rc = zms_write(&cf->cf_zms, settings_element.next_hash, &settings_update_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + if (!settings_update_element.next_hash) { + /* update second_to_last_hash_id */ + cf->second_to_last_hash_id = settings_element.previous_hash; + } + } else { + /* we are deleting the last element of the linked list + * let's update the last_hash_id. 
+ */ + cf->last_hash_id = settings_element.previous_hash; + } - cf->cache_next %= CONFIG_SETTINGS_ZMS_NAME_CACHE_SIZE; + return 0; } +#endif /* CONFIG_SETTINGS_ZMS_NO_LL_DELETE */ -static uint32_t settings_zms_cache_match(struct settings_zms *cf, const char *name, char *rdname, - size_t len) +static int settings_zms_delete(struct settings_zms *cf, uint32_t name_hash) { - uint32_t name_hash = sys_hash32(name, strlen(name)); - int rc; + int rc = 0; - for (int i = 0; i < CONFIG_SETTINGS_ZMS_NAME_CACHE_SIZE; i++) { - if (cf->cache[i].name_hash != name_hash) { - continue; - } + rc = zms_delete(&cf->cf_zms, name_hash); + if (rc >= 0) { + rc = zms_delete(&cf->cf_zms, ZMS_DATA_ID_FROM_NAME(name_hash)); + } + if (rc < 0) { + return rc; + } + +#ifndef CONFIG_SETTINGS_ZMS_NO_LL_DELETE +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + cf->ll_has_changed = true; +#endif + rc = settings_zms_unlink_ll_node(cf, name_hash); +#endif /* CONFIG_SETTINGS_ZMS_NO_LL_DELETE */ - if (cf->cache[i].name_id <= ZMS_NAMECNT_ID) { + return rc; +} + +#ifdef CONFIG_SETTINGS_ZMS_LOAD_SUBTREE_PATH +/* First loads the key defined by the name found in the "subtree" root. + * If the key is not found, or if further keys under the same subtree are needed + * by the caller, returns 0. + */ +static int settings_zms_load_subtree(struct settings_store *cs, const struct settings_load_arg *arg) +{ + struct settings_zms *cf = CONTAINER_OF(cs, struct settings_zms, cf_store); + struct settings_zms_read_fn_arg read_fn_arg; + char name[SETTINGS_FULL_NAME_LEN]; + ssize_t rc1; + ssize_t rc2; + uint32_t name_hash; + + name_hash = sys_hash32(arg->subtree, strnlen(arg->subtree, SETTINGS_FULL_NAME_LEN)) & + ZMS_HASH_MASK; + for (int i = 0; i <= cf->hash_collision_num; i++) { + name_hash = ZMS_UPDATE_COLLISION_NUM(name_hash, i); + /* Get the name entry from ZMS */ + rc1 = zms_read(&cf->cf_zms, ZMS_NAME_ID_FROM_HASH(name_hash), &name, + sizeof(name) - 1); + /* get the length of data and verify that it exists */ + rc2 = zms_get_data_length(&cf->cf_zms, ZMS_DATA_ID_FROM_HASH(name_hash)); + if ((rc1 <= 0) || (rc2 <= 0)) { + /* Name or data doesn't exist */ continue; } - - rc = zms_read(&cf->cf_zms, cf->cache[i].name_id, rdname, len); - if (rc < 0) { + /* Found a name, this might not include a trailing \0 */ + name[rc1] = '\0'; + if (strcmp(arg->subtree, name)) { + /* Names are not equal, let's continue to the next collision hash + * if it exists. + */ continue; } + /* At this step the names are equal, let's call the set handler */ + read_fn_arg.fs = &cf->cf_zms; + read_fn_arg.id = ZMS_DATA_ID_FROM_HASH(name_hash); - rdname[rc] = '\0'; + /* We should return here as there is no need to look for the next + * hash collision. + */ + return settings_call_set_handler(arg->subtree, rc2, settings_zms_read_fn, + &read_fn_arg, arg); + } + + return 0; +} +#endif /* CONFIG_SETTINGS_ZMS_LOAD_SUBTREE_PATH */ - if (strcmp(name, rdname)) { +/* Search for the name_hash that corresponds to name. + * If no hash that corresponds to name is found in the persistent storage, + * returns 0. + */
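On the caller side, the subtree fast path above pairs with the existing settings_load_subtree() API; a small usage sketch (the "ps/ss" prefix is illustrative, borrowed from the functional test further below):

    #include <zephyr/settings/settings.h>

    static int app_restore_ps_subtree(void)
    {
        int rc = settings_subsys_init();

        if (rc) {
            return rc;
        }
        /* With CONFIG_SETTINGS_ZMS_LOAD_SUBTREE_PATH=y, the ZMS backend first
         * tries the exact key's hash, then walks the stored keys for the
         * rest of the subtree as long as the handler keeps returning 0.
         */
        return settings_load_subtree("ps/ss");
    }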
+static uint32_t settings_zms_find_hash_from_name(struct settings_zms *cf, const char *name) +{ + uint32_t name_hash = 0; + int rc = 0; + char r_name[SETTINGS_FULL_NAME_LEN]; + + name_hash = sys_hash32(name, strnlen(name, SETTINGS_FULL_NAME_LEN)) & ZMS_HASH_MASK; + for (int i = 0; i <= cf->hash_collision_num; i++) { + name_hash = ZMS_UPDATE_COLLISION_NUM(name_hash, i); + /* Get the name entry from ZMS */ + rc = zms_read(&cf->cf_zms, ZMS_NAME_ID_FROM_HASH(name_hash), r_name, + sizeof(r_name) - 1); + if (rc <= 0) { + /* Name doesn't exist */ continue; } - - return cf->cache[i].name_id; + /* Found a name, this might not include a trailing \0 */ + r_name[rc] = '\0'; + if (strcmp(name, r_name)) { + /* Names are not equal, let's continue to the next collision hash + * if it exists. + */ + continue; + } + /* At this step names are equal, we found the corresponding hash */ + return name_hash; } - return ZMS_NAMECNT_ID; + return 0; } -#endif /* CONFIG_SETTINGS_ZMS_NAME_CACHE */ -static int settings_zms_load(struct settings_store *cs, const struct settings_load_arg *arg) +static ssize_t settings_zms_load_one(struct settings_store *cs, const char *name, char *buf, + size_t buf_len) { - int ret = 0; struct settings_zms *cf = CONTAINER_OF(cs, struct settings_zms, cf_store); - struct settings_zms_read_fn_arg read_fn_arg; - char name[SETTINGS_MAX_NAME_LEN + SETTINGS_EXTRA_LEN + 1]; - ssize_t rc1, rc2; - uint32_t name_id = ZMS_NAMECNT_ID; + uint32_t name_hash = 0; + ssize_t rc = 0; + uint32_t value_id; + + /* verify that name and buf are not NULL */ + if (!name || !buf) { + return -EINVAL; + } -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - uint32_t cached = 0; + name_hash = settings_zms_find_hash_from_name(cf, name); + if (name_hash) { + /* we found a name_hash corresponding to name */ + value_id = ZMS_DATA_ID_FROM_HASH(name_hash); + rc = zms_read(&cf->cf_zms, value_id, buf, buf_len); - cf->loaded = false; -#endif + return (rc == buf_len) ? zms_get_data_length(&cf->cf_zms, value_id) : rc; + } - name_id = cf->last_name_id + 1; + return 0; +} - while (1) { +/* Gets the next linked list node either from cache (if enabled) or from persistent + * storage if the cache is full or the cache is not enabled. + * It also updates the next cache index and the next linked list node ID. + */ +static int settings_zms_get_next_ll(struct settings_zms *cf, uint32_t *ll_hash_id, + [[maybe_unused]] uint32_t *ll_cache_index) +{ + struct settings_hash_linked_list settings_element; + int ret = 0; - name_id--; - if (name_id == ZMS_NAMECNT_ID) { -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - cf->loaded = true; - cf->cache_total = cached; +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + if (*ll_cache_index < cf->ll_cache_next) { + settings_element = cf->ll_cache[*ll_cache_index]; + *ll_cache_index = *ll_cache_index + 1; + } else if (*ll_hash_id == cf->second_to_last_hash_id) { + /* The last ll node is not stored in the cache as it is already + * in the cf->last_hash_id.
+ */ + settings_element.next_hash = cf->last_hash_id; + } else { #endif - break; + ret = zms_read(&cf->cf_zms, *ll_hash_id, &settings_element, + sizeof(struct settings_hash_linked_list)); + if (ret < 0) { + return ret; + } +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + } +#endif + /* update next ll_hash_id */ + *ll_hash_id = settings_element.next_hash; + + return 0; +} + +static int settings_zms_load(struct settings_store *cs, const struct settings_load_arg *arg) +{ + int ret = 0; + struct settings_zms *cf = CONTAINER_OF(cs, struct settings_zms, cf_store); + struct settings_zms_read_fn_arg read_fn_arg; + char name[SETTINGS_FULL_NAME_LEN]; + ssize_t rc1; + ssize_t rc2; + uint32_t ll_hash_id; + uint32_t prev_ll_hash_id; + uint32_t ll_cache_index = 0; + +#ifdef CONFIG_SETTINGS_ZMS_LOAD_SUBTREE_PATH + /* If arg->subtree is not NULL we must first load settings in that subtree */ + if (arg->subtree != NULL) { + ret = settings_zms_load_subtree(cs, arg); + if (ret) { + return ret; } + } +#endif /* CONFIG_SETTINGS_ZMS_LOAD_SUBTREE_PATH */ + +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + if (cf->ll_has_changed) { + /* reload the linked list in cache */ + ret = settings_zms_get_last_hash_ids(cf); + if (ret < 0) { + return ret; + } + } +#endif + /* If subtree is NULL then we must load all found Settings */ + ll_hash_id = ZMS_LL_HEAD_HASH_ID; + ret = settings_zms_get_next_ll(cf, &ll_hash_id, &ll_cache_index); + if (ret < 0) { + return ret; + } + + while (ll_hash_id) { /* In the ZMS backend, each setting item is stored in two ZMS * entries one for the setting's name and one with the * setting's value. */ - rc1 = zms_read(&cf->cf_zms, name_id, &name, sizeof(name)); + rc1 = zms_read(&cf->cf_zms, ZMS_NAME_ID_FROM_LL_NODE(ll_hash_id), &name, + sizeof(name) - 1); /* get the length of data and verify that it exists */ - rc2 = zms_get_data_length(&cf->cf_zms, name_id + ZMS_NAME_ID_OFFSET); + rc2 = zms_get_data_length(&cf->cf_zms, ZMS_DATA_ID_FROM_LL_NODE(ll_hash_id)); - if ((rc1 <= 0) && (rc2 <= 0)) { - /* Settings largest ID in use is invalid due to - * reset, power failure or partition overflow. - * Decrement it and check the next ID in subsequent - * iteration. - */ - if (name_id == cf->last_name_id) { - cf->last_name_id--; - zms_write(&cf->cf_zms, ZMS_NAMECNT_ID, &cf->last_name_id, - sizeof(uint32_t)); - } - - continue; + /* update the next linked list node in case the called handler + * deletes this settings entry. + */ + prev_ll_hash_id = ll_hash_id; + ret = settings_zms_get_next_ll(cf, &ll_hash_id, &ll_cache_index); + if (ret < 0) { + return ret; } if ((rc1 <= 0) || (rc2 <= 0)) { - /* Settings item is not stored correctly in the ZMS. - * ZMS entry for its name or value is either missing - * or deleted. Clean dirty entries to make space for - * future settings item. + /* In case we are not updating the linked list, this is an empty node. + * Just continue. */ - zms_delete(&cf->cf_zms, name_id); - zms_delete(&cf->cf_zms, name_id + ZMS_NAME_ID_OFFSET); - - if (name_id == cf->last_name_id) { - cf->last_name_id--; - zms_write(&cf->cf_zms, ZMS_NAMECNT_ID, &cf->last_name_id, - sizeof(uint32_t)); +#ifndef CONFIG_SETTINGS_ZMS_NO_LL_DELETE + /* Otherwise, the Settings item is not stored correctly in the ZMS. + * The ZMS entry's name or value is either missing or deleted. + * Clean dirty entries to make space for future settings items.
+ */ + ret = settings_zms_delete(cf, ZMS_NAME_ID_FROM_LL_NODE(prev_ll_hash_id)); + if (ret < 0) { + return ret; } - +#endif /* CONFIG_SETTINGS_ZMS_NO_LL_DELETE */ continue; } /* Found a name, this might not include a trailing \0 */ name[rc1] = '\0'; read_fn_arg.fs = &cf->cf_zms; - read_fn_arg.id = name_id + ZMS_NAME_ID_OFFSET; + read_fn_arg.id = ZMS_DATA_ID_FROM_LL_NODE(prev_ll_hash_id); -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - settings_zms_cache_add(cf, name, name_id); - cached++; -#endif - - ret = settings_call_set_handler(name, rc2, settings_zms_read_fn, &read_fn_arg, - (void *)arg); + ret = settings_call_set_handler(name, rc2, settings_zms_read_fn, &read_fn_arg, arg); if (ret) { - break; + return ret; } } + return ret; } @@ -199,10 +398,15 @@ static int settings_zms_save(struct settings_store *cs, const char *name, const size_t val_len) { struct settings_zms *cf = CONTAINER_OF(cs, struct settings_zms, cf_store); - char rdname[SETTINGS_MAX_NAME_LEN + SETTINGS_EXTRA_LEN + 1]; - uint32_t name_id, write_name_id; - bool delete, write_name; + struct settings_hash_linked_list settings_element; + char rdname[SETTINGS_FULL_NAME_LEN]; + uint32_t name_hash; + uint32_t collision_num = 0; + bool delete; + bool write_name; int rc = 0; + int first_available_hash_index = -1; if (!name) { return -EINVAL; } @@ -211,141 +415,262 @@ /* Find out if we are doing a delete */ delete = ((value == NULL) || (val_len == 0)); -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - bool name_in_cache = false; + name_hash = sys_hash32(name, strnlen(name, SETTINGS_FULL_NAME_LEN)) & ZMS_HASH_MASK; + /* MSB is always 1 */ + name_hash |= BIT(31); - name_id = settings_zms_cache_match(cf, name, rdname, sizeof(rdname)); - if (name_id != ZMS_NAMECNT_ID) { - write_name_id = name_id; - write_name = false; - name_in_cache = true; - goto found; - } -#endif - - /* No entry with "name" is in cache, let's find if it exists in the storage */ - name_id = cf->last_name_id + 1; - write_name_id = cf->last_name_id + 1; + /* Let's find out if there are hash collisions in the storage */ write_name = true; - -#if CONFIG_SETTINGS_ZMS_NAME_CACHE - /* We can skip reading ZMS if we know that the cache wasn't overflowed. */ - if (cf->loaded && !SETTINGS_ZMS_CACHE_OVFL(cf)) { - goto found; - } -#endif - - /* Let's find if we already have an ID within storage */ - while (1) { - name_id--; - if (name_id == ZMS_NAMECNT_ID) { - break; - } - - rc = zms_read(&cf->cf_zms, name_id, &rdname, sizeof(rdname)); - - if (rc < 0) { - /* Error or entry not found */ - if (rc == -ENOENT) { - /* This is a free ID let's keep it */ - write_name_id = name_id; + + for (int i = 0; i <= cf->hash_collision_num; i++) { + rc = zms_read(&cf->cf_zms, name_hash + i * LSB_GET(ZMS_COLLISIONS_MASK), &rdname, + sizeof(rdname)); + if (rc == -ENOENT) { + if (first_available_hash_index < 0) { + first_available_hash_index = i; } continue; + } else if (rc < 0) { + /* error while reading */ + return rc; } - + /* The settings entry exists, let's verify that this is the same + * name + */ rdname[rc] = '\0'; - - if (strcmp(name, rdname)) { - /* ID exists but the name is different, that's not the ID - * we are looking for. + if (!strcmp(name, rdname)) { + /* The hash exists and the names are equal, so we should + * not write the name again.
*/ - continue; - } - - /* At this step we found the ID that corresponds to name */ - if (!delete) { - write_name_id = name_id; write_name = false; + name_hash += i * LSB_GET(ZMS_COLLISIONS_MASK); + goto no_hash_collision; } + /* At this step a hash collision exists and the names are different. + * If we are in the middle of the loop, continue checking + * the other possible hash collisions. + * If we reach the end of the loop, either select the first + * free hash value or increment the hash to the next free value and + * update hash_collision_num. + */ + collision_num++; + } - goto found; + if (collision_num <= cf->hash_collision_num) { + /* At this step a free hash was found */ + name_hash = ZMS_UPDATE_COLLISION_NUM(name_hash, first_available_hash_index); + goto no_hash_collision; + } else if (collision_num > cf->hash_collision_num) { + /* We must create a new hash based on the incremented collision_num */ + if (collision_num > ZMS_MAX_COLLISIONS) { + /* At this step there is no more space to store hash values */ + LOG_ERR("Maximum hash collisions reached"); + return -ENOSPC; + } + cf->hash_collision_num = collision_num; + name_hash = ZMS_UPDATE_COLLISION_NUM(name_hash, collision_num); } -found: +no_hash_collision: if (delete) { - if (name_id == ZMS_NAMECNT_ID) { + if (write_name) { + /* hash doesn't exist, do not write anything here */ return 0; } - rc = zms_delete(&cf->cf_zms, name_id); - if (rc >= 0) { - rc = zms_delete(&cf->cf_zms, name_id + ZMS_NAME_ID_OFFSET); - } + rc = settings_zms_delete(cf, name_hash); + return rc; + } - if (rc < 0) { + /* write the value */ + rc = zms_write(&cf->cf_zms, ZMS_DATA_ID_FROM_NAME(name_hash), value, val_len); + if (rc < 0) { + return rc; + } + + /* write the name if required */ + if (write_name) { + /* First let's update the linked list */ +#ifdef CONFIG_SETTINGS_ZMS_NO_LL_DELETE + /* verify that the ll_node doesn't exist; otherwise do not update it */ + rc = zms_read(&cf->cf_zms, ZMS_LL_NODE_FROM_NAME_ID(name_hash), &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc >= 0) { + goto no_ll_update; + } else if (rc != -ENOENT) { return rc; } - - if (name_id == cf->last_name_id) { - cf->last_name_id--; - rc = zms_write(&cf->cf_zms, ZMS_NAMECNT_ID, &cf->last_name_id, - sizeof(uint32_t)); + /* else the LL node doesn't exist, let's update it */ +#endif /* CONFIG_SETTINGS_ZMS_NO_LL_DELETE */ + /* write linked list structure element */ + settings_element.next_hash = 0; + /* First verify that the last element of the linked list is not broken. + * The Settings subsystem uses IDs that start from ZMS_LL_HEAD_HASH_ID. + */ + if (cf->last_hash_id < ZMS_LL_HEAD_HASH_ID) { + LOG_WRN("Linked list for hashes is broken, trying to recover"); + rc = settings_zms_get_last_hash_ids(cf); if (rc < 0) { - /* Error: can't to store - * the largest name ID in use.
- */ return rc; } } + settings_element.previous_hash = cf->last_hash_id; + rc = zms_write(&cf->cf_zms, ZMS_LL_NODE_FROM_NAME_ID(name_hash), &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + + /* Now update the previous linked list element */ + settings_element.next_hash = ZMS_LL_NODE_FROM_NAME_ID(name_hash); + settings_element.previous_hash = cf->second_to_last_hash_id; + rc = zms_write(&cf->cf_zms, cf->last_hash_id, &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + cf->second_to_last_hash_id = cf->last_hash_id; + cf->last_hash_id = ZMS_LL_NODE_FROM_NAME_ID(name_hash); +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + if (cf->ll_cache_next < CONFIG_SETTINGS_ZMS_LL_CACHE_SIZE) { + cf->ll_cache[cf->ll_cache_next] = settings_element; + cf->ll_cache_next = cf->ll_cache_next + 1; + } +#endif +#ifdef CONFIG_SETTINGS_ZMS_NO_LL_DELETE +no_ll_update: +#endif /* CONFIG_SETTINGS_ZMS_NO_LL_DELETE */ + /* Now let's write the name */ + rc = zms_write(&cf->cf_zms, name_hash, name, strnlen(name, SETTINGS_FULL_NAME_LEN)); + if (rc < 0) { + return rc; + } + } + return 0; +} - return 0; +static ssize_t settings_zms_get_val_len(struct settings_store *cs, const char *name) +{ + struct settings_zms *cf = CONTAINER_OF(cs, struct settings_zms, cf_store); + uint32_t name_hash = 0; + + /* verify that name is not NULL */ + if (!name) { + return -EINVAL; } - /* No free IDs left. */ - if (write_name_id == ZMS_NAMECNT_ID + ZMS_NAME_ID_OFFSET - 1) { - return -ENOMEM; + name_hash = settings_zms_find_hash_from_name(cf, name); + if (name_hash) { + return zms_get_data_length(&cf->cf_zms, ZMS_DATA_ID_FROM_HASH(name_hash)); } - /* update the last_name_id and write to flash if required*/ - if (write_name_id > cf->last_name_id) { - cf->last_name_id = write_name_id; - rc = zms_write(&cf->cf_zms, ZMS_NAMECNT_ID, &cf->last_name_id, sizeof(uint32_t)); + return 0; +} + +/* This function initializes the linked list head if it doesn't exist, or recovers it + * if ll_last_hash_id is different from the head hash ID. + */ +static int settings_zms_init_or_recover_ll(struct settings_zms *cf, uint32_t ll_last_hash_id) +{ + struct settings_hash_linked_list settings_element; + int rc = 0; + + if (ll_last_hash_id == ZMS_LL_HEAD_HASH_ID) { + /* header doesn't exist */ + settings_element.previous_hash = 0; + settings_element.next_hash = 0; + rc = zms_write(&cf->cf_zms, ZMS_LL_HEAD_HASH_ID, &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + cf->last_hash_id = ZMS_LL_HEAD_HASH_ID; + cf->second_to_last_hash_id = 0; + } else { + /* let's recover it by keeping all nodes until the last one */ + settings_element.previous_hash = cf->second_to_last_hash_id; + settings_element.next_hash = 0; + rc = zms_write(&cf->cf_zms, cf->last_hash_id, &settings_element, + sizeof(struct settings_hash_linked_list)); if (rc < 0) { return rc; } } - /* write the value */ - rc = zms_write(&cf->cf_zms, write_name_id + ZMS_NAME_ID_OFFSET, value, val_len); - if (rc < 0) { - return rc; - } + return 0; +} - /* write the name if required */ - if (write_name) { - rc = zms_write(&cf->cf_zms, write_name_id, name, strlen(name)); - if (rc < 0) { +static int settings_zms_get_last_hash_ids(struct settings_zms *cf) +{ + struct settings_hash_linked_list settings_element; + uint32_t ll_last_hash_id = ZMS_LL_HEAD_HASH_ID; + uint32_t previous_ll_hash_id = 0; + int rc = 0; + +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + cf->ll_cache_next = 0; +#endif +
cf->hash_collision_num = 0; + do { + rc = zms_read(&cf->cf_zms, ll_last_hash_id, &settings_element, + sizeof(settings_element)); + if (rc == -ENOENT) { + /* header doesn't exist or linked list broken; reinitialize the header + * if it doesn't exist and recover it if it is broken + */ + return settings_zms_init_or_recover_ll(cf, ll_last_hash_id); + } else if (rc < 0) { return rc; } + if (settings_element.previous_hash != previous_ll_hash_id) { + /* This is a special case that can happen when a power down occurred + * while deleting a linked list node. + * If the power down occurred after updating the previous linked list node, + * then we would end up with a state where the previous_hash of the linked + * list is broken. Let's recover from this. + */ + rc = zms_delete(&cf->cf_zms, settings_element.previous_hash); + if (rc < 0) { + return rc; + } + /* Now recover the linked list */ + settings_element.previous_hash = previous_ll_hash_id; + rc = zms_write(&cf->cf_zms, ll_last_hash_id, &settings_element, + sizeof(struct settings_hash_linked_list)); + if (rc < 0) { + return rc; + } + } + previous_ll_hash_id = ll_last_hash_id; + +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + if ((cf->ll_cache_next < CONFIG_SETTINGS_ZMS_LL_CACHE_SIZE) && + (settings_element.next_hash)) { + cf->ll_cache[cf->ll_cache_next] = settings_element; + cf->ll_cache_next = cf->ll_cache_next + 1; } - } #endif + /* update the hash collision number if necessary */ + if (ZMS_COLLISION_NUM(ll_last_hash_id) > cf->hash_collision_num) { + cf->hash_collision_num = ZMS_COLLISION_NUM(ll_last_hash_id); + } + cf->last_hash_id = ll_last_hash_id; + cf->second_to_last_hash_id = settings_element.previous_hash; + ll_last_hash_id = settings_element.next_hash; + } while (settings_element.next_hash); +#ifdef CONFIG_SETTINGS_ZMS_LL_CACHE + cf->ll_has_changed = false; +#endif return 0; } /* Initialize the zms backend.
*/ -int settings_zms_backend_init(struct settings_zms *cf) +static int settings_zms_backend_init(struct settings_zms *cf) { int rc; - uint32_t last_name_id; cf->cf_zms.flash_device = cf->flash_dev; if (cf->cf_zms.flash_device == NULL) { @@ -357,15 +682,12 @@ int settings_zms_backend_init(struct settings_zms *cf) return rc; } - rc = zms_read(&cf->cf_zms, ZMS_NAMECNT_ID, &last_name_id, sizeof(last_name_id)); - if (rc < 0) { - cf->last_name_id = ZMS_NAMECNT_ID; - } else { - cf->last_name_id = last_name_id; - } + cf->hash_collision_num = 0; - LOG_DBG("Initialized"); - return 0; + rc = settings_zms_get_last_hash_ids(cf); + + LOG_DBG("ZMS backend initialized"); + return rc; } int settings_backend_init(void) @@ -373,7 +695,7 @@ int settings_backend_init(void) static struct settings_zms default_settings_zms; int rc; uint32_t cnt = 0; - size_t zms_sector_size, zms_size = 0; + size_t zms_sector_size; const struct flash_area *fa; struct flash_sector hw_flash_sector; uint32_t sector_cnt = 1; @@ -394,6 +716,9 @@ int settings_backend_init(void) return -EDOM; } +#if defined(CONFIG_SETTINGS_ZMS_CUSTOM_SECTOR_COUNT) + size_t zms_size = 0; + while (cnt < CONFIG_SETTINGS_ZMS_SECTOR_COUNT) { zms_size += zms_sector_size; if (zms_size > fa->fa_size) { @@ -401,8 +726,10 @@ int settings_backend_init(void) } cnt++; } - - /* define the zms file system using the page_info */ +#else + cnt = fa->fa_size / zms_sector_size; +#endif + /* initialize the zms file system structure using the page_info */ default_settings_zms.cf_zms.sector_size = zms_sector_size; default_settings_zms.cf_zms.sector_count = cnt; default_settings_zms.cf_zms.offset = fa->fa_off; diff --git a/tests/subsys/fs/zms/Kconfig b/tests/subsys/fs/zms/Kconfig new file mode 100644 index 00000000000..9c8f49ff59b --- /dev/null +++ b/tests/subsys/fs/zms/Kconfig @@ -0,0 +1,16 @@ +# Copyright 2025 NXP +# SPDX-License-Identifier: Apache-2.0 + +mainmenu "ZMS test configuration" + +config TEST_ZMS_SIMULATOR + bool "Enable ZMS tests designed to be run using a flash-simulator" + default y if BOARD_QEMU_X86 || ARCH_POSIX + help + If y, enables ZMS tests designed to be run using a flash-simulator, + which provide functionality for flash property customization + and emulating errors in flash operation in parallel to + the regular flash API. + The tests must be run only on qemu_x86 or native_sim target. + +source "Kconfig.zephyr" diff --git a/tests/subsys/fs/zms/src/main.c b/tests/subsys/fs/zms/src/main.c index 500ad758d68..6d170ce1f9f 100644 --- a/tests/subsys/fs/zms/src/main.c +++ b/tests/subsys/fs/zms/src/main.c @@ -4,17 +4,6 @@ * SPDX-License-Identifier: Apache-2.0 */ -/* - * This test is designed to be run using flash-simulator which provide - * functionality for flash property customization and emulating errors in - * flash operation in parallel to regular flash API. - * Test should be run on qemu_x86 or native_sim target. - */ - -#if !defined(CONFIG_BOARD_QEMU_X86) && !defined(CONFIG_ARCH_POSIX) -#error "Run only on qemu_x86 or a posix architecture based target (for ex. 
native_sim)" -#endif - #include #include #include @@ -37,8 +26,10 @@ static const struct device *const flash_dev = TEST_ZMS_AREA_DEV; struct zms_fixture { struct zms_fs fs; +#ifdef CONFIG_TEST_ZMS_SIMULATOR struct stats_hdr *sim_stats; struct stats_hdr *sim_thresholds; +#endif /* CONFIG_TEST_ZMS_SIMULATOR */ }; static void *setup(void) @@ -66,22 +57,26 @@ static void *setup(void) static void before(void *data) { +#ifdef CONFIG_TEST_ZMS_SIMULATOR struct zms_fixture *fixture = (struct zms_fixture *)data; fixture->sim_stats = stats_group_find("flash_sim_stats"); fixture->sim_thresholds = stats_group_find("flash_sim_thresholds"); +#endif /* CONFIG_TEST_ZMS_SIMULATOR */ } static void after(void *data) { struct zms_fixture *fixture = (struct zms_fixture *)data; +#ifdef CONFIG_TEST_ZMS_SIMULATOR if (fixture->sim_stats) { stats_reset(fixture->sim_stats); } if (fixture->sim_thresholds) { stats_reset(fixture->sim_thresholds); } +#endif /* CONFIG_TEST_ZMS_SIMULATOR */ /* Clear ZMS */ if (fixture->fs.ready) { @@ -137,6 +132,7 @@ ZTEST_F(zms, test_zms_write) execute_long_pattern_write(TEST_DATA_ID, &fixture->fs); } +#ifdef CONFIG_TEST_ZMS_SIMULATOR static int flash_sim_write_calls_find(struct stats_hdr *hdr, void *arg, const char *name, uint16_t off) { @@ -453,6 +449,7 @@ ZTEST_F(zms, test_zms_corrupted_sector_close_operation) /* Ensure that the ZMS is able to store new content. */ execute_long_pattern_write(max_id, &fixture->fs); } +#endif /* CONFIG_TEST_ZMS_SIMULATOR */ /** * @brief Test case when storage become full, so only deletion is possible. @@ -562,6 +559,7 @@ ZTEST_F(zms, test_delete) #endif } +#ifdef CONFIG_TEST_ZMS_SIMULATOR /* * Test that garbage-collection can recover all ate's even when the last ate, * ie close_ate, is corrupt. In this test the close_ate is set to point to the @@ -639,6 +637,7 @@ ZTEST_F(zms, test_zms_gc_corrupt_close_ate) zassert_true(len == sizeof(data), "zms_read should have read %d bytes", sizeof(data)); zassert_true(data == 0xaa55aa55, "unexpected value %d", data); } +#endif /* CONFIG_TEST_ZMS_SIMULATOR */ /* * Test that garbage-collection correctly handles corrupt ate's. 
diff --git a/tests/subsys/settings/functional/src/settings_basic_test.c b/tests/subsys/settings/functional/src/settings_basic_test.c index f77c87db71d..03ff3eba8a8 100644 --- a/tests/subsys/settings/functional/src/settings_basic_test.c +++ b/tests/subsys/settings/functional/src/settings_basic_test.c @@ -16,7 +16,7 @@ #include LOG_MODULE_REGISTER(settings_basic_test); -#if defined(CONFIG_SETTINGS_FCB) || defined(CONFIG_SETTINGS_NVS) +#if defined(CONFIG_SETTINGS_FCB) || defined(CONFIG_SETTINGS_NVS) || defined(CONFIG_SETTINGS_ZMS) #include #if DT_HAS_CHOSEN(zephyr_settings_partition) #define TEST_FLASH_AREA_ID DT_FIXED_PARTITION_ID(DT_CHOSEN(zephyr_settings_partition)) @@ -240,13 +240,22 @@ ZTEST(settings_functional, test_register_and_loading) { int rc, err; uint8_t val = 0; + ssize_t val_len = 0; rc = settings_subsys_init(); zassert_true(rc == 0, "subsys init failed"); + /* Check that the key that corresponds to val2 does not exist in storage */ + val_len = settings_get_val_len("ps/ss/ss/val2"); + zassert_true((val_len == 0), "Failure: key should not exist"); + settings_save_one("ps/ss/ss/val2", &val, sizeof(uint8_t)); + /* Check that the key that corresponds to val2 exists in storage */ + val_len = settings_get_val_len("ps/ss/ss/val2"); + zassert_true((val_len == 1), "Failure: key should exist"); + memset(&data, 0, sizeof(struct stored_data)); rc = settings_register(&val1_settings); @@ -279,7 +288,16 @@ ZTEST(settings_functional, test_register_and_loading) err = (data.en1) && (data.en2) && (!data.en3); zassert_true(err, "wrong data enable found"); + /* Check that the key that corresponds to val3 does not exist in storage */ + val_len = settings_get_val_len("ps/ss/val3"); + zassert_true((val_len == 0), "Failure: key should not exist"); + settings_save_one("ps/ss/val3", &val, sizeof(uint8_t)); + + /* Check that the key that corresponds to val3 exists in storage */ + val_len = settings_get_val_len("ps/ss/val3"); + zassert_true((val_len == 1), "Failure: key should exist"); + memset(&data, 0, sizeof(struct stored_data)); /* when we load settings now data.val2 and data.val1 should receive a * value @@ -310,7 +328,16 @@ ZTEST(settings_functional, test_register_and_loading) err = (data.en1) && (data.en2) && (data.en3); zassert_true(err, "wrong data enable found"); + /* Check that the key that corresponds to val1 does not exist in storage */ + val_len = settings_get_val_len("ps/val1"); + zassert_true((val_len == 0), "Failure: key should not exist"); + settings_save_one("ps/val1", &val, sizeof(uint8_t)); + + /* Check that the key that corresponds to val1 exists in storage */ + val_len = settings_get_val_len("ps/val1"); + zassert_true((val_len == 1), "Failure: key should exist"); + memset(&data, 0, sizeof(struct stored_data)); /* when we load settings all data should receive a value loaded */ rc = settings_load(); @@ -345,6 +372,17 @@ ZTEST(settings_functional, test_register_and_loading) err = (!data.en1) && (data.en2) && (!data.en3); zassert_true(err, "wrong data enable found"); + memset(&data, 0, sizeof(struct stored_data)); + /* test load_one: path "ps/ss/ss/val2".
Only data.val2 should + * receive a value + */ + val = 2; + settings_save_one("ps/ss/ss/val2", &val, sizeof(uint8_t)); + rc = settings_load_one("ps/ss/ss/val2", &data.val2, sizeof(uint8_t)); + zassert_true(rc >= 0, "settings_load_one failed"); + err = (data.val1 == 0) && (data.val2 == 2) && (data.val3 == 0); + zassert_true(err, "wrong data value found %u != 2", data.val2); + /* clean up by deregistering settings_handler */ rc = settings_deregister(&val1_settings); zassert_true(rc, "deregistering val1_settings failed"); diff --git a/tests/subsys/settings/functional/zms/CMakeLists.txt b/tests/subsys/settings/functional/zms/CMakeLists.txt new file mode 100644 index 00000000000..e29cd6c417e --- /dev/null +++ b/tests/subsys/settings/functional/zms/CMakeLists.txt @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: Apache-2.0 + +cmake_minimum_required(VERSION 3.20.0) +find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE}) +project(functional_zms) + +# The code is in the library common to several tests. +target_sources(app PRIVATE settings_test_zms.c) + +add_subdirectory(../src func_test_bindir) diff --git a/tests/subsys/settings/functional/zms/prj.conf b/tests/subsys/settings/functional/zms/prj.conf new file mode 100644 index 00000000000..3de961c7c72 --- /dev/null +++ b/tests/subsys/settings/functional/zms/prj.conf @@ -0,0 +1,8 @@ +CONFIG_ZTEST=y +CONFIG_FLASH=y +CONFIG_FLASH_MAP=y +CONFIG_ZMS=y + +CONFIG_SETTINGS=y +CONFIG_SETTINGS_RUNTIME=y +CONFIG_SETTINGS_ZMS=y diff --git a/tests/subsys/settings/functional/zms/settings_test_zms.c b/tests/subsys/settings/functional/zms/settings_test_zms.c new file mode 100644 index 00000000000..2a2ef9a5d59 --- /dev/null +++ b/tests/subsys/settings/functional/zms/settings_test_zms.c @@ -0,0 +1,25 @@ +/* Copyright (c) 2024 BayLibre SAS + * + * SPDX-License-Identifier: Apache-2.0 + */ + +#include +#include +#include +#include +#include + +ZTEST(settings_functional, test_setting_storage_get) +{ + int rc; + void *storage; + uint32_t data = 0xdeadbeef; + + rc = settings_storage_get(&storage); + zassert_equal(0, rc, "Can't fetch storage reference (err=%d)", rc); + zassert_not_null(storage, "Null reference."); + + rc = zms_write((struct zms_fs *)storage, 512, &data, sizeof(data)); + zassert_true(rc >= 0, "Can't write ZMS entry (err=%d).", rc); +} +ZTEST_SUITE(settings_functional, NULL, NULL, NULL, NULL, NULL); diff --git a/tests/subsys/settings/functional/zms/testcase.yaml b/tests/subsys/settings/functional/zms/testcase.yaml new file mode 100644 index 00000000000..8b7fcdf576f --- /dev/null +++ b/tests/subsys/settings/functional/zms/testcase.yaml @@ -0,0 +1,9 @@ +tests: + settings.functional.zms: + platform_allow: + - qemu_x86 + - native_sim + - native_sim/native/64 + tags: + - settings + - zms diff --git a/tests/subsys/settings/performance/CMakeLists.txt b/tests/subsys/settings/performance/CMakeLists.txt new file mode 100644 index 00000000000..b3e4780d3aa --- /dev/null +++ b/tests/subsys/settings/performance/CMakeLists.txt @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: Apache-2.0 + +cmake_minimum_required(VERSION 3.20.0) +find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE}) +project(test_settings_perf) + +zephyr_include_directories( + ${ZEPHYR_BASE}/subsys/settings/include + ${ZEPHYR_BASE}/subsys/settings/src + ) + +target_sources(app PRIVATE + settings_test_perf.c) diff --git a/tests/subsys/settings/performance/prj.conf b/tests/subsys/settings/performance/prj.conf new file mode 100644 index 00000000000..27c208baa0b --- /dev/null +++ 
b/tests/subsys/settings/performance/prj.conf @@ -0,0 +1,7 @@ +CONFIG_ZTEST=y +CONFIG_FLASH=y +CONFIG_FLASH_MAP=y +CONFIG_ZMS=y + +CONFIG_SETTINGS=y +CONFIG_SETTINGS_RUNTIME=y diff --git a/tests/subsys/settings/performance/settings_test_perf.c b/tests/subsys/settings/performance/settings_test_perf.c new file mode 100644 index 00000000000..3efe0fcaa52 --- /dev/null +++ b/tests/subsys/settings/performance/settings_test_perf.c @@ -0,0 +1,134 @@ +/* + * Copyright (c) 2024 Nordic Semiconductor ASA + * + * SPDX-License-Identifier: Apache-2.0 + */ +#include + +#include "settings_priv.h" +#include +#include +#include + +/* This is a test suite for performance testing of the settings subsystem, done by + * repeatedly writing many small setting values. Ideally, this should take as little + * time as possible for the best possible UX. + */ + +static struct k_work_q settings_work_q; +static K_THREAD_STACK_DEFINE(settings_work_stack, 2024); +static struct k_work_delayable pending_store; + +#define TEST_SETTINGS_COUNT (128) +#define TEST_STORE_ITR (5) +#define TEST_TIMEOUT_SEC (60) +#define TEST_SETTINGS_WORKQ_PRIO (1) + +static void bt_scan_cb([[maybe_unused]] const bt_addr_le_t *addr, [[maybe_unused]] int8_t rssi, + [[maybe_unused]] uint8_t adv_type, struct net_buf_simple *buf) +{ + printk("len %u\n", buf->len); +} + +struct test_setting { + uint32_t val; +}; +struct test_setting test_settings[TEST_SETTINGS_COUNT]; + +K_SEM_DEFINE(waitfor_work, 0, 1); + +static void store_pending(struct k_work *work) +{ + int err; + char path[20]; + struct test_stats { + uint32_t total_calculated; + uint32_t total_measured; + uint32_t single_entry_max; + uint32_t single_entry_min; + }; + struct test_stats stats = {0, 0, 0, UINT32_MAX}; + + int64_t ts1 = k_uptime_get(); + + /* benchmark storage performance */ + for (int j = 0; j < TEST_STORE_ITR; j++) { + for (int i = 0; i < TEST_SETTINGS_COUNT; i++) { + test_settings[i].val = TEST_SETTINGS_COUNT * j + i; + + int64_t ts2 = k_uptime_get(); + + snprintk(path, sizeof(path), "ab/cdef/ghi/%04x", i); + err = settings_save_one(path, &test_settings[i], + sizeof(struct test_setting)); + zassert_equal(err, 0, "settings_save_one failed %d", err); + + int64_t delta2 = k_uptime_delta(&ts2); + + if (stats.single_entry_max < delta2) { + stats.single_entry_max = delta2; + } + if (stats.single_entry_min > delta2) { + stats.single_entry_min = delta2; + } + stats.total_calculated += delta2; + } + } + + int64_t delta1 = k_uptime_delta(&ts1); + + stats.total_measured = delta1; + + printk("*** storing of %u entries completed ***\n", ARRAY_SIZE(test_settings)); + printk("total calculated: %u, total measured: %u\n", stats.total_calculated, + stats.total_measured); + printk("entry max: %u, entry min: %u\n", stats.single_entry_max, stats.single_entry_min); + + k_sem_give(&waitfor_work); +} + +ZTEST_SUITE(settings_perf, NULL, NULL, NULL, NULL, NULL); + +ZTEST(settings_perf, test_performance) +{ + int err; + + if (IS_ENABLED(CONFIG_NVS)) { + printk("Testing with NVS\n"); + } else if (IS_ENABLED(CONFIG_ZMS)) { + printk("Testing with ZMS\n"); + } + + k_work_queue_start(&settings_work_q, settings_work_stack, + K_THREAD_STACK_SIZEOF(settings_work_stack), + K_PRIO_COOP(TEST_SETTINGS_WORKQ_PRIO), NULL); + k_thread_name_set(&settings_work_q.thread, "Settings workq"); + k_work_init_delayable(&pending_store, store_pending); + + if (IS_ENABLED(CONFIG_BT)) { + /* enable one of the major subsystems, and start scanning.
*/ + err = bt_enable(NULL); + zassert_equal(err, 0, "Bluetooth init failed (err %d)\n", err); + + err = bt_le_scan_start(BT_LE_SCAN_ACTIVE, bt_scan_cb); + zassert_equal(err, 0, "Scanning failed to start (err %d)\n", err); + } + + err = settings_subsys_init(); + zassert_equal(err, 0, "settings_backend_init failed %d", err); + + /* fill with values */ + for (int i = 0; i < TEST_SETTINGS_COUNT; i++) { + test_settings[i].val = i; + } + + k_work_reschedule_for_queue(&settings_work_q, &pending_store, K_NO_WAIT); + + err = k_sem_take(&waitfor_work, K_SECONDS(TEST_TIMEOUT_SEC)); + zassert_equal(err, 0, "k_sem_take failed %d", err); + + if (IS_ENABLED(CONFIG_BT)) { + err = bt_le_scan_stop(); + zassert_equal(err, 0, "Scanning failed to stop (err %d)\n", err); + } +} diff --git a/tests/subsys/settings/performance/testcase.yaml b/tests/subsys/settings/performance/testcase.yaml new file mode 100644 index 00000000000..7ae65c3c556 --- /dev/null +++ b/tests/subsys/settings/performance/testcase.yaml @@ -0,0 +1,66 @@ +tests: + settings.performance.zms: + extra_configs: + - CONFIG_SETTINGS_ZMS=y + - CONFIG_ZMS_LOOKUP_CACHE=y + - CONFIG_ZMS_LOOKUP_CACHE_SIZE=512 + platform_allow: + - nrf52840dk/nrf52840 + - nrf54l15dk/nrf54l15/cpuapp + min_ram: 32 + tags: + - settings + - zms + + settings.performance.nvs: + extra_configs: + - CONFIG_ZMS=n + - CONFIG_NVS=y + - CONFIG_NVS_LOOKUP_CACHE=y + - CONFIG_NVS_LOOKUP_CACHE_SIZE=512 + - CONFIG_SETTINGS_NVS_NAME_CACHE=y + - CONFIG_SETTINGS_NVS_NAME_CACHE_SIZE=512 + platform_allow: + - nrf52840dk/nrf52840 + - nrf54l15dk/nrf54l15/cpuapp + min_ram: 32 + tags: + - settings + - nvs + + settings.performance.zms_bt: + extra_configs: + - CONFIG_BT=y + - CONFIG_BT_OBSERVER=y + - CONFIG_BT_PERIPHERAL=y + - CONFIG_SETTINGS_ZMS=y + - CONFIG_ZMS_LOOKUP_CACHE=y + - CONFIG_ZMS_LOOKUP_CACHE_SIZE=512 + platform_allow: nrf52840dk/nrf52840 + platform_exclude: + - native_sim + - qemu_x86 + min_ram: 32 + tags: + - settings + - zms + + settings.performance.nvs_bt: + extra_configs: + - CONFIG_BT=y + - CONFIG_BT_OBSERVER=y + - CONFIG_BT_PERIPHERAL=y + - CONFIG_ZMS=n + - CONFIG_NVS=y + - CONFIG_NVS_LOOKUP_CACHE=y + - CONFIG_NVS_LOOKUP_CACHE_SIZE=512 + - CONFIG_SETTINGS_NVS_NAME_CACHE=y + - CONFIG_SETTINGS_NVS_NAME_CACHE_SIZE=512 + platform_allow: nrf52840dk/nrf52840 + platform_exclude: + - native_sim + - qemu_x86 + min_ram: 32 + tags: + - settings + - nvs
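Assuming a standard Zephyr workspace, the new suites should be runnable through the usual twister flow, for example: west twister -T tests/subsys/settings/performance -p nrf52840dk/nrf52840 (invocation illustrative; allowed platforms are listed in the testcase.yaml entries above).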