
v2.7: long delay in open/close for SPI NOR #965


@matthieu-c-tagheuer

Hi,

I started doing some tests on littlefs. I have a simple test that does:

mkdir("flash")
for (i = 0; i < 100; i++) {
    remove("flash/test%d", i)
    open("flash/test%d", i)
    write(4k)
    close()
}
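
In littlefs calls the loop is roughly the following (a simplified sketch only; lfs is assumed to be already mounted with the config below, error checking is stripped, and the 4 KiB payload is arbitrary):

#include <stdio.h>
#include <string.h>
#include "lfs.h"

// simplified sketch of the test loop
static void run_test(lfs_t *lfs) {
    static uint8_t buf[4096];
    memset(buf, 0xa5, sizeof(buf)); // arbitrary 4 KiB payload

    lfs_mkdir(lfs, "flash"); // returns LFS_ERR_EXIST on reruns, ignored

    for (int i = 0; i < 100; i++) {
        char path[32];
        snprintf(path, sizeof(path), "flash/test%d", i);

        lfs_remove(lfs, path); // LFS_ERR_NOENT on the first run, ignored

        lfs_file_t file;
        lfs_file_open(lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT);
        lfs_file_write(lfs, &file, buf, sizeof(buf));
        lfs_file_close(lfs, &file);
    }
}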

For some open/close calls, I see delays of several seconds.
I enabled tracing in the read/write/erase callbacks (see the attached trace: log4).
log4.txt

In the case of a long delay, there are lots of 16-byte reads: around 26780 reads (see long_close.txt)!
long_close.txt

For example:

Flash Write size 16 loc 9875456
Flash1 Read size 16 loc 9875456
Flash1 Read size 16 loc 9502720
Flash1 Read size 16 loc 9502736
Flash1 Read size 16 loc 9502752
Flash1 Read size 16 loc 9502768
Flash1 Read size 16 loc 9502784
Flash1 Read size 16 loc 9502800
Flash1 Read size 16 loc 9502816
Flash1 Read size 16 loc 9502832
Flash1 Read size 16 loc 9502848
Flash1 Read size 16 loc 9502864
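
For reference, these lines come from a print I added in the block device callbacks; the read callback is roughly like this (simplified; spi_flash_read_raw stands in for the real SPI transfer), with loc = block * block_size + off, so loc 9502720 would be block 2320, offset 0:

#include <stdio.h>
#include "lfs.h"

// placeholder for the real SPI transfer, one transaction per call
extern int spi_flash_read_raw(uint32_t addr, void *buf, uint32_t len);

// simplified version of the traced read callback
static int lfs_spi_flash_read(const struct lfs_config *c, lfs_block_t block,
                              lfs_off_t off, void *buffer, lfs_size_t size) {
    uint32_t loc = block * c->block_size + off; // absolute flash offset
    printf("Flash1 Read size %lu loc %lu\n",
           (unsigned long)size, (unsigned long)loc);
    return spi_flash_read_raw(loc, buffer, size);
}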

Where do these reads come from?
Is it because there are too many files in the directory?

Config:

const struct lfs_config lfs_cfg = {
    // block device operations
    .read  = lfs_spi_flash_read,
    .prog  = lfs_spi_flash_prog,
    .erase = lfs_spi_flash_erase,
    .sync  = lfs_spi_flash_sync,

    // block device configuration
    .read_size = 16,
    .prog_size = 16,
    .block_size = 4096,
    .block_count = (8*1024),
    .cache_size = 1024,
    .lookahead_size = 1024,
    .block_cycles = 500,

    .lock = lfs_lock,
    .unlock = lfs_unlock,
};

This is a SPI NOR flash. The 26780 reads are not really fast. Would it not be possible to merge contiguous reads into one SPI transaction?
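
One driver-side idea (only a sketch, not tested; spi_flash_read_raw is again a placeholder for the real transfer) would be a small read-ahead window in the read callback, so runs of contiguous 16-byte reads like the ones above become a single SPI transaction:

#include <stdint.h>
#include <string.h>
#include "lfs.h"

// placeholder for the real SPI transfer
extern int spi_flash_read_raw(uint32_t addr, void *buf, uint32_t len);

#define RA_SIZE 1024 // read-ahead window, fetched in one SPI transaction

static uint8_t ra_buf[RA_SIZE];
static uint32_t ra_base;    // absolute flash offset cached in ra_buf
static uint32_t ra_len = 0; // 0 means the window is empty/invalid

static int lfs_spi_flash_read(const struct lfs_config *c, lfs_block_t block,
                              lfs_off_t off, void *buffer, lfs_size_t size) {
    uint32_t loc = block * c->block_size + off;

    // large requests already go out as a single transaction
    if (size >= RA_SIZE) {
        return spi_flash_read_raw(loc, buffer, size);
    }

    // refill the window when the request is not fully inside it
    if (ra_len == 0 || loc < ra_base || loc + size > ra_base + ra_len) {
        // littlefs never reads across a block boundary, so reading up to the
        // end of the current block is safe
        uint32_t remaining = c->block_size - off;
        ra_base = loc;
        ra_len = remaining < RA_SIZE ? remaining : RA_SIZE;
        int err = spi_flash_read_raw(ra_base, ra_buf, ra_len);
        if (err) {
            ra_len = 0;
            return err;
        }
    }

    memcpy(buffer, &ra_buf[loc - ra_base], size);
    return 0;
}

// the prog and erase callbacks would have to set ra_len = 0 so the window is
// never stale after a write or erase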
