Commit 613cba4

syscalls/readahead02: Wait for the readahead()

The test requested readahead on a file and then immediately tried to access the data to measure whether readahead() saved I/O or not. The problem is that we need to wait a bit for the readahead to happen, especially on hardware with slower I/O speeds. So the test now waits a bit for the readahead to start, then loops for a while with short usleep() calls, until the retry limit is reached or the page cache stops growing.

Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
Acked-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Li Wang <liwang@redhat.com>
1 parent 3bf7abd commit 613cba4

testcases/kernel/syscalls/readahead/readahead02.c

Lines changed: 41 additions & 0 deletions
```diff
@@ -39,6 +39,8 @@ static char testfile[PATH_MAX] = "testfile";
 #define MEMINFO_FNAME "/proc/meminfo"
 #define PROC_IO_FNAME "/proc/self/io"
 #define DEFAULT_FILESIZE (64 * 1024 * 1024)
+#define INITIAL_SHORT_SLEEP_US 10000
+#define SHORT_SLEEP_US 5000
 
 static size_t testfile_size = DEFAULT_FILESIZE;
 static char *opt_fsizestr;
@@ -173,8 +175,47 @@ static int read_testfile(struct tcase *tc, int do_readahead,
 
 		i++;
 		offset += readahead_length;
+		/* Wait a bit so that the readahead() has a chance to start. */
+		usleep(INITIAL_SHORT_SLEEP_US);
+		/*
+		 * We assume that the worst case I/O speed is around
+		 * 5MB/s, which is roughly 5 bytes per 1 us, which gives
+		 * us an upper bound for retries of
+		 * readahead_length / (5 * SHORT_SLEEP_US).
+		 *
+		 * We also monitor how the cache size increases before and
+		 * after the sleep. With the same assumption about the
+		 * speed we are supposed to read at least 5 *
+		 * SHORT_SLEEP_US bytes during that time. That amount
+		 * is generally quite close to a page size, so we just
+		 * assume that we should continue as long as the cache
+		 * increases.
+		 *
+		 * Of course all of this is imprecise on a multitasking
+		 * OS, however even on a system where there are several
+		 * processes fighting for I/O this loop will wait as
+		 * long as the cache is increasing, which gives us a high
+		 * chance of waiting for the readahead to happen.
+		 */
+		unsigned long cached_prev, cached_cur = get_cached_size();
+		int retries = readahead_length / (5 * SHORT_SLEEP_US);
+
+		tst_res(TDEBUG, "Readahead cached %lu", cached_cur);
+
+		do {
+			usleep(SHORT_SLEEP_US);
+
+			cached_prev = cached_cur;
+			cached_cur = get_cached_size();
+
+			if (cached_cur <= cached_prev)
+				break;
+		} while (retries-- > 0);
+
 	} while ((size_t)offset < fsize);
+
 	tst_res(TINFO, "readahead calls made: %zu", i);
+
 	*cached = get_cached_size();
 
 	/* offset of file shouldn't change after readahead */
```
