
Commit 9b8e809

Merge patch series "Random netfs folio fixes"
Matthew Wilcox (Oracle) <[email protected]> says:

  A few minor fixes; nothing earth-shattering.

Matthew Wilcox (Oracle) (3):
  netfs: Remove call to folio_index()
  netfs: Fix a few minor bugs in netfs_page_mkwrite()
  netfs: Remove unnecessary references to pages

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>
2 parents 9852d85 + e995e8b commit 9b8e809

File tree

13 files changed

+435
-50
lines changed

Lines changed: 212 additions & 0 deletions
@@ -0,0 +1,212 @@
.. SPDX-License-Identifier: GPL-2.0+

===========
Folio Queue
===========

:Author: David Howells <[email protected]>

.. Contents:

 * Overview
 * Initialisation
 * Adding and removing folios
 * Querying information about a folio
 * Querying information about a folio_queue
 * Folio queue iteration
 * Folio marks
 * Lockless simultaneous production/consumption issues

Overview
========

The folio_queue struct forms a single segment in a segmented list of folios
that can be used to form an I/O buffer.  As such, the list can be iterated over
using the ITER_FOLIOQ iov_iter type.

The publicly accessible members of the structure are::

	struct folio_queue {
		struct folio_queue *next;
		struct folio_queue *prev;
		...
	};

A pair of pointers are provided, ``next`` and ``prev``, that point to the
segments on either side of the segment being accessed.  Whilst this is a
doubly-linked list, it is intentionally not a circular list; the outward
sibling pointers in terminal segments should be NULL.

Each segment in the list also stores:

 * an ordered sequence of folio pointers,
 * the size of each folio and
 * three 1-bit marks per folio,

but these should not be accessed directly as the underlying data structure may
change; rather, the access functions outlined below should be used.

The facility can be made accessible by::

	#include <linux/folio_queue.h>

and to use the iterator::

	#include <linux/uio.h>

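The shape of the structure can be modelled in ordinary userspace C. The
following is a much-simplified sketch of the segmented-list idea only:
fixed-capacity segments chained by ``next``/``prev``, with an append that
reports the slot used. The names and the eight-slot capacity are invented for
illustration; this is not the kernel structure or API.

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_SLOTS 8	/* invented capacity; the real capacity may vary */

/* Toy analogue of one folio_queue segment. */
struct model_seg {
	struct model_seg *next;	/* NULL at the terminal segments */
	struct model_seg *prev;
	void *slot[MODEL_SLOTS];
	unsigned int count;	/* number of slots initialised so far */
};

static void model_seg_init(struct model_seg *seg)
{
	seg->next = seg->prev = NULL;
	seg->count = 0;
	for (size_t i = 0; i < MODEL_SLOTS; i++)
		seg->slot[i] = NULL;
}

/* Append into the next unused slot and return the slot number used.
 * Like the real append, no capacity check is made here. */
static unsigned int model_seg_append(struct model_seg *seg, void *item)
{
	unsigned int n = seg->count++;

	seg->slot[n] = item;
	return n;
}

static int model_seg_full(const struct model_seg *seg)
{
	return seg->count >= MODEL_SLOTS;
}
```

A producer grows the list by allocating a fresh segment and linking it at the
head; a consumer frees spent segments from the tail.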
Initialisation
==============

A segment should be initialised by calling::

	void folioq_init(struct folio_queue *folioq);

with a pointer to the segment to be initialised.  Note that this will not
necessarily initialise all the folio pointers, so care must be taken to check
the number of folios added.

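As a sketch, a caller in kernel context might bring a segment into existence
like this (the helper name and the allocation policy are the caller's choice
and hypothetical; only folioq_init() is part of the API):

```c
/* Hypothetical helper: allocate and initialise one segment. */
static struct folio_queue *my_folioq_alloc(gfp_t gfp)
{
	struct folio_queue *folioq = kmalloc(sizeof(*folioq), gfp);

	if (folioq)
		folioq_init(folioq);
	return folioq;
}
```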
Adding and removing folios
==========================

Folios can be set in the next unused slot in a segment struct by calling one
of::

	unsigned int folioq_append(struct folio_queue *folioq,
				   struct folio *folio);

	unsigned int folioq_append_mark(struct folio_queue *folioq,
					struct folio *folio);

Both functions update the stored folio count, store the folio and note its
size.  The second function also sets the first mark for the folio added.  Both
functions return the number of the slot used.  [!] Note that no attempt is
made to check that the capacity wasn't overrun and the list will not be
extended automatically.

A folio can be excised by calling::

	void folioq_clear(struct folio_queue *folioq, unsigned int slot);

This clears the slot in the array and also clears all the marks for that folio,
but doesn't change the folio count - so future accesses of that slot must check
if the slot is occupied.

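Because appending neither checks capacity nor extends the list, a producer
will typically test for fullness and chain a new segment itself.  A hedged
sketch of that pattern follows; the helper name, the ``_tail`` parameter and
the allocation are the caller's own (memory-ordering concerns for a lockless
consumer are glossed over here; see the final section):

```c
/* Append a folio, chaining a fresh segment onto the tail when needed. */
static int my_add_folio(struct folio_queue **_tail, struct folio *folio)
{
	struct folio_queue *tail = *_tail;

	if (folioq_full(tail)) {
		struct folio_queue *next = kmalloc(sizeof(*next), GFP_KERNEL);

		if (!next)
			return -ENOMEM;
		folioq_init(next);
		next->prev = tail;
		tail->next = next;
		*_tail = tail = next;
	}
	folioq_append(tail, folio);
	return 0;
}
```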
Querying information about a folio
==================================

Information about the folio in a particular slot may be queried by the
following function::

	struct folio *folioq_folio(const struct folio_queue *folioq,
				   unsigned int slot);

If a folio has not yet been set in that slot, this may yield an undefined
pointer.  The size of the folio in a slot may be queried with either of::

	unsigned int folioq_folio_order(const struct folio_queue *folioq,
					unsigned int slot);

	size_t folioq_folio_size(const struct folio_queue *folioq,
				 unsigned int slot);

The first function returns the size as an order and the second as a number of
bytes.

Querying information about a folio_queue
========================================

Information may be retrieved about a particular segment with the following
functions::

	unsigned int folioq_nr_slots(const struct folio_queue *folioq);

	unsigned int folioq_count(struct folio_queue *folioq);

	bool folioq_full(struct folio_queue *folioq);

The first function returns the maximum capacity of a segment.  It must not be
assumed that this won't vary between segments.  The second returns the number
of folios added to a segment and the third is a shorthand to indicate if the
segment has been filled to capacity.

Note that the count and fullness are not affected by clearing folios from the
segment.  These are more about indicating how many slots in the array have
been initialised, and it is assumed that slots won't get reused, but rather
the segment will get discarded as the queue is consumed.

Folio marks
===========

Folios within a queue can also have marks assigned to them.  These marks can
be used to note information such as whether a folio needs folio_put() calling
upon it.  There are three marks available to be set for each folio.

The marks can be set by::

	void folioq_mark(struct folio_queue *folioq, unsigned int slot);
	void folioq_mark2(struct folio_queue *folioq, unsigned int slot);
	void folioq_mark3(struct folio_queue *folioq, unsigned int slot);

Cleared by::

	void folioq_unmark(struct folio_queue *folioq, unsigned int slot);
	void folioq_unmark2(struct folio_queue *folioq, unsigned int slot);
	void folioq_unmark3(struct folio_queue *folioq, unsigned int slot);

And the marks can be queried by::

	bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot);
	bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot);
	bool folioq_is_marked3(const struct folio_queue *folioq, unsigned int slot);

The marks can be used for any purpose and are not interpreted by this API.

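For example, if a caller uses the first mark to record that a folio reference
must be dropped, a consumer's cleanup of a spent segment might look like the
following sketch (the helper is hypothetical and the mark's meaning is purely
the caller's convention; nothing in the API interprets it):

```c
/* Drop folio references flagged by the first mark, then free the segment. */
static void my_drop_segment(struct folio_queue *folioq)
{
	unsigned int slot;

	for (slot = 0; slot < folioq_count(folioq); slot++) {
		struct folio *folio = folioq_folio(folioq, slot);

		/* Cleared slots keep their place in the count, so check
		 * occupancy before touching the folio. */
		if (folio && folioq_is_marked(folioq, slot))
			folio_put(folio);
	}
	kfree(folioq);
}
```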
Folio queue iteration
=====================

A list of segments may be iterated over using the I/O iterator facility with
an ``iov_iter`` iterator of ``ITER_FOLIOQ`` type.  The iterator may be
initialised with::

	void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
				  const struct folio_queue *folioq,
				  unsigned int first_slot, unsigned int offset,
				  size_t count);

This may be told to start at a particular segment, slot and offset within a
queue.  The iov iterator functions will follow the next pointers when
advancing and the prev pointers when reverting.

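A sketch of reading out of a queue through the iterator might look like this
(assuming a kernel context; ``buf``, the byte count and the use of
copy_from_iter() as the draining helper are illustrative choices, not
requirements of this API):

```c
struct iov_iter iter;
char buf[256];
size_t copied;

/* Start at the first segment, slot 0, offset 0, treating the queue
 * as the data source. */
iov_iter_folio_queue(&iter, ITER_SOURCE, folioq, 0, 0, sizeof(buf));
copied = copy_from_iter(buf, sizeof(buf), &iter);
```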
Lockless simultaneous production/consumption issues
===================================================

If properly managed, the list can be extended by the producer at the head end
and shortened by the consumer at the tail end simultaneously without the need
to take locks.  The ITER_FOLIOQ iterator inserts appropriate barriers to aid
with this.

Care must be taken when simultaneously producing and consuming a list.  If
the last segment is reached and the folios it refers to are entirely consumed
by the IOV iterators, an iov_iter struct will be left pointing to the last
segment with a slot number equal to the capacity of that segment.  The
iterator will try to continue on from this if there's another segment
available when it is used again, but care must be taken lest the segment got
removed and freed by the consumer before the iterator was advanced.

It is recommended that the queue always contain at least one segment, even if
that segment has never been filled or is entirely spent.  This prevents the
head and tail pointers from collapsing.

API Function Reference
======================

.. kernel-doc:: include/linux/folio_queue.h

fs/afs/afs_vl.h

Lines changed: 0 additions & 9 deletions
@@ -134,13 +134,4 @@ struct afs_uvldbentry__xdr {
 	__be32			spares9;
 };
 
-struct afs_address_list {
-	refcount_t		usage;
-	unsigned int		version;
-	unsigned int		nr_addrs;
-	struct sockaddr_rxrpc	addrs[];
-};
-
-extern void afs_put_address_list(struct afs_address_list *alist);
-
 #endif /* AFS_VL_H */

fs/afs/file.c

Lines changed: 1 addition & 0 deletions
@@ -420,6 +420,7 @@ const struct netfs_request_ops afs_req_ops = {
 	.begin_writeback	= afs_begin_writeback,
 	.prepare_write		= afs_prepare_write,
 	.issue_write		= afs_issue_write,
+	.retry_request		= afs_retry_request,
 };
 
 static void afs_add_open_mmap(struct afs_vnode *vnode)

fs/afs/fs_operation.c

Lines changed: 1 addition & 1 deletion
@@ -201,7 +201,7 @@ void afs_wait_for_operation(struct afs_operation *op)
 		}
 	}
 
-	if (op->call_responded)
+	if (op->call_responded && op->server)
 		set_bit(AFS_SERVER_FL_RESPONDING, &op->server->flags);
 
 	if (!afs_op_error(op)) {

fs/afs/fs_probe.c

Lines changed: 2 additions & 2 deletions
@@ -506,10 +506,10 @@ int afs_wait_for_one_fs_probe(struct afs_server *server, struct afs_endpoint_sta
 	finish_wait(&server->probe_wq, &wait);
 
 dont_wait:
-	if (estate->responsive_set & ~exclude)
-		return 1;
 	if (test_bit(AFS_ESTATE_SUPERSEDED, &estate->flags))
 		return 0;
+	if (estate->responsive_set & ~exclude)
+		return 1;
 	if (is_intr && signal_pending(current))
 		return -ERESTARTSYS;
 	if (timo == 0)

fs/afs/rotate.c

Lines changed: 8 additions & 3 deletions
@@ -632,8 +632,10 @@ bool afs_select_fileserver(struct afs_operation *op)
 wait_for_more_probe_results:
 	error = afs_wait_for_one_fs_probe(op->server, op->estate, op->addr_tried,
 					  !(op->flags & AFS_OPERATION_UNINTR));
-	if (!error)
+	if (error == 1)
 		goto iterate_address;
+	if (!error)
+		goto restart_from_beginning;
 
 	/* We've now had a failure to respond on all of a server's addresses -
 	 * immediately probe them again and consider retrying the server.
@@ -644,10 +646,13 @@ bool afs_select_fileserver(struct afs_operation *op)
 	error = afs_wait_for_one_fs_probe(op->server, op->estate, op->addr_tried,
 					  !(op->flags & AFS_OPERATION_UNINTR));
 	switch (error) {
-	case 0:
+	case 1:
 		op->flags &= ~AFS_OPERATION_RETRY_SERVER;
-		trace_afs_rotate(op, afs_rotate_trace_retry_server, 0);
+		trace_afs_rotate(op, afs_rotate_trace_retry_server, 1);
 		goto retry_server;
+	case 0:
+		trace_afs_rotate(op, afs_rotate_trace_retry_server, 0);
+		goto restart_from_beginning;
 	case -ERESTARTSYS:
 		afs_op_set_error(op, error);
 		goto failed;

fs/cachefiles/namei.c

Lines changed: 3 additions & 4 deletions
@@ -595,14 +595,12 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
 	 * write and readdir but not lookup or open).
 	 */
 	touch_atime(&file->f_path);
-	dput(dentry);
 	return true;
 
 check_failed:
 	fscache_cookie_lookup_negative(object->cookie);
 	cachefiles_unmark_inode_in_use(object, file);
 	fput(file);
-	dput(dentry);
 	if (ret == -ESTALE)
 		return cachefiles_create_file(object);
 	return false;
@@ -611,7 +609,6 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
 	fput(file);
 error:
 	cachefiles_do_unmark_inode_in_use(object, d_inode(dentry));
-	dput(dentry);
 	return false;
 }
 
@@ -654,7 +651,9 @@ bool cachefiles_look_up_object(struct cachefiles_object *object)
 		goto new_file;
 	}
 
-	if (!cachefiles_open_file(object, dentry))
+	ret = cachefiles_open_file(object, dentry);
+	dput(dentry);
+	if (!ret)
 		return false;
 
 	_leave(" = t [%lu]", file_inode(object->file)->i_ino);

fs/netfs/buffered_read.c

Lines changed: 4 additions & 4 deletions
@@ -646,7 +646,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 	if (unlikely(always_fill)) {
 		if (pos - offset + len <= i_size)
 			return false; /* Page entirely before EOF */
-		zero_user_segment(&folio->page, 0, plen);
+		folio_zero_segment(folio, 0, plen);
 		folio_mark_uptodate(folio);
 		return true;
 	}
@@ -665,7 +665,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 
 	return false;
 zero_out:
-	zero_user_segments(&folio->page, 0, offset, offset + len, plen);
+	folio_zero_segments(folio, 0, offset, offset + len, plen);
 	return true;
 }
 
@@ -732,7 +732,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	if (folio_test_uptodate(folio))
 		goto have_folio;
 
-	/* If the page is beyond the EOF, we want to clear it - unless it's
+	/* If the folio is beyond the EOF, we want to clear it - unless it's
 	 * within the cache granule containing the EOF, in which case we need
 	 * to preload the granule.
 	 */
@@ -792,7 +792,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 EXPORT_SYMBOL(netfs_write_begin);
 
 /*
- * Preload the data into a page we're proposing to write into.
+ * Preload the data into a folio we're proposing to write into.
 */
 int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 			     size_t offset, size_t len)
