
Commit a8f23dd

Cao-Wuhui authored and tehcaster committed
mm/slab.c: fix comments
While reading the source code, I noticed some language errors in the comments, so I fixed them.

Signed-off-by: Yixuan Cao <[email protected]>
Acked-by: Hyeonggon Yoo <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent a285909 commit a8f23dd

1 file changed: +6 -6 lines changed


mm/slab.c

Lines changed: 6 additions & 6 deletions
@@ -781,7 +781,7 @@ static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
 	int slab_node = slab_nid(virt_to_slab(objp));
 	int node = numa_mem_id();
 	/*
-	 * Make sure we are not freeing a object from another node to the array
+	 * Make sure we are not freeing an object from another node to the array
 	 * cache on this cpu.
 	 */
 	if (likely(node == slab_node))
@@ -832,7 +832,7 @@ static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
 
 	/*
 	 * The kmem_cache_nodes don't come and go as CPUs
-	 * come and go. slab_mutex is sufficient
+	 * come and go. slab_mutex provides sufficient
 	 * protection here.
 	 */
 	cachep->node[node] = n;
@@ -845,7 +845,7 @@ static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
  * Allocates and initializes node for a node on each slab cache, used for
  * either memory or cpu hotplug. If memory is being hot-added, the kmem_cache_node
  * will be allocated off-node since memory is not yet online for the new node.
- * When hotplugging memory or a cpu, existing node are not replaced if
+ * When hotplugging memory or a cpu, existing nodes are not replaced if
  * already in use.
  *
  * Must hold slab_mutex.
@@ -1046,7 +1046,7 @@ int slab_prepare_cpu(unsigned int cpu)
  * offline.
  *
  * Even if all the cpus of a node are down, we don't free the
- * kmem_cache_node of any cache. This to avoid a race between cpu_down, and
+ * kmem_cache_node of any cache. This is to avoid a race between cpu_down, and
  * a kmalloc allocation from another cpu for memory from the node of
  * the cpu going down. The kmem_cache_node structure is usually allocated from
  * kmem_cache_create() and gets destroyed at kmem_cache_destroy().
@@ -1890,7 +1890,7 @@ static bool set_on_slab_cache(struct kmem_cache *cachep,
  * @flags: SLAB flags
  *
  * Returns a ptr to the cache on success, NULL on failure.
- * Cannot be called within a int, but can be interrupted.
+ * Cannot be called within an int, but can be interrupted.
  * The @ctor is run when new pages are allocated by the cache.
  *
  * The flags are
@@ -3138,7 +3138,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 }
 
 /*
- * A interface to enable slab creation on nodeid
+ * An interface to enable slab creation on nodeid
  */
 static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 				int nodeid)
