Commit 24af523

LuizOt31 authored and rodrigovivi committed
drm/i915/gem: Clean-up outdated struct_mutex comments
The struct_mutex will be removed from the DRM subsystem, as it was a legacy BKL that was only used by the i915 driver. After review, it was concluded that its usage was no longer necessary.

This patch updates various comments in the i915/gem and i915/gt codebase to either remove or clarify references to struct_mutex, in order to prevent future misunderstandings.

* i915_gem_execbuffer.c: Replace the reference to struct_mutex with vm->mutex, as noted in the eb_reserve() function, which states that vm->mutex handles deadlocks.

* i915_gem_object.c: Replace struct_mutex with drm_i915_gem_object->vma.lock. i915_gem_object_unbind() in i915_gem.c states that this lock is what actually protects the unbind.

* i915_gem_shrinker.c: The correct lock is actually i915->mm.obj_lock, as already documented in its declaration.

* i915_gem_wait.c: The existing comment already mentioned that struct_mutex was no longer necessary. Updated to refer to a generic global lock instead.

* intel_reset_types.h: Cleaned up the comment text. Updated to refer to a generic global lock instead.

Signed-off-by: Luiz Otavio Mello <[email protected]>
Reviewed-by: Rodrigo Vivi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Acked-by: Tvrtko Ursulin <[email protected]>
Signed-off-by: Rodrigo Vivi <[email protected]>
1 parent 1bd3db8 commit 24af523

File tree

5 files changed: +10, -10 lines changed


drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c

Lines changed: 1 addition & 1 deletion

@@ -182,7 +182,7 @@ enum {
  * the object. Simple! ... The relocation entries are stored in user memory
  * and so to access them we have to copy them into a local buffer. That copy
  * has to avoid taking any pagefaults as they may lead back to a GEM object
- * requiring the struct_mutex (i.e. recursive deadlock). So once again we split
+ * requiring the vm->mutex (i.e. recursive deadlock). So once again we split
  * the relocation into multiple passes. First we try to do everything within an
  * atomic context (avoid the pagefaults) which requires that we never wait. If
  * we detect that we may wait, or if we need to fault, then we have to fallback

drivers/gpu/drm/i915/gem/i915_gem_object.c

Lines changed: 2 additions & 2 deletions

@@ -459,8 +459,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	atomic_inc(&i915->mm.free_count);

 	/*
-	 * Since we require blocking on struct_mutex to unbind the freed
-	 * object from the GPU before releasing resources back to the
+	 * Since we require blocking on drm_i915_gem_object->vma.lock to unbind
+	 * the freed object from the GPU before releasing resources back to the
 	 * system, we can not do that directly from the RCU callback (which may
 	 * be a softirq context), but must instead then defer that work onto a
 	 * kthread. We use the RCU callback rather than move the freed object

drivers/gpu/drm/i915/gem/i915_gem_shrinker.c

Lines changed: 2 additions & 2 deletions

@@ -170,7 +170,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,
 	 * Also note that although these lists do not hold a reference to
 	 * the object we can safely grab one here: The final object
 	 * unreferencing and the bound_list are both protected by the
-	 * dev->struct_mutex and so we won't ever be able to observe an
+	 * i915->mm.obj_lock and so we won't ever be able to observe an
 	 * object on the bound_list with a reference count equals 0.
 	 */
 	for (phase = phases; phase->list; phase++) {
@@ -185,7 +185,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,

 	/*
 	 * We serialize our access to unreferenced objects through
-	 * the use of the struct_mutex. While the objects are not
+	 * the use of the obj_lock. While the objects are not
 	 * yet freed (due to RCU then a workqueue) we still want
 	 * to be able to shrink their pages, so they remain on
 	 * the unbound/bound list until actually freed.

drivers/gpu/drm/i915/gem/i915_gem_wait.c

Lines changed: 4 additions & 4 deletions

@@ -222,10 +222,10 @@ static unsigned long to_wait_timeout(s64 timeout_ns)
  *
  * The wait ioctl with a timeout of 0 reimplements the busy ioctl. With any
  * non-zero timeout parameter the wait ioctl will wait for the given number of
- * nanoseconds on an object becoming unbusy. Since the wait itself does so
- * without holding struct_mutex the object may become re-busied before this
- * function completes. A similar but shorter * race condition exists in the busy
- * ioctl
+ * nanoseconds on an object becoming unbusy. Since the wait occurs without
+ * holding a global or exclusive lock the object may become re-busied before
+ * this function completes. A similar but shorter * race condition exists
+ * in the busy ioctl
  */
 int
 i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)

drivers/gpu/drm/i915/gt/intel_reset_types.h

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ struct intel_reset {
 	 * FENCE registers).
 	 *
 	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
-	 * acquire the struct_mutex to reset an engine, we need an explicit
+	 * acquire a global lock to reset an engine, we need an explicit
 	 * flag to prevent two concurrent reset attempts in the same engine.
 	 * As the number of engines continues to grow, allocate the flags from
 	 * the most significant bits.
