
Commit 774e7cb

kwachows authored and jlawryno committed
accel/ivpu: Add dma fence to command buffers only
Currently job->done_fence is added to every BO handle within a job. If a job handle (command buffer) is shared between multiple submits, the KMD will add the fence to each of them. A bo_wait_ioctl() executed on the command buffer will then exit only when all jobs containing that handle are done.

This creates a deadlock scenario for the user mode driver when a job handle is added as a dependency of another job: bo_wait_ioctl() on the first job waits until the second job finishes, while the second job cannot finish before the first one.

Adding fences only to the job buffer handle allows user space to execute bo_wait_ioctl() on the job even if its handle is submitted with another job.

Fixes: cd72722 ("accel/ivpu: Add command buffer submission logic")
Signed-off-by: Karol Wachowski <[email protected]>
Signed-off-by: Stanislaw Gruszka <[email protected]>
Reviewed-by: Jeffrey Hugo <[email protected]>
Signed-off-by: Jacek Lawrynowicz <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
1 parent 764a2ab commit 774e7cb

File tree

1 file changed: +7, −11 lines


drivers/accel/ivpu/ivpu_job.c

Lines changed: 7 additions & 11 deletions
@@ -461,26 +461,22 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32
 
 	job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset;
 
-	ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count,
-					&acquire_ctx);
+	ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
 	if (ret) {
 		ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret);
 		return ret;
 	}
 
-	for (i = 0; i < buf_count; i++) {
-		ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1);
-		if (ret) {
-			ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
-			goto unlock_reservations;
-		}
+	ret = dma_resv_reserve_fences(bo->base.resv, 1);
+	if (ret) {
+		ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
+		goto unlock_reservations;
 	}
 
-	for (i = 0; i < buf_count; i++)
-		dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
+	dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
 
 unlock_reservations:
-	drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx);
+	drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
 
 	wmb(); /* Flush write combining buffers */

0 commit comments