
Commit ff0dac0

Jonathan Cavitt authored and Andi Shyti committed
drm/i915/guc: Add CT size delay helper
As of now, there is no mechanism for tracking a given request's progress through the queue. Instead, add a helper that returns an estimated maximum time the queue should take to drain if completely full.

Suggested-by: John Harrison <[email protected]>
Signed-off-by: Jonathan Cavitt <[email protected]>
Reviewed-by: Andi Shyti <[email protected]>
Acked-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Nirmoy Das <[email protected]>
Reviewed-by: John Harrison <[email protected]>
Signed-off-by: Andi Shyti <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
1 parent 29e6683 commit ff0dac0

File tree: 2 files changed (+29 / -0 lines)

drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c

Lines changed: 27 additions & 0 deletions
@@ -103,6 +103,33 @@ enum { CTB_SEND = 0, CTB_RECV = 1 };
 
 enum { CTB_OWNER_HOST = 0 };
 
+/*
+ * Some H2G commands involve a synchronous response that the driver needs
+ * to wait for. In such cases, a timeout is required to prevent the driver
+ * from waiting forever in the case of an error (either no error response
+ * is defined in the protocol or something has died and requires a reset).
+ * The specific command may be defined as having a time bound response but
+ * the CT is a queue and that time guarantee only starts from the point
+ * when the command reaches the head of the queue and is processed by GuC.
+ *
+ * Ideally there would be a helper to report the progress of a given
+ * command through the CT. However, that would require a significant
+ * amount of work in the CT layer. In the meantime, provide a reasonable
+ * estimation of the worst case latency it should take for the entire
+ * queue to drain. And therefore, how long a caller should wait before
+ * giving up on their request. The current estimate is based on empirical
+ * measurement of a test that fills the buffer with context creation and
+ * destruction requests as they seem to be the slowest operation.
+ */
+long intel_guc_ct_max_queue_time_jiffies(void)
+{
+	/*
+	 * A 4KB buffer full of context destroy commands takes a little
+	 * over a second to process so bump that to 2s to be super safe.
+	 */
+	return (CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;
+}
+
 static void ct_receive_tasklet_func(struct tasklet_struct *t);
 static void ct_incoming_request_worker_func(struct work_struct *w);
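For reference, the returned value works out to two seconds worth of jiffies: CTB_H2G_BUFFER_SIZE is defined as SZ_4K elsewhere in intel_guc_ct.c (consistent with the "4KB buffer" comment above), so (SZ_4K * HZ) / SZ_2K == 2 * HZ. A standalone sketch of that arithmetic, with HZ fixed at an illustrative 250 ticks per second (the real value is a kernel configuration choice):

#include <stdio.h>

#define SZ_2K 0x800
#define SZ_4K 0x1000
#define CTB_H2G_BUFFER_SIZE SZ_4K	/* assumption: matches intel_guc_ct.c */
#define HZ 250				/* illustrative tick rate only */

int main(void)
{
	long timeout = ((long)CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;

	/* (4K / 2K) * HZ = 2 * HZ: two seconds expressed in jiffies. */
	printf("max queue time: %ld jiffies (%ld s)\n", timeout, timeout / HZ);
	return 0;
}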

drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h

Lines changed: 2 additions & 0 deletions
@@ -104,6 +104,8 @@ struct intel_guc_ct {
 #endif
 };
 
+long intel_guc_ct_max_queue_time_jiffies(void);
+
 void intel_guc_ct_init_early(struct intel_guc_ct *ct);
 int intel_guc_ct_init(struct intel_guc_ct *ct);
 void intel_guc_ct_fini(struct intel_guc_ct *ct);
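A hedged sketch of how a caller might consume the new declaration: bound a wait for a synchronous CT reply by the worst-case drain estimate. The wait queue wq, the reply_seen flag, and ret are illustrative assumptions, not identifiers from this commit:

/* Illustrative caller only; wq, reply_seen and ret are assumed to exist
 * in the surrounding function.
 */
long timeout = intel_guc_ct_max_queue_time_jiffies();

/* wait_event_timeout() returns 0 if the condition never became true. */
if (!wait_event_timeout(wq, READ_ONCE(reply_seen), timeout))
	ret = -ETIMEDOUT;	/* queue should have fully drained by now */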
