
drm/scheduler: fix inconsistent locking of job_list_lock
author Lucas Stach <l.stach@pengutronix.de>
Mon, 20 Jan 2020 10:51:19 +0000 (11:51 +0100)
committer Alex Deucher <alexander.deucher@amd.com>
Fri, 13 Mar 2020 15:52:36 +0000 (11:52 -0400)
commit a7fbb630c5485f5095146df46f04c2ca1a24c299
tree b96cf91afb5c74ceac9c1ebbd59585b7dc8eb741
parent c2c91828fbdbc5a31616f956834c85ab011392e1
drm/scheduler: fix inconsistent locking of job_list_lock

1db8c142b6c5 (drm/scheduler: Add drm_sched_suspend/resume_timeout()) made
the job_list_lock IRQ safe, as the suspend/resume calls were expected to
be called from IRQ context. This usage never materialized upstream.
Instead amdgpu started locking the job_list_lock in an IRQ unsafe way in
amdgpu_ib_preempt_mark_partial_job() and amdgpu_ib_preempt_job_recovery(),
which could lead to a deadlock if the drm_sched_suspend/resume_timeout
functions were ever actually called from IRQ context.
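
The hazard is the usual mixed-context spinlock pattern. A minimal sketch
of the hypothetical failure mode, assuming an IRQ-context caller of the
suspend/resume helpers had ever appeared (it never did upstream):

    /* process context: amdgpu IB preemption path takes the lock
     * without disabling interrupts
     */
    spin_lock(&sched->job_list_lock);
        ...
        /* IRQ fires on the same CPU; a hypothetical handler calls */
        drm_sched_suspend_timeout(sched);
            /* spins on a lock this CPU already holds -> deadlock */
            spin_lock_irqsave(&sched->job_list_lock, flags);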

As no current user needs the locking to be IRQ safe, the local IRQ
disable/enable is pure overhead. Fix the inconsistent locking by switching
all uses of the job_list_lock to the IRQ unsafe locking primitives.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drivers/gpu/drm/scheduler/sched_main.c
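
The change to sched_main.c is then a mechanical switch of the primitives
used on job_list_lock. A sketch of the pattern with a generic critical
section (the actual commit touches several call sites in the file, not
this exact hunk):

    /* before: IRQ safe, with the flags save/restore overhead */
    unsigned long flags;

    spin_lock_irqsave(&sched->job_list_lock, flags);
    /* walk or modify the list of in-flight jobs */
    spin_unlock_irqrestore(&sched->job_list_lock, flags);

    /* after: plain spinlock, sufficient for all current callers */
    spin_lock(&sched->job_list_lock);
    /* walk or modify the list of in-flight jobs */
    spin_unlock(&sched->job_list_lock);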