arm_pmu: explicitly enable/disable SPIs at hotplug
author Mark Rutland <mark.rutland@arm.com>
Mon, 5 Feb 2018 16:42:00 +0000 (16:42 +0000)
committer Will Deacon <will.deacon@arm.com>
Tue, 20 Feb 2018 11:34:54 +0000 (11:34 +0000)
To support ACPI systems, we need to request IRQs before CPUs are
hotplugged, and thus we need to request IRQs before we know their
associated PMU.

This is problematic if a PMU IRQ is pending out of reset, as it may be
taken before we know the PMU, and thus the IRQ handler won't be able to
handle it, leaving it screaming.

To avoid such problems, let's request all IRQs in a disabled state, and
explicitly enable/disable them at hotplug time, when we're sure the PMU
has been probed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
drivers/perf/arm_pmu.c

index ddcabd6..72118e6 100644
@@ -558,6 +558,7 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
                            IRQF_NOBALANCING |
                            IRQF_NO_THREAD;
 
+               irq_set_status_flags(irq, IRQ_NOAUTOEN);
                err = request_irq(irq, handler, irq_flags, "arm-pmu",
                                  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
        } else if (cpumask_empty(&armpmu->active_irqs)) {
@@ -600,10 +601,10 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 
        irq = armpmu_get_cpu_irq(pmu, cpu);
        if (irq) {
-               if (irq_is_percpu_devid(irq)) {
+               if (irq_is_percpu_devid(irq))
                        enable_percpu_irq(irq, IRQ_TYPE_NONE);
-                       return 0;
-               }
+               else
+                       enable_irq(irq);
        }
 
        return 0;
@@ -618,8 +619,12 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
                return 0;
 
        irq = armpmu_get_cpu_irq(pmu, cpu);
-       if (irq && irq_is_percpu_devid(irq))
-               disable_percpu_irq(irq);
+       if (irq) {
+               if (irq_is_percpu_devid(irq))
+                       disable_percpu_irq(irq);
+               else
+                       disable_irq(irq);
+       }
 
        return 0;
 }
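
Taken together, the hunks above follow a simple pattern: mark the line IRQ_NOAUTOEN
before request_irq() so it stays masked even if an interrupt is pending out of reset,
then pair enable_irq()/disable_irq() in the CPU hotplug callbacks once the PMU is
known. A minimal standalone sketch of that pattern, assuming a hypothetical device
IRQ my_irq and handler my_handler (neither is part of the arm_pmu code above):

#include <linux/interrupt.h>
#include <linux/irq.h>

static irqreturn_t my_handler(int irq, void *dev_id)
{
	/* Only runs once the hotplug callback has enabled the line. */
	return IRQ_HANDLED;
}

static int my_request_irq(unsigned int my_irq, void *dev_id)
{
	/*
	 * Keep the line masked across request_irq(): an interrupt
	 * pending out of reset cannot fire before probing is done.
	 */
	irq_set_status_flags(my_irq, IRQ_NOAUTOEN);
	return request_irq(my_irq, my_handler,
			   IRQF_NOBALANCING | IRQF_NO_THREAD,
			   "my-dev", dev_id);
}

/* CPU hotplug "starting" callback: the device is fully probed here. */
static void my_cpu_starting(unsigned int my_irq)
{
	enable_irq(my_irq);
}

/* CPU hotplug teardown callback: mask the line again. */
static void my_cpu_teardown(unsigned int my_irq)
{
	disable_irq(my_irq);
}

Note that enable_irq()/disable_irq() apply only to the normal SPI case; as the second
and third hunks show, per-CPU (percpu_devid) interrupts keep using
enable_percpu_irq()/disable_percpu_irq().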