OSDN Git Service

ANDROID: Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"
author    Vikram Mulukutla <markivx@codeaurora.org>
          Fri, 22 Sep 2017 00:24:24 +0000 (17:24 -0700)
committer Amit Pundir <amit.pundir@linaro.org>
          Tue, 14 Aug 2018 12:17:11 +0000 (17:47 +0530)
commit    fd256281ef3122e91cc2ddee9efb756cfab81700
tree      0dc4314165e806573ca779f9ad774fdc67d3c8ae
parent    0bce8cac5ff9ffe65f64c04ae351d34c000ac3d1
ANDROID: Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"

This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.

If we re-initialize the per-cpu boostgroup spinlock every time a
new boosted cgroup is added, we can easily wipe out (re-init) a
spinlock struct while another CPU is inside a critical section.
Adding a cgroup should only set up the per-cpu boostgroup data;
the spin_lock initialization needs to happen only once, and we
already do that in a postcore_initcall.

For example:

     -------- CPU 0 --------   | -------- CPU 1 --------
cgroupX boost group added      |
schedtune_enqueue_task         |
  acquires(bg->lock)           | cgroupY boost group added
                               |  for_each_cpu()
                               |    raw_spin_lock_init(bg->lock)
  releases(bg->lock)           |
      BUG (already unlocked)   |
                               |

This results in the following BUG from the debug spinlock code:
BUG: spinlock already unlocked on CPU#5, rcuop/6/68

Bug: 32668852

Change-Id: I3016702780b461a0cd95e26c538cd18df27d6316
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
kernel/sched/tune.c