
UPSTREAM: sched/fair: Fix effective_load() to consistently use smoothed load
author Peter Zijlstra <peterz@infradead.org>
Fri, 24 Jun 2016 13:53:54 +0000 (15:53 +0200)
committer Andres Oportus <andresoportus@google.com>
Fri, 2 Jun 2017 15:01:54 +0000 (08:01 -0700)
commit 640c909c3470b8b8882676754485d5ee0ccccf7d
tree 5685feabdedd4e6bfeca8d82a93f3f9aed71e153
parent 89e4d18a6712b6452c577776abb8536913b6c1fb

Starting with the following commit:

  fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")

calc_tg_weight() doesn't compute the right value as expected by effective_load().

The difference is in the 'correction' term. In order to ensure \Sum
rw_j >= rw_i we cannot use tg->load_avg directly, since that might be
lagging a correction on the current cfs_rq->avg.load_avg value.
Therefore we use tg->load_avg - cfs_rq->tg_load_avg_contrib +
cfs_rq->avg.load_avg.

Now, per the referenced commit, calc_tg_weight() doesn't use
cfs_rq->avg.load_avg, which is what @w later uses, but uses
cfs_rq->load.weight instead.

So stop using calc_tg_weight() and do it explicitly.

The effect of this bug is that wake_affine() makes randomly
poor choices in cgroup-intensive workloads.

Change-Id: I1c0058ff674650cf295c8dc3b88a5a3de4bddab0
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 7dd4912594daf769a46744848b05bd5bc6d62469)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
kernel/sched/fair.c