
sched/core: Distribute tasks within affinity masks
author     Paul Turner <pjt@google.com>
           Wed, 11 Mar 2020 01:01:13 +0000 (18:01 -0700)
committer  Peter Zijlstra <peterz@infradead.org>
           Fri, 20 Mar 2020 12:06:18 +0000 (13:06 +0100)
commit     46a87b3851f0d6eb05e6d83d5c5a30df0eca8f76
tree       b98cf01a4a12e708f063c07788f7a3a7b41a93f2
parent     fe61468b2cbc2b7ce5f8d3bf32ae5001d4c434e9

Currently, when updating the affinity of tasks via either cpuset.cpus or
sched_setaffinity(), tasks not currently running within the newly
specified mask will be arbitrarily assigned to the first CPU within the
mask.

This (particularly in the case that we are restricting masks) can
result in many tasks being assigned to the first CPUs of their new
masks.

This:
 1) Can induce scheduling delays while the load-balancer has a chance to
    spread them between their new CPUs.
 2) Can aggravate poor load-balancer behavior, where the balancer has a
    difficult time recognizing that a cross-socket imbalance has been
    forced by an affinity mask.

This change adds a new cpumask interface so that repeated calls
distribute the selected CPUs across the intersection of the provided
masks, rather than always returning the first one.

The cases that this mainly affects are:
 - modifying cpuset.cpus
 - when tasks join a cpuset
 - when modifying a task's affinity via sched_setaffinity(2)

Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Josh Don <joshdon@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Tested-by: Qais Yousef <qais.yousef@arm.com>
Link: https://lkml.kernel.org/r/20200311010113.136465-1-joshdon@google.com
include/linux/cpumask.h
kernel/sched/core.c
lib/cpumask.c