ANDROID: sched/fair: initialise util_est values to 0 on fork
author Chris Redpath <chris.redpath@arm.com>
Tue, 23 Oct 2018 16:43:34 +0000 (17:43 +0100)
committer 0ranko0P <ranko0p@outlook.com>
Tue, 24 Dec 2019 20:42:35 +0000 (04:42 +0800)
Since "sched/fair: Align PELT windows between cfs_rq and its se" the
upstream kernel has initialised the whole content of sched_avg to zero
on fork. When util_est was backported, we missed this and so ended up
with util_est values copied from the parent task.

Add the zero initialisation that is present upstream so that util_est
values always start from a known point.

Fixes: 700f1172f7a7 ("BACKPORT: sched/fair: Add util_est on top of PELT")
Reported-by: Puja Gupta <pujag@quicinc.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Todd Kjos <tkjos@google.com>
Cc: Saravana Kannan <skannan@codeaurora.org>
Change-Id: I06995e4320d606a52761d0e773baf28fcd1e2680
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
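
For context: fork duplicates the parent's task_struct wholesale, so any
sched_avg field that init_entity_runnable_average() does not explicitly
reset is inherited from the parent. A minimal, stand-alone user-space
sketch of that effect, using simplified placeholder types rather than the
real kernel definitions:

/*
 * Stand-alone sketch (not the kernel's real types): shows how a child
 * inherits the parent's util_est when only last_update_time is reset,
 * and how zeroing the whole struct avoids that.
 */
#include <stdio.h>
#include <string.h>

struct util_est  { unsigned int enqueued, ewma; };
struct sched_avg {
	unsigned long long	last_update_time;
	unsigned long		util_avg;
	struct util_est		util_est;
};

static void init_old(struct sched_avg *sa)	/* pre-fix behaviour */
{
	sa->last_update_time = 0;
}

static void init_new(struct sched_avg *sa)	/* behaviour after this patch */
{
	memset(sa, 0, sizeof(*sa));
}

int main(void)
{
	struct sched_avg parent = {
		.util_avg  = 512,
		.util_est  = { .enqueued = 300, .ewma = 280 },
	};
	struct sched_avg child = parent;	/* fork copies the parent's state */

	init_old(&child);
	printf("old: util_est.ewma = %u (leaked from parent)\n", child.util_est.ewma);

	child = parent;
	init_new(&child);
	printf("new: util_est.ewma = %u (known starting point)\n", child.util_est.ewma);
	return 0;
}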
kernel/sched/fair.c

index ffc2c71..d2915f6 100644
@@ -727,8 +727,10 @@ void init_entity_runnable_average(struct sched_entity *se)
 {
        struct sched_avg *sa = &se->avg;
 
-       sa->last_update_time = 0;
+       memset(sa, 0, sizeof(*sa));
        /*
+        * util_avg is initialized in post_init_entity_util_avg.
+        * util_est should start from zero.
         * sched_avg's period_contrib should be strictly less then 1024, so
         * we give it 1023 to make sure it is almost a period (1024us), and
         * will definitely be update (after enqueue).
@@ -743,18 +745,6 @@ void init_entity_runnable_average(struct sched_entity *se)
        if (entity_is_task(se))
                sa->load_avg = scale_load_down(se->load.weight);
        sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
-       /*
-        * In previous Android versions, we used to have:
-        *      sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
-        *      sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
-        * However, that functionality has been moved to enqueue.
-        * It is unclear if we should restore this in enqueue.
-        */
-       /*
-        * At this point, util_avg won't be used in select_task_rq_fair anyway
-        */
-       sa->util_avg = 0;
-       sa->util_sum = 0;
        /* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
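
The comment added in the hunk above points at the division of labour:
init_entity_runnable_average() now zeroes the whole sched_avg, util_est
included, while util_avg is seeded later in post_init_entity_util_avg()
once the task is attached to a runqueue. A simplified model of that split,
again with placeholder types and an illustrative seeding rule that merely
stands in for the kernel's real capping logic:

/* Simplified model -- placeholder types; the seed below is illustrative only. */
#include <string.h>

struct util_est  { unsigned int enqueued, ewma; };
struct sched_avg {
	unsigned long long	last_update_time;
	unsigned long		load_avg, util_avg;
	struct util_est		util_est;
};

/* Step 1, at fork: everything starts from zero, util_est included. */
static void model_init_entity_runnable_average(struct sched_avg *sa)
{
	memset(sa, 0, sizeof(*sa));
}

/*
 * Step 2, once attached to a cfs_rq: util_avg gets a starting estimate
 * derived from the runqueue's current utilisation; util_est stays at zero
 * until the task has actually run and been dequeued.
 */
static void model_post_init_entity_util_avg(struct sched_avg *sa,
					    unsigned long rq_util_avg)
{
	sa->util_avg = rq_util_avg / 2;	/* stand-in for the real formula */
}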