From: Gaurav Jindal
Date: Thu, 14 Jul 2016 12:04:20 +0000 (+0000)
Subject: tick/nohz: Optimize nohz idle enter
X-Git-Url: http://git.osdn.net/view?a=commitdiff_plain;h=4505e1cb8a597e9d47ace1533125695762d35816;p=sagit-ice-cold%2Fkernel_xiaomi_msm8998.git

tick/nohz: Optimize nohz idle enter

tick_nohz_start_idle() is called before checking whether the idle tick
can be stopped. If the tick cannot be stopped, calling
tick_nohz_start_idle() is pointless and wastes CPU cycles.

Only invoke tick_nohz_start_idle() when can_stop_idle_tick() returns
true. A short, one-minute observation of the effect on ARM64 shows a
1.5% reduction in calls, optimizing the idle-entry sequence.

[tglx: Massaged changelog]

Co-developed-by: Sanjeev Yadav
Signed-off-by: Gaurav Jindal
Link: http://lkml.kernel.org/r/20160714120416.GB21099@gaurav.jindal@spreadtrum.com
Signed-off-by: Thomas Gleixner
Signed-off-by: Francisco Franco
Signed-off-by: kdrag0n
---

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index d675f8b06110..01b279e64dad 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -823,8 +823,6 @@ static void __tick_nohz_idle_enter(struct tick_sched *ts)
 	ktime_t now, expires;
 	int cpu = smp_processor_id();
 
-	now = tick_nohz_start_idle(ts);
-
 #ifdef CONFIG_SMP
 	if (check_pending_deferrable_timers(cpu))
 		raise_softirq_irqoff(TIMER_SOFTIRQ);
@@ -833,6 +831,7 @@ static void __tick_nohz_idle_enter(struct tick_sched *ts)
 	if (can_stop_idle_tick(cpu, ts)) {
 		int was_stopped = ts->tick_stopped;
 
+		now = tick_nohz_start_idle(ts);
 		ts->idle_calls++;
 
 		expires = tick_nohz_stop_sched_tick(ts, now, cpu);
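The reordering above can be sketched as a standalone model. This is not kernel code: `start_idle()` and `can_stop_tick()` are hypothetical stand-ins for `tick_nohz_start_idle()` and `can_stop_idle_tick()`, and the counter only demonstrates that the new ordering skips the setup call whenever the tick cannot be stopped.

```c
/* Simplified model of the idle-entry reordering in this patch.
 * Assumption: start_idle() / can_stop_tick() are illustrative stand-ins,
 * not the real kernel helpers.
 */
#include <stdbool.h>

static int start_idle_calls;

/* Stand-in for tick_nohz_start_idle(): record that we timestamped entry. */
static void start_idle(void)
{
	start_idle_calls++;
}

/* Stand-in for can_stop_idle_tick(): the caller supplies the verdict. */
static bool can_stop_tick(bool tick_stoppable)
{
	return tick_stoppable;
}

/* Old ordering: timestamp idle entry unconditionally, then check. */
static void idle_enter_old(bool tick_stoppable)
{
	start_idle();
	if (can_stop_tick(tick_stoppable)) {
		/* ... stop the sched tick ... */
	}
}

/* New ordering: timestamp only when the tick will actually be stopped. */
static void idle_enter_new(bool tick_stoppable)
{
	if (can_stop_tick(tick_stoppable)) {
		start_idle();
		/* ... stop the sched tick ... */
	}
}
```

With the old ordering every idle entry pays for `start_idle()`; with the new one, entries where the tick cannot be stopped skip it entirely, which is where the observed 1.5% reduction in calls comes from.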