File linux-2.6.33-fix-wake-affine.patch of Package kernel
From: Suresh Siddha <suresh.b.siddha@intel.com>
Subject: sched: use the idle sibling cpu for the wake affine decisions
Patch-mainline: 2.6.35?

During process wakeup, select_task_rq_fair() and wake_affine() decide
whether to wake the task up on the cpu it previously ran on or on the cpu
it is currently being woken up on. select_task_rq_fair() also checks
whether there are any idle siblings of the cpu the task is woken up on,
to ensure that we select an idle sibling rather than a busy cpu.

But the wake_affine() call in select_task_rq_fair() makes its wake-affine
decision based on the cpu the wakeup arrived on (instead of using the idle
sibling), thus waking the task up on a busy thread rather than an idle
thread.

This issue was introduced by the commit:

| commit c88d5910890ad35af283344417891344604f0438
| Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
| Date:   Thu Sep 10 13:50:02 2009 +0200
|
|     sched: Merge select_task_rq_fair() and sched_balance_self()

Because of this we have seen > 5% performance regressions on single cpu
SMT systems in certain workloads.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: stable@kernel.org [2.6.32.x, 2.6.33.y]
---
 kernel/sched_fair.c |   15 +++++++++++----
 1 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 5a5ea2c..e858c15 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1238,11 +1238,16 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
 
 #endif
 
-static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
+/*
+ * Check if we can wake the task on 'this_cpu' rather than the cpu that it
+ * previously ran.
+ */
+static int wake_affine(struct sched_domain *sd, struct task_struct *p,
+		       int this_cpu, int sync)
 {
 	struct task_struct *curr = current;
 	unsigned long this_load, load;
-	int idx, this_cpu, prev_cpu;
+	int idx, prev_cpu;
 	unsigned long tl_per_task;
 	unsigned int imbalance;
 	struct task_group *tg;
@@ -1250,11 +1255,13 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	int balanced;
 
 	idx       = sd->wake_idx;
-	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
 	load      = source_load(prev_cpu, idx);
 	this_load = target_load(this_cpu, idx);
 
+	if (prev_cpu == this_cpu)
+		return 1;
+
 	if (sync) {
 		if (sched_feat(SYNC_LESS) &&
 		    (curr->se.avg_overlap > sysctl_sched_migration_cost ||
@@ -1545,7 +1552,7 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		update_shares(tmp);
 	}
 
-	if (affine_sd && wake_affine(affine_sd, p, sync))
+	if (affine_sd && wake_affine(affine_sd, p, cpu, sync))
 		return cpu;
 
 	while (sd) {