File 5742fa17-sched-avoid-races-on-time-values-read-from-NOW.patch of Package xen.7317

# Commit 779511f4bf5ae34820a85e4eb20d50c60f69e977
# Date 2016-05-23 14:39:51 +0200
# Author Dario Faggioli <dario.faggioli@citrix.com>
# Committer Jan Beulich <jbeulich@suse.com>
sched: avoid races on time values read from NOW()

or (even in cases where there is no race, e.g., outside
of Credit2) avoid using a time sample which may be rather
old, and hence stale.

In fact, we should only sample NOW() from _inside_
the critical region within which the value we read is
used. If we don't, and we have to spin for a while
before entering the region, then by the time we
actually use the value:

 1) we will use something that, at the very least, is
    not really "now", because of the spinning,

 2) if someone else sampled NOW() during a critical
    region protected by the lock we are spinning on,
    and if we compare the two samples when we get
    inside our region, our one will be 'earlier',
    even if we actually arrived later, which is a
    race.

In Credit2, we see an instance of 2), in runq_tickle(),
when it is called by csched2_context_saved() as it samples
NOW() before acquiring the runq lock. This makes things
look like the time went backwards, and it confuses the
algorithm (there's even a d2printk() about it, which would
trigger all the time, if enabled).

In RTDS, something similar happens in repl_timer_handler(),
and there's another instance in schedule() (in generic code),
so fix these cases too.

While there, improve csched2_vcpu_wake() and rt_vcpu_wake()
a little as well (removing a pointless initialization, and
moving the sampling a bit closer to its use). These two hunks
entail no further functional changes.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -956,7 +956,7 @@ static void
 csched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
-    s_time_t now = 0;
+    s_time_t now;
 
     /* Schedule lock should be held at this point. */
 
@@ -1009,8 +1009,8 @@ static void
 csched_context_saved(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
-    s_time_t now = NOW();
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    s_time_t now = NOW();
 
     BUG_ON( !is_idle_vcpu(vc) && svc->rqd != RQD(ops, vc->processor));
 
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1147,7 +1147,7 @@ static void vcpu_periodic_timer_work(str
 static void schedule(void)
 {
     struct vcpu          *prev = current, *next = NULL;
-    s_time_t              now = NOW();
+    s_time_t              now;
     struct scheduler     *sched;
     unsigned long        *tasklet_work = &this_cpu(tasklet_work_to_do);
     bool_t                tasklet_work_scheduled = 0;
@@ -1181,6 +1181,8 @@ static void schedule(void)
 
     lock = pcpu_schedule_lock_irq(cpu);
 
+    now = NOW();
+
     stop_timer(&sd->s_timer);
     
     /* get policy-specific decision on scheduling... */