
- Tasks are periodic. Task i has a period Ti.
- Task i needs to compute for up to Ci units of time every period Ti.
- Tasks are ready at the start of each period.
- Tasks have deadlines at the end of the period.
- Tasks are preemptable.
- Tasks are scheduled according to their priorities, which are allowed to change dynamically.

**The Problem**

Find a *priority assignment technique* and a *schedulability
test* such that for any set of tasks that passes the
schedulability test, it follows that using the priority assignment
technique, each task in the set will meet its deadline.

**Earliest Deadline First Scheduling**

The *priority assignment technique* is to assign to task i a
priority inversely proportional to the amount of time remaining until
the task's deadline (ties broken arbitrarily). The *schedulability
test* is that

C1/T1 + C2/T2 + ... + Ci/Ti + ... + Cn/Tn <= 1
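
The test above is a direct check on total utilization. A minimal sketch in Python (function name and task-set values are illustrative):

```python
def edf_schedulable(tasks):
    """EDF schedulability test: the total utilization sum of Ci/Ti
    must not exceed 1.

    tasks -- list of (Ci, Ti) pairs (worst-case compute time, period).
    """
    utilization = sum(c / t for c, t in tasks)
    return utilization <= 1.0

# Three tasks with utilization 0.25 + 0.25 + 0.25 = 0.75 -> schedulable
print(edf_schedulable([(0.5, 2), (1, 4), (2, 8)]))  # True
# Utilization 0.5 + 0.5 + 0.25 = 1.25 > 1 -> not schedulable
print(edf_schedulable([(1, 2), (1, 2), (1, 4)]))    # False
```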

**Theorem**

Earliest Deadline First Scheduling is optimal under the above model.

EDF has a serious disadvantage: it does not behave well when the system is
overloaded (i.e. when the above inequality is not met), in which case
*nothing* can be said about which deadlines will be missed. This is a
serious predictability issue.

- Tasks are periodic. Task i has a period Ti.
- Task i needs to compute for up to Ci units of time every period Ti.
- Tasks are ready at the start of each period.
- Tasks have deadlines at the end of the period.
- Tasks are preemptable.
- Tasks are scheduled according to an *a priori fixed* priority assignment (i.e. a task with a higher priority preempts a task with a lower priority).

**The Problem**

Find a *priority assignment technique* and a *schedulability
test* such that for any set of tasks that passes the
schedulability test, it follows that using the priority assignment
technique, each task in the set will meet its deadline.

**Rate Monotonic Scheduling**

The *priority assignment technique* is to assign to task i a
priority inversely proportional to the task's period Ti (ties broken
arbitrarily). The *schedulability test* is that

C1/T1 + C2/T2 + ... + Ci/Ti + ... + Cn/Tn <= n (2^(1/n) - 1)
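
The bound n (2^(1/n) - 1) can be computed directly; as a sketch (function names are illustrative):

```python
def rms_bound(n):
    """Liu-Layland utilization bound for n tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rms_schedulable(tasks):
    """Sufficient RMS test: total utilization must not exceed the bound.

    tasks -- list of (Ci, Ti) pairs.
    """
    return sum(c / t for c, t in tasks) <= rms_bound(len(tasks))

for n in (1, 2, 3, 10):
    print(n, round(rms_bound(n), 4))
# The bound starts at 1.0 for n = 1 and decreases toward ln 2 ~ 0.6931
# as n grows, so utilization <= 69.3% is sufficient for any task count.
```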

**Liu and Layland theorems**

Rate Monotonic Scheduling is optimal under the above model.

RMS has a nice advantage: if the system becomes overloaded, deadlines are missed predictably (the lowest-priority tasks miss first). Also, in many situations, utilization of up to about 90% is achievable with RMS!

- Tasks are periodic. Task i has a period Ti.
- Task i needs to compute for up to Ci units of time every period Ti.
- Tasks are ready at the start of each period.
- Tasks have deadlines at the end of the period.
- Tasks are preemptable.
- Tasks are scheduled according to an *a priori fixed* priority assignment (i.e. a task with a higher priority preempts a task with a lower priority).

**The Problem**

Could we execute aperiodic tasks using the above model?

**Deferrable Server Algorithm**

Dedicate one (or more) periodic task to "pick up" aperiodic tasks that may need to be executed. Call such a task the "aperiodic server". During each period Ts of the server, it is allowed to use the resource for up to a set amount of time Cs, called the budget of the server. At the beginning of every server period, the budget is replenished (back to Cs). As long as the server has enough budget, it can use it at any time within its period to service aperiodic requests.

The algorithm is simple to implement, but it makes the schedulability analysis much harder due to a phenomenon called the "deferred execution effect" (also the "jitter effect"). In particular, imagine that an aperiodic task arrives late in the server's period and consumes the budget just before a new server period begins. Since the budget is then replenished, the aperiodic task grabs the resource (say, the CPU) for another Cs units of time. The net result is that 2*Cs units of time were used "back to back" across two periods. Such behavior is not accounted for by RMS and thus necessitates a change to the RMS schedulability test.
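
The back-to-back effect can be seen in a toy discrete-time simulation. A sketch, assuming illustrative values Ts = 10 and Cs = 2 and an aperiodic workload that arrives late in the first period:

```python
# Deferrable server with period Ts and budget Cs, in unit time steps.
# An aperiodic request arriving late in one period drains the budget
# just before the replenishment, then drains the fresh budget right
# after it, giving 2*Cs consecutive units of server execution.
Ts, Cs = 10, 2          # server period and budget (illustrative values)
busy = []               # time units in which the server runs

budget = Cs
for t in range(2 * Ts):
    if t % Ts == 0:
        budget = Cs     # replenish at each period boundary
    # aperiodic work arrives at t = Ts - Cs and keeps the server busy
    if t >= Ts - Cs and budget > 0:
        busy.append(t)
        budget -= 1

print(busy)  # [8, 9, 10, 11] -> 2*Cs units back to back across the boundary
```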

**Sporadic Server Algorithm**

This is very similar to the Deferrable Server Algorithm, except that the budget is not replenished at the beginning of each period, but Ts units of time after the budget is consumed. The Sporadic Server Algorithm conforms to the RMS schedulability analysis.
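
Rerunning the previous toy scenario with a deferred replenishment rule shows why the back-to-back burst disappears. This is a simplified sketch that replenishes each consumed unit of budget Ts units of time after it is consumed (actual sporadic server rules track replenishment per server activation, not per unit):

```python
from collections import deque

Ts, Cs = 10, 2
budget = Cs
replenishments = deque()   # pending (due_time, amount) pairs
busy = []

for t in range(2 * Ts):
    # apply any replenishments that have come due
    while replenishments and replenishments[0][0] <= t:
        budget += replenishments.popleft()[1]
    # same late-arriving aperiodic workload as before
    if t >= Ts - Cs and budget > 0:
        busy.append(t)
        budget -= 1
        replenishments.append((t + Ts, 1))  # returned Ts after consumption

print(busy)  # [8, 9, 18, 19] -- no 2*Cs back-to-back burst
```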

Could we achieve better utilization?

**Theorem**

If a task set is scheduled using RMS and Tj divides Ti for all 1 <= j <= i (i.e. the periods are harmonic), then the schedulability test for RMS becomes

C1/T1 + C2/T2 + ... + Ci/Ti + ... + Cn/Tn <= 1
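
A sketch of this refinement in Python (names and task sets are illustrative); for harmonic periods the utilization may go all the way up to 1:

```python
def harmonic(tasks):
    """True if the periods are harmonic: each period divides every
    larger one (checked on consecutive sorted periods; divisibility
    chains transitively)."""
    periods = sorted(t for _, t in tasks)
    return all(periods[i + 1] % periods[i] == 0
               for i in range(len(periods) - 1))

def rms_schedulable_harmonic(tasks):
    """RMS test: bound is 1 for harmonic periods, n(2^(1/n)-1) otherwise."""
    u = sum(c / t for c, t in tasks)
    n = len(tasks)
    bound = 1.0 if harmonic(tasks) else n * (2 ** (1.0 / n) - 1)
    return u <= bound

# Periods 2, 4, 8 are harmonic, so full utilization (u = 1.0) is allowed:
print(rms_schedulable_harmonic([(1, 2), (1, 4), (2, 8)]))  # True
```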

What if task deadlines are a constant fraction (delta) of their period? (i.e. Di = delta * Ti, where delta is a positive number less than or equal to 1)

**Theorem**

RMS is optimal and the schedulability test generalizes to

C1/T1 + C2/T2 + ... + Cn/Tn <= n ((2*delta)^(1/n) - 1) + (1 - delta), if 0.5 <= delta <= 1

C1/T1 + C2/T2 + ... + Cn/Tn <= delta, if 0 <= delta <= 0.5
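
As a sanity check, the two cases agree at delta = 0.5 and the first case reduces to the usual Liu-Layland bound at delta = 1. A sketch (function name is illustrative):

```python
def rms_bound_with_deadline_fraction(n, delta):
    """Generalized Liu-Layland bound when Di = delta * Ti, 0 < delta <= 1."""
    if delta < 0.5:
        return delta
    return n * ((2 * delta) ** (1.0 / n) - 1) + (1 - delta)

# delta = 1 recovers the ordinary RMS bound:
print(round(rms_bound_with_deadline_fraction(3, 1.0), 4))  # 0.7798
# at delta = 0.5 both cases give 0.5, since (2*0.5)^(1/n) - 1 = 0:
print(rms_bound_with_deadline_fraction(3, 0.5))            # 0.5
```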

What if a task's deadline Di is an arbitrary value smaller than the period Ti?

**Deadline Monotonic Scheduling**

Assign to tasks priorities that are inversely proportional to their deadlines. The schedulability test is more complicated than that suggested by Liu and Layland and involves testing all possible task phasings.
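
One standard exact test for fixed priorities with Di <= Ti is response-time analysis: compute each task's worst-case response time at the critical instant (all tasks released together) by iterating to a fixed point, and compare it against the deadline. A sketch (the task-set values are illustrative):

```python
import math

def dm_schedulable(tasks):
    """Exact test for fixed-priority scheduling with Di <= Ti via
    response-time analysis.

    tasks -- list of (Ci, Di, Ti) tuples; deadline monotonic assigns
    higher priority to the smaller deadline.
    """
    tasks = sorted(tasks, key=lambda x: x[1])        # priority order
    for i, (ci, di, _) in enumerate(tasks):
        r = ci
        while True:
            # interference from all higher-priority tasks within [0, r)
            interference = sum(math.ceil(r / tj) * cj
                               for cj, _, tj in tasks[:i])
            r_next = ci + interference
            if r_next > di:
                return False                         # deadline miss
            if r_next == r:
                break                                # fixed point reached
            r = r_next
    return True

print(dm_schedulable([(1, 4, 5), (2, 6, 10), (2, 9, 12)]))  # True
```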

**Theorem**

Deadline Monotonic Scheduling is optimal.

- Tasks are periodic. Task i has a period Ti.
- Task i needs to compute for up to Ci units of time every period Ti.
- Tasks are ready at the start of each period.
- Tasks have deadlines at the end of the period.
- Tasks are preemptable.
- Tasks share resource(s) that can be locked (say) by a semaphore.

**Problem**

The highest priority task may suffer a *very* long delay waiting
for a shared resource. In particular, if task 1 has the highest priority,
it may have to wait for up to C2 + C3 + C4 + ... + Cn units of time, which
creates a period of priority inversion that destroys all the guarantees
given (say) by RMS.

**Priority Inheritance**

A task holding a lock on a shared resource that is needed by a higher
priority task *inherits* the priority of that task. This bounds the
amount of time a higher priority task will have to wait for a lower
priority task (i.e. it bounds priority inversion).

**Problem**

Priority inheritance is susceptible to deadlocks.

**Priority Ceiling**

Each semaphore is assigned a *priority ceiling* which is defined
as the priority of the highest-priority task that may ever request to
lock that semaphore. When a task A attempts to execute a critical
section, it will be suspended unless its priority is higher than the
priority ceilings of all semaphores currently locked by tasks other than
A. Moreover, the task that holds the lock on the semaphore with a priority
ceiling equal to or larger than the priority of A is assumed to be blocking A,
and hence *inherits* A's priority.
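
The entry test in the description above can be sketched as a small predicate (names, priority encoding, and the semaphore table are illustrative; higher number means higher priority):

```python
def can_enter(task, task_priority, locked):
    """Priority-ceiling entry test: 'task' may enter a critical section
    only if its priority is strictly higher than the ceilings of all
    semaphores currently locked by tasks other than itself.

    locked -- dict mapping semaphore name -> (holder, ceiling).
    """
    return all(task_priority > ceiling
               for holder, ceiling in locked.values()
               if holder != task)

# S1 has ceiling 3 and is held by task B, so task A with priority 2
# is suspended (and B then inherits A's priority):
locked = {"S1": ("B", 3)}
print(can_enter("A", 2, locked))  # False
print(can_enter("A", 4, locked))  # True -- A outranks every ceiling
```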

**Theorem**

The Priority Ceiling algorithm prevents deadlocks.

**Theorem**

Under the Priority Ceiling algorithm, priority inversion is bounded by the duration of at most one critical section.

**Theorem**

A schedulability test for RMS in the presence of synchronization constraints is

C1/T1 + C2/T2 + ... + Ci/Ti + Bi/Ti <= i (2^(1/i) - 1), for all 1 <= i <= n

where Bi is the worst-case blocking time that task i may experience.
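
The test is checked level by level, one inequality per priority level i. A sketch (function name and task sets are illustrative):

```python
def rms_schedulable_with_blocking(tasks):
    """Sufficient RMS test with blocking terms.

    tasks -- list of (Ci, Ti, Bi) in rate-monotonic priority order,
    where Bi is the worst-case blocking task i can suffer from
    lower-priority tasks.  Checks, for each i:
        sum_{j<=i} Cj/Tj + Bi/Ti <= i (2^(1/i) - 1)
    """
    for i in range(1, len(tasks) + 1):
        u = sum(c / t for c, t, _ in tasks[:i])
        b = tasks[i - 1][2] / tasks[i - 1][1]
        if u + b > i * (2 ** (1.0 / i) - 1):
            return False
    return True

print(rms_schedulable_with_blocking([(1, 4, 1), (1, 6, 1), (2, 12, 0)]))  # True
```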

- Value-cognizant scheduling of soft/firm-deadline tasks.
- Scheduling for imprecise computations.
- Scheduling in the presence of precedence constraints.
- Scheduling for multiprocessors.
- Scheduling for distributed systems.
- Scheduling with probabilistic guarantees.
- Scheduling for end-to-end constraints.
- Scheduling for fault-tolerance (open problem).

(C) Copyright 1995.

This document has been prepared by Professor Azer Bestavros.

Created on: January 23, 1996. Updated on: January 24, 1996.