Ada Reference Manual (Ada 2022)

D.2.1 The Task Dispatching Model

1/2
The task dispatching model specifies task scheduling, based on conceptual priority-ordered ready queues. 

Static Semantics

1.1/2
 The following language-defined library package exists: 
1.2/5
package Ada.Dispatching
  with Preelaborate, Nonblocking, Global => in out synchronized is
1.3/5
  procedure Yield
   with Nonblocking => False;
1.4/3
  Dispatching_Policy_Error : exception;
end Ada.Dispatching;
1.5/2
 Dispatching serves as the parent of other language-defined library units concerned with task dispatching.
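 For illustration only (this is not part of the normative text, and the procedure, task, and loop names are invented), a compute-bound task can call Ada.Dispatching.Yield to offer a task dispatching point during a long computation:
with Ada.Dispatching;

procedure Busy_Work is
   task Cruncher;

   task body Cruncher is
   begin
      for Step in 1 .. 1_000_000 loop
         --  ... one bounded, nonblocking slice of a long computation ...
         Ada.Dispatching.Yield;  --  offer a task dispatching point
      end loop;
   end Cruncher;
begin
   null;  --  Busy_Work completes when Cruncher terminates
end Busy_Work;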
1.6/5
 For a noninstance subprogram (including a generic formal subprogram), a generic subprogram, or an entry, the following language-defined aspect may be specified with an aspect_specification (see 13.1.1):
1.7/5
 Yield
The type of aspect Yield is Boolean.
1.8/5
If directly specified, the aspect_definition shall be a static expression. If not specified (including by inheritance), the aspect is False.
1.9/5
If a Yield aspect is specified True for a primitive subprogram S of a type T, then the aspect is inherited by the corresponding primitive subprogram of each descendant of T.
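 For illustration only (not normative; the package, type, and subprogram names are invented, and the corresponding bodies are omitted), the aspect can be specified on a primitive subprogram and is then inherited by the primitives of descendants:
package Sensors is
   type Sensor is tagged null record;

   --  Yield => True on a primitive subprogram of Sensor; the aspect is
   --  inherited by the corresponding primitive subprogram of every
   --  descendant of Sensor.
   procedure Sample (S : in out Sensor)
      with Yield => True;

   type Filtered_Sensor is new Sensor with null record;

   --  Inherits Yield => True; any value specified here would have to be
   --  confirming (see the Legality Rules below).
   overriding procedure Sample (S : in out Filtered_Sensor);
end Sensors;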

Legality Rules

1.10/5
  If the Yield aspect is specified for a dispatching subprogram that inherits the aspect, the specified value shall be confirming.
1.11/5
  If the Nonblocking aspect (see 9.5) of the associated callable entity is statically True, the Yield aspect shall not be specified as True. For a callable entity that is declared within a generic body, this rule is checked assuming that any nonstatic Nonblocking attributes in the expression of the Nonblocking aspect of the entity are statically True.
1.12/5
  In addition to the places where Legality Rules normally apply (see 12.3), these rules also apply in the private part of an instance of a generic unit.
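  As an informal illustration of these rules (invented names, bodies omitted; the rejected declarations are shown as comments):
package Dispatch_Checks is
   type T is tagged null record;

   procedure P (X : in out T)
      with Yield => True;

   type T2 is new T with null record;

   --  Legal: the inherited value is True, so an explicit True is confirming.
   overriding procedure P (X : in out T2)
      with Yield => True;

   --  Illegal: a nonconfirming value for the inherited Yield aspect.
   --  overriding procedure P (X : in out T2) with Yield => False;

   --  Illegal: a statically nonblocking entity cannot have Yield => True.
   --  procedure Q with Nonblocking => True, Yield => True;
end Dispatch_Checks;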

Dynamic Semantics

2/5
A task can become a running task only if it is ready (see Clause 9) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.
3
It is implementation defined whether, on a multiprocessor, a task that is waiting for access to a protected object keeps its processor busy. 
4/5
Task dispatching is the process by which a logical thread of control associated with a ready task is selected for execution on a processor. This selection is done during the execution of such a logical thread of control, at certain points called task dispatching points. Such a logical thread of control reaches a task dispatching point whenever it becomes blocked, and when its associated task terminates. Other task dispatching points are defined throughout this Annex for specific policies. Below we talk in terms of tasks, but in the context of a parallel construct, a single task can be represented by multiple logical threads of control, each of which can appear separately on a ready queue. 
5/2
Task dispatching policies are specified in terms of conceptual ready queues and task states. A ready queue is an ordered list of ready tasks. The first position in a queue is called the head of the queue, and the last position is called the tail of the queue. A task is ready if it is in a ready queue, or if it is running. Each processor has one ready queue for each priority value. At any instant, each ready queue of a processor contains exactly the set of tasks of that priority that are ready for execution on that processor, but are not running on any processor; that is, those tasks that are ready, are not running on any processor, and can be executed using that processor and other available resources. A task can be on the ready queues of more than one processor. 
6/2
Each processor also has one running task, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point it goes back to one or more ready queues; a task (possibly the same task) is then selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs. 
7/5
A call of Yield and a delay_statement are task dispatching points for all language-defined policies.
8/5
If the Yield aspect has the value True, then a call to procedure Yield is included within the body of the associated callable entity, and is invoked immediately prior to returning from the body if and only if no other task dispatching points were encountered during the execution of the body. 
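Informally, a subprogram declared with Yield => True behaves roughly as if its body ended as shown below (invented names; this expansion is only a sketch, not a defined implementation, and it elides the condition that no other task dispatching point was reached):
with Ada.Dispatching;

--  Roughly equivalent to:  procedure Step with Yield => True;
procedure Step_Equivalent is
begin
   --  ... a bounded, nonblocking piece of work ...
   Ada.Dispatching.Yield;
   --  Task dispatching point immediately before returning; under the
   --  actual rule this call occurs only if no other task dispatching
   --  point was encountered while the body executed.
end Step_Equivalent;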

Implementation Permissions

9/2
An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation-defined effect on task dispatching. 
10
An implementation may place implementation-defined restrictions on tasks whose active priority is in the Interrupt_Priority range. 
10.1/5
  Unless otherwise specified for a task dispatching policy, an implementation may add additional points at which task dispatching may occur, in an implementation-defined manner.
11
NOTE 1   Clause 9 specifies under which circumstances a task becomes ready. The ready state is affected by the rules for task activation and termination, delay statements, and entry calls. When a task is not ready, it is said to be blocked.
12/5
NOTE 2   An example of a possible implementation-defined execution resource is a page of physical memory, which must be loaded with a particular page of virtual memory before a task can continue execution.
13
NOTE 3   The ready queues are purely conceptual; there is no requirement that such lists physically exist in an implementation.
14
NOTE 4   While a task is running, it is not on any ready queue. Any time the task that is running on a processor is added to a ready queue, a new running task is selected for that processor.
15
NOTE 5   In a multiprocessor system, a task can be on the ready queues of more than one processor. At the extreme, if several processors share the same set of ready tasks, the contents of their ready queues is identical, and so they can be viewed as sharing one ready queue, and can be implemented that way. Thus, the dispatching model covers multiprocessors where dispatching is implemented using a single ready queue, as well as those with separate dispatching domains.
16
NOTE 6   The priority of a task is determined by rules specified in this subclause, and under D.1, “Task Priorities”, D.3, “Priority Ceiling Locking”, and D.5, “Dynamic Priorities”.
17/2
NOTE 7   The setting of a task's base priority as a result of a call to Set_Priority does not always take effect immediately when Set_Priority is called. The effect of setting the task's base priority is deferred while the affected task performs a protected action.
