Subject: Re: SDL-News: Priorities - Interrupt
From: Rick Reed TSE (rickreed#tseng.co.uk)
Date: Thu Oct 08 1998 - 17:42:04 GMT
The originator of this message is responsible for its content.
-----From Rick Reed TSE <rickreed#tseng.co.uk> to sdlnews -----
At 21:43 +0100 7/10/98, Maha Boughdadi wrote:
>I have a couple of questions; if anybody has an answer it will be very
>much appreciated.
>1) I would like to know if it is possible to handle multiple level
>priorities in SDL.
>2) How to handle/model Interrupts in SDL?
1) Priority levels
SDL (currently) has only one level of built-in priority: for a given state,
priority inputs identify signals that will be received in preference to
other signals. If the input queue contains signals named in priority inputs
when a state is entered, then the transition interpreted corresponds to the
first of these signals in the input queue, and other signals (not in
priority inputs) remain in the input queue even if they arrived before the
signal in the priority input.
It is worth noting that "signal priority" applies per state/signal
combination: if signals i1 and i2 are handled with priority in state
s1, they do not have to be handled with priority in state s2. In state s2
signal i1 could be handled without priority, or saved.
For a small number of signal priorities it would be possible to manually
apply the model used in Z.100 for priority signals, but this model
requires a signal to be repeatedly posted in the process input queue as a
marker. For this reason the Z.100 model is inefficient: it was provided to
define the (observable) behaviour rather than REQUIRE that conforming
systems be implemented in this way.
There is an implicit assumption that every process can run concurrently and
(at least potentially) runs on its own processor. A model or implementation
that correctly reflects a definition in SDL should therefore have behaviour
that is independent of process/processor allocation. Thus process
scheduling priority and the use of processor resource are not part of the
language. An ongoing ITU study is considering how time and performance
should be related to SDL (and MSC).
Aside:- The formal semantic model of SDL maps the concept of autonomous
concurrent processes onto an interleaved processing model. The reason for
this is not that SDL processes are required to be interleaved (which would
clearly be impractical for a distributed system), but to make the formal
model easier to handle. For all practical purposes there is no observable
difference between such an interleaved model and a truly concurrent one.
When SDL is implemented, some scheduling needs to be provided. In this
case tools may offer some additional mechanism for handling priority,
which will almost certainly depend on the actual run-time system(s).
2) Interrupt handling
I am aware that some users of SDL have used it to model both hardware and
software where interrupts occur. These models tend to be application
specific because the meaning of "interrupt" is different. Perhaps any such
users reading this might also like to comment.
The real question is what meaning can be given to the term "interrupt",
given that SDL has (in effect) concurrent processes. An "interrupt" to the
system would need to be directed to a particular process instance. An
"interrupt" to a process instance would either have to be deferred until
the next state is entered, or enable a transfer of control from any point in
the process graph to some form of handler. If it is assumed that
transitions take an insignificant amount of time, then deferring a process
"interrupt" until the next state should be adequate. (Note: Z.100 does not
state that transitions take NO time - a commonly held belief.)
In real systems there is also the issue of what happens if an interrupt
occurs while an interrupt is being handled. In some cases the second
interrupt is ignored, whereas in other cases the second interrupt causes a
transfer of control again (sometimes to the same place and sometimes to
somewhere different). In general, interrupts in real systems cause the
current environment to be stored, but this is not always the case and this
may need to be handled in the interrupt handler.
There are some strategies that can be used in current SDL:
a) There can be an interrupt handling process that receives (interrupt)
signals from the environment and this process then does what is necessary
to handle the interrupt;
b) The priority input mechanism can be used to handle interrupts (if there
are no other priority inputs), in which case the priority input needs to be
applied to all states (in which "interrupt" is allowed);
c) A process may be "interrupt driven", meaning that other signals are only
examined when an interrupt (usually from some form of "clock") is received.
In this case each state can be split into two states: one which saves all
signals except the interrupt and leads to the second state when the
interrupt is received; the second state has inputs for the other signals
and a continuous signal with condition TRUE that leads back to the first
state (this ensures that the process returns to the first state even if
there are no other signals to handle).
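Strategy (c) can be sketched in Python. Again this is a hypothetical
simulation rather than SDL: the interrupt signal name "tick" is invented,
and keeping saved signals in a separate list is a simplification of the SDL
save construct (which leaves them in the input queue):

```python
from collections import deque

def interrupt_driven(events, interrupt="tick"):
    """Simulate a two-state "interrupt driven" SDL process.

    WAIT saves every ordinary signal (the SDL save symbol) until the
    interrupt arrives; RUN then has inputs for the saved signals, and
    a continuous signal with condition TRUE leads back to WAIT once
    nothing is left to handle.
    """
    queue = deque(events)
    saved = []    # simplification: real SDL keeps saved signals queued
    handled = []
    state = "WAIT"
    while True:
        if state == "WAIT":
            if not queue:
                break
            sig = queue.popleft()
            if sig == interrupt:
                state = "RUN"
            else:
                saved.append(sig)         # save all but the interrupt
        else:  # RUN
            if saved:
                handled.append(saved.pop(0))  # input for ordinary signal
            else:
                state = "WAIT"            # continuous signal TRUE
    return handled

print(interrupt_driven(["a", "b", "tick", "c", "tick"]))  # ['a', 'b', 'c']
```

Signals "a" and "b" are only acted on after the first "tick", and "c" only
after the second, which is exactly the deferral behaviour described above.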
In SDL-2000 there will be an exception mechanism whereby it will be possible
to raise an exception that is then handled. It seems to me that the
exception could be an (externally caused) interrupt.
-- Rick Reed, TSE Limited
13 Weston House, 18-22 Church Street
Lutterworth Leicestershire LE17 4AW United Kingdom
Tel +44 14 55 55 96 55; Fax +44 14 55 55 96 58; Mob +44 79 70 50 96 50
email: rickreed#tseng.co.uk
http://www.tseng.co.uk ftp://ftp.tseng.co.uk/tseng/
-----End text from Rick Reed TSE <rickreed#tseng.co.uk> to sdlnews ----- For help, email "majordomo#sdl-forum.org" with the body of your email as: help or (iff this does not answer your question) email: owner-sdlnews#sdl-forum.org
This archive was generated by hypermail 2a23 : Sun Jun 16 2013 - 10:41:40 GMT