Timing-related problems involve combinations of *logical* and *temporal*
constraints (e.g. constraints of the form *x - y >= d*
for real clocks *x, y*). Such constraints occur in the verification of timed
automata as well as in other approaches to solving scheduling problems, and
their manipulation is a *major computational difficulty* in all time-dependent
reasoning and optimisation. Current verification tools for timed automata use
particular data structures for representing and manipulating such constraints,
such as certain varieties of BDDs and efficient matrix structures. Other
disciplines apply related techniques to similar problems, e.g. constraint
propagation in constraint programming or relaxed mixed-integer linear
programming.
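As an illustration of the matrix structures mentioned above, the following is a minimal sketch (not the implementation used in any actual verification tool) of a difference-bound matrix: entry `dbm[i][j]` bounds `clock_i - clock_j`, index 0 is the constant-zero reference clock, and a Floyd-Warshall pass both tightens the constraints and detects unsatisfiability.

```python
from math import inf

def canonicalize(dbm):
    """Tighten a difference-bound matrix in place (Floyd-Warshall).
    dbm[i][j] is an upper bound on clock_i - clock_j; index 0 is the
    constant-zero reference clock.  Returns False iff the constraint
    set is empty (some diagonal entry becomes negative)."""
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dbm[i][k] + dbm[k][j] < dbm[i][j]:
                    dbm[i][j] = dbm[i][k] + dbm[k][j]
    return all(dbm[i][i] >= 0 for i in range(n))

# Zone over clocks x, y:  3 <= x <= 5  and  y - x <= 1
# (x - x0 <= 5, x0 - x <= -3, y - x <= 1; inf = unconstrained)
zone = [
    [0, -3, inf],   # row 0: bounds on x0 - {x0, x, y}
    [5,  0, inf],   # row 1: bounds on x  - {x0, x, y}
    [inf, 1, 0],    # row 2: bounds on y  - {x0, x, y}
]
```

After `canonicalize(zone)` the implied bound `y - x0 <= 6` appears in `zone[2][0]`, and a contradictory pair such as `x <= 2, x >= 3` would make the function return `False`.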

The crude distinction between state-based and constraint-based specification and analysis is that the former uses a dynamic model (change of state), whereas the latter rests on a static view in which behaviour is described by (logic) constraints and time is just a variable. Mutual translations between state-based and constraint-based models are possible, depending on the questions asked. The consortium intends to enrich existing timed verification tools with methods inspired by the constraints approach, and to adapt constraint satisfaction techniques to treat mixed Boolean and temporal constraints. In addition, the consortium will continue the related effort of improving the data structures for storing such constraints and the algorithms for manipulating them.
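To make "mixed Boolean and temporal constraints" concrete, here is a deliberately naive sketch (an assumption for illustration, not the consortium's method): Boolean assignments satisfying a clause set are enumerated, and each candidate's implied difference constraints are checked for consistency with a Bellman-Ford negative-cycle test.

```python
from itertools import product

def consistent(diffs, n):
    """Negative-cycle test (Bellman-Ford) for difference constraints
    x_i - x_j <= d over n real variables, given as triples (i, j, d)."""
    dist = [0.0] * n
    for _ in range(n):
        for i, j, d in diffs:
            if dist[j] + d < dist[i]:
                dist[i] = dist[j] + d
    # still relaxable => negative cycle => unsatisfiable
    return all(not (dist[j] + d < dist[i]) for i, j, d in diffs)

def solve_mixed(clauses, theory, n_bool, n_real):
    """Enumerate Boolean assignments satisfying `clauses` (lists of
    (var, polarity) literals) whose attached temporal constraints are
    consistent; theory[v] lists the difference constraints implied by
    Boolean variable v being true.  Hypothetical eager-style sketch."""
    for assign in product([False, True], repeat=n_bool):
        if all(any(assign[v] == pol for v, pol in cl) for cl in clauses):
            diffs = [c for v in range(n_bool) if assign[v]
                       for c in theory.get(v, [])]
            if consistent(diffs, n_real):
                yield assign
```

A real solver would instead propagate lazily between the Boolean and temporal parts, but the sketch shows why the two kinds of constraint must be treated jointly: a Boolean choice can render the temporal part infeasible.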

Traditionally, the verification algorithms for timed automata have been based on pre-determined search orders (e.g. breadth-first or depth-first) on the symbolic state-space graph of the automaton. For finding optimal paths it is useful to employ more sophisticated, heuristic search strategies that favour the exploration of the most promising branches --- possibly guided by user-provided information --- and that can avoid exhaustive exploration. Such techniques can also be useful when applying real-time verification tools as advanced debuggers: here preference should be given to the branches most likely to lead to errors.
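A guided search of this kind can be sketched as a best-first loop over an abstract state graph, where a heuristic `h(s)` estimates the remaining cost (or likelihood of reaching an error). The state representation, cost model, and heuristic below are placeholders, not those of any particular tool.

```python
import heapq
from itertools import count

def best_first(start, successors, is_goal, h):
    """Best-first exploration of a state graph: the frontier is a
    priority queue ordered by cost-so-far + h(state), so the most
    promising branches are expanded first instead of a fixed
    breadth-/depth-first order."""
    tie = count()                       # tie-breaker; avoids comparing states
    frontier = [(h(start), next(tie), 0, start)]
    best = {start: 0}                   # cheapest known cost per state
    while frontier:
        _, _, g, s = heapq.heappop(frontier)
        if is_goal(s):
            return g                    # cost of the path found
        for c, t in successors(s):
            if g + c < best.get(t, float("inf")):
                best[t] = g + c
                heapq.heappush(frontier, (g + c + h(t), next(tie), g + c, t))
    return None                         # goal unreachable
```

With an admissible heuristic this degenerates to A*; with a heuristic scoring "error-proneness" it realises the debugging use mentioned above, steering exploration towards suspect branches.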

In addition, the consortium will work on distributing the exploration and
analysis algorithms in order to benefit from the consortium's access to
tightly-coupled clusters of PCs as well as to loosely-coupled GRID computers.
Here the challenge will be to reduce the communication overhead (e.g. by
identifying compact formats for communicating symbolic representations of sets
of states) in order to obtain speed-up as close to linear as possible.
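One hypothetical instance of such a compact format (an illustration only, not a proposed wire protocol): a canonical difference-bound matrix is sparse in practice, so only the finite off-diagonal bounds need to cross the network, as fixed-size `(row, col, bound)` triples.

```python
import struct
from math import inf

def pack_dbm(dbm):
    """Serialize a DBM compactly: a 3-byte header (dimension, entry
    count) followed by one 6-byte record per finite off-diagonal bound.
    Infinite (absent) bounds and the zero diagonal are implicit."""
    entries = [(i, j, int(d)) for i, row in enumerate(dbm)
                              for j, d in enumerate(row)
                              if i != j and d != inf]
    out = struct.pack("!BH", len(dbm), len(entries))
    for i, j, d in entries:
        out += struct.pack("!BBi", i, j, d)
    return out

def unpack_dbm(buf):
    """Inverse of pack_dbm: rebuild the matrix, defaulting to 0 on the
    diagonal and +inf elsewhere."""
    n, m = struct.unpack_from("!BH", buf, 0)
    dbm = [[0 if i == j else inf for j in range(n)] for i in range(n)]
    off = 3
    for _ in range(m):
        i, j, d = struct.unpack_from("!BBi", buf, off)
        dbm[i][j] = d
        off += 6
    return dbm
```

For the three-clock zone used earlier this sends 21 bytes instead of a dense 9-entry matrix; whether such savings translate into near-linear speed-up depends, of course, on the actual communication pattern of the distributed algorithm.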

Responsible partner: LIF