Scheduling and planning are subjects studied from different perspectives by the Operations Research (OR) and the Artificial Intelligence (AI) communities. Typically, OR models tend to reduce these problems to well-defined static optimisation problems to be solved by mathematical programming techniques. AI models emphasise complex logical structure and dynamic interactions between actions, and pay less attention to quantitative aspects. The consortium will work toward establishing timed automata as a common denominator of the two approaches.

The AI community has developed a formalism called PDDL (Planning Domain Definition Language) as a common language for defining benchmark problems on which different planning and scheduling algorithms can be evaluated. In this language one can define object types (e.g. plane, city, person) together with a number of associated attributes (speed, capacity, location). The possible plans spanned by the domain are described by a number of parameterised actions, where the execution of an action instance may depend on, and affect, the values of the attributes. Such a language allows a richer semantic description than standard OR models, and algorithms based on verification techniques have turned out to perform very well on such problems. (For example, in 2000 the BDD-based tool MIPS, by Stefan Edelkamp and Malte Helmert of Freiburg University, received a ``Distinguished Performance'' award at AIPS, the annual AI conference on planning and scheduling.) Recently, in order to reinforce the quantitative aspects, an extension of PDDL, called PDDL+, has been defined, in which actions are additionally equipped with constraints on their durations. In this task the consortium will investigate translations from PDDL+ into the timed automata modelling framework, so that the timing analysis methods of \Ametist can be applied to benchmarks written in this language.
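To illustrate the flavour of the language, the fragment below sketches a domain with typed objects and a parameterised action whose duration is constrained to an interval. The domain and action names are invented for illustration, and the syntax follows the durative-action style of PDDL2.1 (with the `:duration-inequalities' requirement) rather than any specific benchmark used by the consortium.

```
(define (domain transport)
  (:requirements :typing :durative-actions :duration-inequalities)
  (:types plane city)
  (:predicates (at ?p - plane ?c - city))
  ;; A parameterised action: any instance (plane, origin, destination)
  ;; reads and updates the `at' attribute, and its duration is
  ;; constrained to lie in the interval [2, 5].
  (:durative-action fly
    :parameters (?p - plane ?from - city ?to - city)
    :duration (and (>= ?duration 2) (<= ?duration 5))
    :condition (at start (at ?p ?from))
    :effect (and (at start (not (at ?p ?from)))
                 (at end (at ?p ?to)))))
```

Each ground instance of `fly' is a candidate step of a plan; a translation into timed automata would map such an action to a location whose clock invariant and guard encode the duration interval.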

From the OR side, the consortium will develop abstract timed automata models that add different types of uncertainty to classical job-shop scheduling models, which assume that all tasks, as well as their execution times, are known in advance. The consortium intends to build models that relax these assumptions gradually. In the first model the consortium will introduce uncertainty into the task durations, i.e. the duration of a step is only known to lie in an interval $[l,u]$. Even this simple modification of the model raises many methodological questions. One approach is to assume some nominal future evolution (optimistic, pessimistic, etc.), find a schedule that is optimal with respect to this prediction, and re-compute the schedule whenever the actual execution deviates from the nominal path. A major disadvantage of this approach is the need to perform costly computations on-line, that is, during the execution of the jobs. A question the consortium intends to investigate is whether some of the re-scheduling can be performed off-line using synthesis methods (see T1.5). After resolving these issues the consortium will proceed to more complicated models of uncertainty, involving arrival and release times of jobs, preemption, machine breakdowns, etc. For all of these models the consortium will formulate the corresponding algorithmic problems that need to be solved.
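As a minimal sketch of the first model (the job data, the fixed round-robin dispatching policy, and all names below are illustrative assumptions, not the consortium's models or algorithms), the makespan of a fixed schedule under interval durations $[l,u]$ can be bounded by running the schedule once with every step at its lower bound and once at its upper bound:

```python
def makespan(jobs, pick):
    """Makespan of a fixed round-robin dispatching order, where each
    step's duration is chosen by pick(l, u) from its interval [l, u]."""
    machine_free = {}                 # machine -> time it becomes free
    job_free = [0.0] * len(jobs)      # job -> time its last step ended
    for step in range(max(len(j) for j in jobs)):
        for j, job in enumerate(jobs):
            if step < len(job):
                m, l, u = job[step]
                start = max(job_free[j], machine_free.get(m, 0.0))
                job_free[j] = start + pick(l, u)
                machine_free[m] = job_free[j]
    return max(job_free)

# Two jobs on two machines; each step is (machine, l, u).
jobs = [[(0, 2, 4), (1, 1, 3)],
        [(1, 2, 2), (0, 3, 5)]]
best = makespan(jobs, lambda l, u: l)    # optimistic: every step takes l
worst = makespan(jobs, lambda l, u: u)   # pessimistic: every step takes u
```

Because start times are monotone in the durations once the dispatching order is fixed, these two runs bound the makespan of every execution whose step durations fall in their intervals; the re-scheduling question raised above arises precisely when the order itself is revised on-line as deviations are observed.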

responsible partner: Uni Do