In the last 15 years, research in Computer Science has involved many paradigms in which continuous time appears, either in pure form or in cooperation with discrete time. This is particularly evident in subjects concerning Automata and Logic. This issue of FI consists of four papers dedicated to such subjects. The papers are referred to below as [HR], [S], [PRT] and [T] (the initials of the authors). [HR] and [S] are about continuous-time logics, whereas in [T] and [PRT] the focus is on automata.
In [HR] the concern is with Monadic Logics of Order (MLO) and their relationship to Temporal Logics. For discrete time, decidability of Second Order Monadic Logic (SOML) follows from the connection between SOML and automata theory. It is well known that SOML covers numerous temporal logics (TL); ultimately, these TL may be considered as syntactically sugared versions of SOML fragments. In [HR] further facts of this kind are established for continuous time. However, in this case, instead of traditional automata-theoretic techniques one needs to use suitably general theorems from logic.
The logical framework developed in [S] is based on First Order Timed Logic (FOTL), which allows functions and predicates with more than one argument; moreover, it also allows some arithmetic. This makes the logic expressive enough to represent, more or less directly, continuous-time properties of distributed algorithms. On the other hand, it makes the logic undecidable. The fundamental observation that nevertheless permits the efficient use of this logic for verification is as follows: the underlying theories of continuous time (e.g. the theory of real addition, Tarski algebra, etc.) are decidable, or have much better complexity than the corresponding theories of discrete time. Interesting decidable classes of the verification problem are based on appropriate properties of FOTL.
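The decidability claim for the theory of real addition can be made concrete: that theory admits quantifier elimination, one step of which is classical Fourier-Motzkin elimination over linear inequalities. The sketch below illustrates that textbook procedure, not the machinery of [S]; the function name and the constraint encoding are ours.

```python
def eliminate(var, constraints):
    """One Fourier-Motzkin elimination step over linear constraints.

    Each constraint is a pair (coefs, bound) representing
    sum(coefs[v] * v) <= bound.  Returns constraints over the
    remaining variables describing the projection of the solution
    set, i.e. one step of quantifier elimination for real addition.
    """
    lower, upper, rest = [], [], []
    for coefs, bound in constraints:
        c = coefs.get(var, 0)
        if c > 0:
            upper.append((coefs, bound, c))    # yields an upper bound on var
        elif c < 0:
            lower.append((coefs, bound, c))    # yields a lower bound on var
        else:
            rest.append((coefs, bound))
    # Pair every lower bound with every upper bound; var cancels out.
    for lc, lb, lcoef in lower:
        for uc, ub, ucoef in upper:
            m_l, m_u = ucoef, -lcoef           # positive multipliers
            coefs = {v: m_l * lc.get(v, 0) + m_u * uc.get(v, 0)
                     for v in (set(lc) | set(uc)) - {var}}
            rest.append((coefs, m_l * lb + m_u * ub))
    return rest
```

For instance, eliminating y from the system x <= y, y <= 5 leaves the single constraint x <= 5.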
The companion papers [T] and [PRT] draw their initial motivation from the literature on Hybrid Automata, Circuits and related control problems. Concrete problems about circuits (feedback reliability, completeness) and control (sample-and-hold architectures in continuous time) are also the subject of [T] and [PRT]. Yet, a more general contribution is the development of a conceptual framework that allows one to highlight the genuine distinctions and similarities between the discrete-time and continuous-time tracks.
There is a growing feeling in the community that the literature on these subjects, as well as on the related logics, is plagued by a Babel of models, constructs and formalisms with an amazing discord of terminology and notation. Further models and formalisms are engendered, and it is not clear where to stop. Hence, appeals like J. Hartmanis's: ``to isolate the right concepts, to formulate the right models, and to discard many others, that do not capture the reality we want to understand...".
The analysis of such formalisms and systems is not the goal of [S]; here the control systems under consideration are reduced to rather primitive ones by declaring the relevant properties in logic specifications. Accordingly, [S] confines itself to the following informal remark: an algorithm interacting with a continuous-time device is, usually, called a hybrid automaton.
On the other hand, [HR], [T] and [PRT] are strongly committed to the analysis in depth of the various continuous-time paradigms and to their robust conceptual integration in mainstream Automata Theory and Logic. These papers come to the following conclusions about misconceptions in the previously suggested models and logics:
(i). The standard continuous-time model for logic was ignored as a yardstick; instead, different kinds of sequences of continuous-time bits were used ([HR]).
(ii). Input/output behavior of automata was ignored in favor of generating devices ([T]).
More details and reflections follow:
[HR]: When it came to continuous time, most researchers decided to abandon the natural model of the real line in favor of discrete models in which the elements of the sequence were decorated by some information concerning the real line. This may have been an attempt to pursue the connection with automata theory, since automata were traditionally associated with sequences. This was the main cause of the rejection of the classical model, and it complicated the subsequent research. The choice of the temporal logic became an arbitrary decision which was no longer backed up by the expressive power of classical logic. A host of different formalisms was introduced, as well as an astonishing variety of pairs ``model - logic". This work never converged to an accepted model; models and formalisms continued to proliferate.
[T]: Functions (in particular, input/output behavior of automata) are more fundamental than sets (say, languages accepted by automata). In particular, the restriction to language-theoretic formulations hides issues that are crucial for the proper handling of circuits and hybrids. Surprisingly, it has gone unobserved that some flaws in the conceptual decisions concerning continuous time are identifiable already at the level of discrete automata theory.
Editorial and historical comments. Sometimes, slightly different notation and terminology are used in the papers. This is visible especially in the companion papers [PRT] and [T] that have much in common but were written at different times. It seemed too burdensome to impose unified standards and to require the respective revisions. Hopefully, the reader will easily overcome this lack of uniformity due to appropriate footnotes and mutual references.
Over the last fifteen years, formalisms for reasoning about metric properties of computations have been suggested and discussed: first as extensions of temporal logic, ignoring the framework of classical predicate logic, and then, with the authors' work, within the framework of monadic logic of order. Here we survey our work on metric logic, comparing it to the previous work in the field. We define a quantitative temporal logic that is based on a simple modality within the framework of monadic predicate logic. Its canonical model is the real line (and not an ω-sequence of some type). It can be interpreted either by behaviors with finite variability or by unrestricted behaviors. For finite variability models it is as expressive as any logic suggested in the literature. For unrestricted behaviors our treatment is new. In both cases we prove decidability and complexity bounds using general theorems from logic (and not from automata theory).
This paper is a survey of research of my colleagues and myself aimed at developing a comprehensive logical framework for the verification of real-time distributed systems. The framework is based on predicate logics with explicit time. To choose such a logic we pursue two goals: first, to make the formalization of verification problems rather direct, without unjustified simplifications, and second, to have a logic which permits us to describe decidable classes of the verification problem covering the particular problems we are interested in. Notice that our intention is not to introduce new specification languages, but to work directly with those of the user.
In this paper we describe First Order Timed Logic (FOTL), which is sufficient to express the main part of the verification of systems without uncertainty. Time is continuous (the formalism works as well for discrete time; in our context this case is less interesting and less efficient from the algorithmic viewpoint). We give examples of problems that can be treated, describe how to represent runs of programs in FOTL, introduce decidable classes, and discuss aspects of practical efficiency. We conclude with open questions.
Paradigms in which continuous time is involved in cooperation with, or instead of, discrete time appear now in different areas related to automata, logic and interaction. Unfortunately, they are accompanied by a plethora of definitions, terminology and notation, which is not free of ad-hoc and ambiguous decisions. The overuse of definitions from scratch of intricate notions without a previous, explicit core of basic generic notions engenders further models and formalisms, and it is not clear where to stop. Hence (quoting J. Hartmanis), the challenge ``to isolate the right concepts, to formulate the right models, and to discard many others, that do not capture the reality we want to understand...".
We undertake this challenge with respect to some automata-theoretic concepts and issues that appear in the literature on continuous-time circuits and hybrid automata, by keeping to the following guidelines:
1. Building on Basic Automata Theory.
2. Coherence with original or potential discrete-time paradigms, whose continuous-time analogs and/or mutants we would like to understand.
3. Functions, notably input/output behavior of devices, should not be ignored in favor of sets (languages) accepted by them.
The paper outlines the approach which emerged in previous research [PRT, RT, T3, T4, R] and in teaching experience [T0, T2]. As an illustration we offer a precise explanation of the elusive relationship between hybrid automata, constrained automata and control circuits.
To what mathematical models do digital computer circuits belong? In particular:
(i) (Feedback reliability.) Which cyclic circuits should be accepted? In other words, under which conditions is the propagation of signals along closed cycles of the circuit causally faithful?
(ii) (Comparative power and completeness.) What are the appropriate primitives upon which circuits may be (or should be) assembled?
There are well-known answers to these questions for circuits operating in discrete time, and they point to the exclusive role of the unit-delay primitive. For example:
(i) If every cycle in the circuit N passes through a delay, then N is feedback reliable.
(ii) Every finite-memory operator F is implementable in a circuit over unit-delay and pointwise boolean gates.
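Fact (i) can be illustrated in discrete time with a minimal simulation (our own toy example, not taken from the papers): a toggle circuit whose only cycle, an XOR gate fed back through a unit delay, has a uniquely determined run precisely because the delay breaks the instantaneous dependency.

```python
def run_circuit(inputs):
    """Simulate a cyclic discrete-time circuit: an XOR gate whose
    output feeds back through a unit delay (a toggle).

    Because the only cycle passes through the delay, the state at step
    t+1 depends only on values already computed at step t, so the run
    is uniquely determined -- feedback reliability in miniature.
    """
    state = 0                     # initial content of the unit delay
    outputs = []
    for x in inputs:
        outputs.append(state)     # the delay emits the previous value
        state = state ^ x         # the XOR gate computes the next state
    return outputs
```

With the input held at 1, the output toggles: 0, 1, 0, 1, ... Removing the delay would leave the equation `state = state ^ 1`, which has no solution at all: the circuit would not be feedback reliable.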
In what form, if any, can such phenomena and results be extended to circuits operating in continuous time? This is the main problem considered (and, hopefully, solved to some extent) in this paper.
In order to tackle these problems one needs more insight into specific properties of continuous-time signals and operators that are not visible in discrete time.
The articles in this special issue of Fundamenta Informaticae are revised versions of selected papers presented at the Third International Conference on Application of Concurrency to System Design (ACSD'03), held in Guimarães, Portugal, 18-20 June, 2003. The first conference in the series was held in Aizu-Wakamatsu, Japan, March 1998, the second in Newcastle upon Tyne, U.K., June 2001. The journal versions of six selected papers from ACSD'01 were published in Fundamenta Informaticae, Vol. 50, Number 2, April 2002. The five papers here have been chosen from the 22 contributions accepted for presentation at ACSD'03, following their additional evaluation and editorial treatment. Like its predecessors, the Guimarães conference was organized to provide a forum for disseminating advanced research results on theory, algorithms, and case studies arising in the design of concurrent systems. The chosen articles cover complexity issues, asynchronous circuits, reactive systems, scheduling, refinement and synthesis problems. The order of contributions is alphabetical. The first article, by Y. Bontemps, P.-Y. Schobbens and C. Löding, is devoted to the synthesis of open reactive systems. They start with scenario-based specifications and propose Live Sequence Charts with a new, game-based semantics to model interactions between the system and its environment. In the second article, J. Cortadella, A. Kondratyev, L. Lavagno, A. Taubin and Y. Watanabe address the problem of quasi-static scheduling for concurrent architectures. Their paper presents a synthesis approach for reactive systems that aims at minimizing the overhead introduced by the operating system and the interaction among the concurrent tasks, while considering multiple concurrent execution resources. In the third article, J. Esparza provides a surprising polynomial-time algorithm for checking consistency of free-choice signal transition graphs. The fourth article, by V. Khomenko, M. Koutny and A. Yakovlev, addresses the problem of detecting state encoding conflicts in Signal Transition Graphs. The problem is proven to be NP-complete. The authors propose an algorithm based on the Boolean Satisfiability (SAT) approach. The SAT problem is of course also NP-complete, but there exist many efficient heuristic algorithms for SAT that work very well in the average case. In the fifth and last article, J.-P. Talpin, P. Le Guernic, S. K. Shukla, F. Doucet, and R. Gupta propose a new approach to formal refinement checking. They apply the concept of multi-clock synchrony, and demonstrate the effectiveness of the approach by an experimental and comparative study of a SpecC programming example. I am very grateful to the authors for submitting their papers and to the referees for their useful criticism.
Department of Computing and Software
Canada L8S 4K1
We propose here Live Sequence Charts with a new, game-based semantics to model interactions between the system and its environment. For constructing programs automatically, we give an algorithm to synthesize either a strategy for the system ensuring that the specification is respected, or, if the specification is unimplementable, a strategy for the environment forcing the system to fail. We introduce the concept of mercifulness, a desirable property of the synthesized program. We give a polynomial time algorithm for synthesizing merciful winning strategies.
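In the simplest (reachability) case, synthesizing a winning strategy on a finite game graph amounts to a backward attractor computation, which also yields a positional strategy for the winner. The sketch below shows this textbook construction under our own encoding; it is not the LSC-specific algorithm of the paper.

```python
def attractor(nodes, edges, player, targets):
    """Backward attractor computation on a finite turn-based game.

    `nodes` maps each node to its owner (0 or 1), `edges` maps each
    node to a list of successors.  Returns the set of nodes from which
    `player` can force the play into `targets`, together with a
    positional strategy for `player` on the newly won nodes.
    """
    attr = set(targets)
    strategy = {}
    changed = True
    while changed:
        changed = False
        for v, owner in nodes.items():
            if v in attr:
                continue
            succs = edges[v]
            if owner == player:
                # the player wins here if some move enters the attractor
                good = [u for u in succs if u in attr]
                if good:
                    attr.add(v)
                    strategy[v] = good[0]
                    changed = True
            elif succs and all(u in attr for u in succs):
                # the opponent cannot avoid the attractor
                attr.add(v)
                changed = True
    return attr, strategy
```

Each node is added to the attractor at most once, so the loop runs in polynomial time; the complement of the attractor is exactly the region from which the opponent can keep the play away from the targets forever.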
This paper presents a synthesis approach for reactive systems that aims at minimizing the overhead introduced by the operating system and the interaction among the concurrent tasks, while considering multiple concurrent execution resources. A formal model based on the notion of scheduling of Petri nets is used to perform the synthesis. We show how the notion of projections of a schedule for the complete system onto the components implemented on separate resources is essential to define the correctness of the partitioned schedule.
Signal Transition Graphs (STGs) are one of the most popular models for the specification of asynchronous circuits. An STG can be implemented if it admits a so-called consistent and complete binary encoding. Deciding this is EXPSPACE-hard for arbitrary STGs, and so a lot of attention has been devoted to the subclass of free-choice STGs, which offers a good compromise between expressive power and analyzability. In recent years, polynomial-time synthesis techniques have been developed for free-choice STGs, but they assume that the STG has a consistent binary encoding. This paper presents the first polynomial algorithm for checking consistency.
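At the level of a single firing sequence, consistency simply says that the rising and falling edges of every signal strictly alternate, in agreement with the signal's initial value. A naive trace-level check can be written as follows (the encoding is ours; the paper's contribution is deciding this property for all runs of a free-choice STG in polynomial time, which this sketch does not attempt):

```python
def consistent(trace, initial):
    """Check consistency of one firing sequence of signal edges.

    `trace` is a list of edges such as ["a+", "b+", "a-"], where "a+"
    is a rising and "a-" a falling edge of signal a; `initial` maps
    each signal to its initial binary value (0 or 1).  The sequence is
    consistent if every edge actually changes its signal's value.
    """
    value = dict(initial)
    for edge in trace:
        signal, polarity = edge[:-1], edge[-1]
        if polarity == "+":
            if value[signal] == 1:    # rising edge while already high
                return False
            value[signal] = 1
        else:
            if value[signal] == 0:    # falling edge while already low
                return False
            value[signal] = 0
    return True
```

For example, the trace a+, b+, a- is consistent from the all-zero state, while a+, a+ is not.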
The behaviour of asynchronous circuits is often described by Signal Transition Graphs (STGs), which are Petri nets whose transitions are interpreted as rising and falling edges of signals. One of the crucial problems in the synthesis of such circuits is that of identifying whether an STG satisfies the Complete State Coding (CSC) requirement (which states that semantically different reachable states must have different binary encodings), and, if necessary, modifying the STG (by, e.g., inserting new signals helping to trace the current state) to meet this requirement. This is usually done using reachability graphs.
In this paper, we avoid constructing the reachability graph of an STG, which can lead to state space explosion, and instead use only the information about causality and structural conflicts between the events involved in a finite and complete prefix of its unfolding. We propose an efficient algorithm for detection of CSC conflicts based on the Boolean Satisfiability (SAT) approach. Following the basic formulation of the state encoding conflict relationship, we present some problem-specific optimization rules. Experimental results show that this technique leads not only to huge memory savings when compared to the CSC conflicts detection methods based on reachability graphs, but also to significant speedups in many cases.
Rising complexity, increasing performance requirements, and shortening time-to-market demands necessitate newer design paradigms for embedded system design. Such newer design methodologies require raising the level of abstraction for design entry, reuse of intellectual property blocks as virtual components, refinement based design, and formal verification to prove correctness of refinement steps. The problem of combining various components from different designers and companies, designed at different levels of abstraction, and embodying heterogeneous models of computation is a difficult challenge for the designer community today. Moreover, one of the gating factors for widespread adoption of the system-level design paradigm is the lack of formal models, methods and tools to support refinement. In the absence of provably correct and adequate behavioral synthesis techniques, the refinement of a system-level description towards its implementation is primarily a manual process. Furthermore, proving that the implementation preserves the properties of the higher system-level design abstraction is an outstanding problem. In this paper, we address these issues and define a formal refinement-checking methodology for system-level design. Our methodology is based on a polychronous model of computation of the multi-clocked synchronous formalism Signal. This formalism is implemented in the Polychrony workbench. We demonstrate the effectiveness of our approach by the experimental case study of a SpecC modeling example. First, we define a technique to systematically model SpecC programs in the Signal formalism. Second, we define a methodology to compare system-level models of SpecC programs and to validate behavioral equivalence relations between these models at different levels of abstraction.
Although we use SpecC modeling examples to illustrate our technique, our methodology is generic and language-independent, and the model that supports it is conceptually minimal, offering a scalable notion and a flexible degree of abstraction.
In this paper, a new palette-based image steganography scheme is proposed. Palette-based images such as GIF files are widely used on the Internet and in web pages, since all browsers can recognize them. Palette-based images can therefore serve as good carriers for transmitting secret messages. In the proposed scheme, the Least Significant Bits (LSBs) of the palette-based image are utilized to hide the secret message using a computational information hiding technique. The adaptive hiding capacity can be computed according to different sizes of the embedding block.
Experimental results also show that the artifacts present in the traditional LSB-based method are unnoticeable here. The number of pixels modified in the embedding process is decreased, while good stego-image quality is maintained and the cover image is left unblemished by the camouflaged information. Furthermore, the adaptive hiding capacity makes it possible to protect embedded secret messages of different sizes.
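The core LSB mechanism referred to above can be sketched on a sequence of palette indices. This is a deliberately stripped-down illustration with function names of our own: the paper's scheme additionally adapts the hiding capacity per embedding block and reduces the number of modified pixels, both of which are omitted here.

```python
def embed(indices, message_bits):
    """Hide message bits in the least significant bits of a sequence
    of palette indices (the basic LSB idea only; per-block adaptive
    capacity from the paper is not modeled)."""
    assert len(message_bits) <= len(indices), "message too long"
    stego = list(indices)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit    # overwrite the LSB
    return stego

def extract(stego, n_bits):
    """Recover the first n_bits hidden by `embed`."""
    return [p & 1 for p in stego[:n_bits]]
```

A round trip changes each carrier index by at most 1, which is why the visual distortion stays small when adjacent palette entries have similar colors.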
In this paper we present a timed extension of the AltaRica formalism. Following previous work, we first extend the semantics of AltaRica with time and define timed components and timed nodes. Moreover, we lift the priority features of AltaRica to the timed case. We obtain a timed version of AltaRica, called Timed AltaRica. Finally, we give a translation of a Timed AltaRica specification into a usual timed automaton. These are the semantic foundations of a high-level hierarchical language for the specification of timed systems.
ω-powers of finitary languages are ω-languages of the form V^ω, where V is a finitary language over a finite alphabet Σ. Since the set Σ^ω of infinite words over Σ can be equipped with the usual Cantor topology, the question of the topological complexity of ω-powers naturally arises; it has been raised by Niwinski, by Simonnet, and by Staiger. It has been proved that for each integer n ≥ 1 there exist some ω-powers of context-free languages which are Π^0_n-complete Borel sets, and that there exists a context-free language L such that L^ω is analytic but not Borel. But the question was still open whether there exists a finitary language V such that V^ω is a Borel set of infinite rank. We answer this question in this paper, giving an example of a finitary language whose ω-power is Borel of infinite rank.
A top-down method is presented for compiling queries in unstratified propositional disjunctive deductive databases under the disjunctive well-founded semantics. Compilation entails the construction of a set of compilation trees, and the run-time processing of the compiled query then amounts to checking that some such tree can be extended within the extensional database to yield a tree that encapsulates a proof of the original query. Issues surrounding the extension of our techniques to the first-order level are discussed.
The max-bisection problem is to find a partition of the vertices of a graph into two equal size subsets that maximizes the number of edges with endpoints in both subsets.
We obtain new improved approximation ratios for the max-bisection problem on low-degree k-regular graphs for 3 ≤ k ≤ 8, by deriving improved transformations from a maximum cut into a maximum bisection. In the case of 3-regular graphs we obtain an approximation ratio of 0.854, and in the cases of 4- and 5-regular graphs, approximation ratios of 0.805 and 0.812, respectively.
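The starting point of such transformations, turning a cut into a bisection, can be sketched naively: repeatedly move, from the larger side, the vertex whose move sacrifices the fewest cut edges. The code below is only this simple greedy balancing step, under an encoding of our own; the paper's improved transformations, which achieve the stated ratios, are considerably more refined.

```python
def rebalance(graph, side_a, side_b):
    """Turn a cut (side_a, side_b) into a bisection by greedily moving,
    from the larger side, the vertex whose move costs the fewest cut
    edges.  `graph` maps each vertex to the set of its neighbours.

    A naive balancing step for illustration only; it carries no
    approximation guarantee by itself.
    """
    a, b = set(side_a), set(side_b)
    while abs(len(a) - len(b)) > 1:
        big, small = (a, b) if len(a) > len(b) else (b, a)

        def cost(v):
            # cut edges lost minus cut edges gained by moving v
            return len(graph[v] & small) - len(graph[v] & big)

        v = min(big, key=cost)
        big.remove(v)
        small.add(v)
    return a, b
```

On the path 0-1-2-3, starting from the trivial cut with all vertices on one side, the greedy step recovers the optimal bisection {0, 2} versus {1, 3}, which cuts all three edges.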
A propositional knowledge base can be seen as a compact representation of a set of models. When a knowledge base T is updated with a formula P, the resulting set of models can be represented in two ways: either by a theory T′ that is equivalent to T*P, or by the pair ⟨T, P⟩. The second representation can be super-polynomially more compact than the first. In this paper, we prove that the compactness of this representation depends on the specific semantics of *: Winslett's semantics, for example, is more compact than Ginsberg's.
In this paper we focus on the properties of infinite sets of types that have finite substitutional images. Since the number of such images may be infinite, we provide a way of reducing it. Having done this, we apply the new methods in the field of learnability of classical categorial grammars.
In this paper, cluster analysis based on fuzzy relations is investigated. Starting from fuzzy proximity relations, cluster analysis is performed using rough approximation. A clustering algorithm is proposed and illustrated with an example.
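For comparison with the rough-approximation approach, the classical route from a fuzzy proximity relation to clusters computes the max-min transitive closure (yielding a fuzzy similarity relation) and then takes a λ-cut. The sketch below implements that standard construction, not the paper's algorithm; the names are ours.

```python
def transitive_closure(R):
    """Max-min transitive closure of a fuzzy relation, given as an
    n x n matrix of membership degrees.  Iterates composition with
    max-min until a fixed point; applied to a proximity (reflexive,
    symmetric) relation, the result is a fuzzy similarity relation."""
    n = len(R)
    while True:
        S = [[max(min(R[i][k], R[k][j]) for k in range(n))
              for j in range(n)] for i in range(n)]
        S = [[max(S[i][j], R[i][j]) for j in range(n)] for i in range(n)]
        if S == R:
            return S
        R = S

def lambda_cut_clusters(R, lam):
    """Group {0..n-1} by the crisp relation R[i][j] >= lam.  On a
    similarity relation the lambda-cut is an equivalence, so the
    groups form a partition."""
    clusters, seen = [], set()
    n = len(R)
    for i in range(n):
        if i in seen:
            continue
        cluster = {j for j in range(n) if R[i][j] >= lam}
        seen |= cluster
        clusters.append(cluster)
    return clusters
```

Raising λ refines the partition, so the family of λ-cuts forms a hierarchy of clusterings.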