101 (1-2) 2010

  1. Concurrency Specification and Programming (CS&P).  Preface i-i
  2. Synthesis and Analysis of Net Structures and Transition Graphs with Inhibitor Relations
    Ludwik Czaja,  Manfred Kudlek 1-7
    We extend the investigation of synthesis and analysis of net structures and transition graphs to those with inhibitor relations, and show that the results obtained for the simpler case can be extended as well.
  3. Bounded Parametric Verification for Distributed Time Petri Nets with Discrete-Time Semantics
    Michał Knapik,  Wojciech Penczek,  Maciej Szreter,  Agata Półrola 9-27
    Bounded Model Checking (BMC) is an efficient technique applicable to verification of temporal properties of (timed) distributed systems. In this paper we show for the first time how to apply BMC to parametric verification of time Petri nets with discrete-time semantics. The properties are expressed by formulas of the logic PRTECTL - a parametric extension of the existential fragment of Computation Tree Logic (CTL).
  4. Safeness for Object Nets
    Michael Köhler-Bußmeier and Frank Heitmann 29-43
    In this paper we discuss the concept of safeness for Elementary Object Nets (EOS). Object nets are Petri nets which have Petri nets as tokens - an approach known as the nets-within-nets paradigm. Object nets are called elementary if the net system has a two-levelled structure. The well-known p/t nets can be considered as a special case of EOS.
    For p/t nets the concept of safeness means that there is at most one token on each place. Since object nets have nested markings, there are different possibilities to generalise this idea for EOS. In this paper we define different variants of EOS safeness, discuss their relationships, show that they all coincide for p/t-like EOS, and address the complexity of well-known Petri net problems like reachability and liveness for this new class of object nets.
  5. Minimal Regions of ENL-Transition Systems
    Maciej Koutny,  Marta Pietkiewicz-Koutny 45-58
    One of the possible ways of constructing concurrent systems is their automated synthesis from behavioural specifications. In this paper, we look at a particular instance of this approach which aims at constructing GALS (globally asynchronous locally synchronous) systems from specifications given in terms of transition systems with arcs labelled by steps of executed actions. GALS systems are represented by Elementary Net Systems with Localities (ENL-systems), each locality defining a set of co-located actions. The synthesis procedure is based on the regions of transition systems and we provide a number of criteria aimed at generating a minimal set of regions (conditions) of an ENL-system generating a given transition system.
  6. Interacting Workflow Nets for Workflow Process Re-Engineering
    Irina A. Lomazova 59-70
    In this work we consider modeling of workflow systems with Petri nets. To increase flexibility and provide tools for the re-engineering of workflow models, we extend the formalism of workflow nets by considering systems of interacting nets. Then we study soundness - the main correctness property of workflow processes - and show that, for a special class of structured workflow systems, soundness can be proved in a compositional way.
  7. Partial Order Reductions for Model Checking Temporal-epistemic Logics over Interleaved Multi-agent Systems
    Alessio Lomuscio,  Wojciech Penczek,  Hongyang Qu 71-90
    We investigate partial order reduction techniques for the verification of multi-agent systems. We consider the case of interleaved interpreted systems. These are a particular class of interpreted systems, a mainstream MAS formalism, in which only one action at a time is performed in the system. We present a notion of stuttering equivalence and prove the semantic equivalence of stuttering-equivalent traces with respect to linear and branching-time temporal logics for knowledge without the next-time operator. We give algorithms to reduce the size of the models before the model checking step and show preservation properties. We evaluate the technique by discussing implementations and the experimental results obtained against well-known examples in the MAS literature.
  8. On Computing Extensions and Restrictions of Information Systems Noting Some Order Properties
    Marek Pałasiński,  Krzysztof Pancerz 91-103
    The aim of this paper is to give a basic view of computing consistent extensions and restrictions of information systems, noting some order properties of information system families. Information systems are understood in Pawlak's sense as systems of knowledge representation. An extension appears if we add to the original information system some new objects corresponding to known attribute values. Analogously, a restriction appears if we remove some objects from the original information system. The presented observations enable us to find proper descriptions (preserving some features) of concurrent systems whose behavior is represented by means of information systems.
  9. Infinite Product of Traces Represented by Projections
    Roman R. Redziejowski 105-113
    The construction of an associative ω-product of traces from an earlier paper is revisited using the projection representation of traces. Using projections instead of trace prefixes results in very simple definitions and proofs.
  10. Toward Intelligent Searching the Rough Set Database System (RSDS): an Ontological Approach
    Zbigniew Suraj,  Piotr Grochowalski 115-123
    The main goal of this paper is to outline an approach to intelligent searching of the Rough Set Database System (RSDS). RSDS is a bibliographical system containing bibliographical descriptions of publications connected with the methodology of rough sets and its applications. The presented approach is based on ontologies which model the considered domain (rough set theory, its applications and related fields) and information about publications coming from, for example, abstracts.
  11. Multi-Agent Decision Taking System
    Alicja Wakulicz-Deja,  Małgorzata Przybyła-Kasperek 125-141
    The paper presents the process of taking global decisions on the basis of the knowledge of local decision systems, whose mutually complementary observations of objects can nevertheless be mutually contradictory. The authors suggest organizing the local decision systems into a multi-agent system with a hierarchical structure. The structure of the multi-agent system and the theoretical aspects of its organization are presented. A density-based algorithm has been used in the process of taking global decisions. Furthermore, the paper presents the results of experiments conducted on realistic data.
  12. Perception and Classification. A Note on Near Sets and Rough Sets
    Marcin Wolski 143-155
    The paper aims to establish topological links between perception of objects (as it is defined in the framework of near sets) and classification of these objects (as it is defined in the framework of rough sets). In the near set approach, the discovery of near sets (i.e. sets containing objects with similar descriptions) starts with the selection of probe functions which provide a basis for describing and discerning objects. On the other hand, in the rough set approach, the classification of objects is based on object attributes which are collected into information systems (or data tables). As is well known, an information system can be represented as a topological space (U, τ_E). If we pass from an approximation space (U, E) to the quotient space U/E, where points represent indiscernible objects of U, then U/E will be endowed with the discrete topology induced (via the canonical projection) by τ_E. The main objective of this paper is to show how probe functions can provide new topologies on the quotient set U/E and, in consequence, new (perceptual) topologies on U.

101(3) 2010

  1. Approximating Tree Edit Distance through String Edit Distance for Binary Tree Codes
    Taku Aratsu,  Kouichi Hirata,  Tetsuji Kuboyama 151-171
    This article proposes an approximation of the tree edit distance through the string edit distance for binary tree codes, instead of for the Euler strings introduced by Akutsu (2006). Here, a binary tree code is a string obtained by traversing, in preorder, a binary tree representation (with two kinds of dummy nodes) of a tree. Then, we show that s/2 ≤ t ≤ (h+1)s + h, where t is the tree edit distance between the trees, s is the string edit distance between their binary tree codes, and h is the minimum height of the trees.
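    The string edit distance s in the bound above is the classic Levenshtein distance; a minimal dynamic-programming sketch (the binary tree encoding itself is not reproduced here):

```python
# Classic dynamic-programming string edit distance (Levenshtein),
# the quantity s in the bound s/2 <= t <= (h+1)s + h.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # prev[j] holds the distance between a[:i-1] and b[:j] from the previous row
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```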
  2. Finding Patterns In Given Intervals
    Maxime Crochemore,  Costas S. Iliopoulos,  Marcin Kubica,  M. Sohel Rahman,  Tomasz Waleń 173-186
    In this paper, we study the pattern matching problem in given intervals. Depending on whether the intervals are given a priori for pre-processing, during the query along with the pattern, or in both cases, we develop efficient solutions for different variants of this problem. In particular, we present efficient indexing schemes for each of the above variants of the problem.
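    As an illustration of the problem statement (not of the paper's indexing schemes), a naive sketch that reports only those occurrences of a pattern whose starting positions fall in the given intervals:

```python
# Naive interval-restricted pattern matching: report occurrences of
# `pattern` in `text` whose starting positions lie in one of the
# (inclusive) query intervals. The paper's efficient indexing schemes
# avoid this brute-force scan.
def occurrences_in_intervals(text, pattern, intervals):
    hits = []
    last_start = len(text) - len(pattern)
    for lo, hi in intervals:
        for i in range(lo, min(hi, last_start) + 1):
            if text[i:i + len(pattern)] == pattern:
                hits.append(i)
    return hits

print(occurrences_in_intervals("abracadabra", "abra", [(0, 3), (5, 10)]))  # [0, 7]
```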
  3. Similarity-Based Classification in Relational Databases
    Piotr Hońko 187-213
    In this paper, we introduce a method for measuring similarity of objects in a relational database (relational objects, in short). We also propose and investigate an algorithm, SC, for the classification of relational objects. The task of classification is carried out based on the similarity of the objects to predefined classes. An object to be classified is assigned to the class to which it is most similar. The similarity of an object to a class is understood as its similarity to a class representative. Several methods for computing the class representative are proposed. We test the algorithm on real and artificial databases, and compare the results obtained by the algorithm with those obtained by other algorithms known from the literature. We also present our approach in the context of granular computing.
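    A simplified sketch of the classification rule described above, with plain feature vectors in place of relational objects, the per-class centroid as one possible class representative, and Euclidean distance standing in for the paper's similarity measure (closer = more similar):

```python
import math

def centroid(vectors):
    # component-wise mean of the class's training vectors
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(obj, classes):
    # assign obj to the class whose representative it is most similar to
    # (here: smallest Euclidean distance to the class centroid)
    reps = {label: centroid(vs) for label, vs in classes.items()}
    return min(reps, key=lambda label: distance(obj, reps[label]))

classes = {"small": [[1, 1], [2, 1]], "large": [[9, 8], [10, 10]]}
print(classify([1.5, 1.2], classes))  # small
```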
  4. An Improved Bound for an Extension of Fine and Wilf's Theorem and Its Optimality
    Lila Kari and Shinnosuke Seki 215-236
    Considering two DNA molecules which are Watson-Crick (WK) complementary to each other as "equivalent" with respect to the information they encode enables us to extend the classical notions of repetition, period, and power. WK-complementarity has been modelled mathematically by an antimorphic involution θ, i.e., a function θ such that θ(xy) = θ(y)θ(x) for any x, y ∈ Σ*, and θ² is the identity. With WK-complementarity thus modelled, any word which is a repetition of u and θ(u), such as uu, uθ(u)u, and uθ(u)θ(u)θ(u), can be regarded as repetitive in this sense, and hence called a θ-power of u. Taking the notion of θ-power into account, Fine and Wilf's theorem was extended as follows: "given an antimorphic involution θ and words u, v, if a θ-power of u and a θ-power of v have a common prefix of length at least b(|u|, |v|) = 2|u| + |v| - gcd(|u|, |v|), then u and v are θ-powers of a same word." In this paper, we obtain an improved bound b′(|u|, |v|) = b(|u|, |v|) - ⌊gcd(|u|, |v|)/2⌋. Then we show all the cases when this bound is optimal by providing all the pairs of words (u, v) such that they are not θ-powers of a same word, but one can construct a θ-power of u and a θ-power of v whose maximal common prefix is of length equal to b′(|u|, |v|) - 1. Furthermore, we characterize such words in terms of Sturmian words.
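    The involution θ can be instantiated as the DNA reverse complement; a small illustrative sketch of θ and of checking whether a word is a θ-power of u:

```python
# The antimorphic involution θ instantiated as the Watson-Crick
# complement on the DNA alphabet: θ(xy) = θ(y)θ(x) and θ(θ(x)) = x.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def theta(w: str) -> str:
    # reverse the word and complement each letter (antimorphism)
    return "".join(COMPLEMENT[c] for c in reversed(w))

def is_theta_power(w: str, u: str) -> bool:
    # w is a θ-power of u if it is a concatenation of blocks u and θ(u)
    if len(u) == 0 or len(w) % len(u) != 0:
        return False
    blocks = [w[i:i + len(u)] for i in range(0, len(w), len(u))]
    return all(b in (u, theta(u)) for b in blocks)

u = "ACG"
print(theta(u))                             # CGT
print(is_theta_power(u + theta(u) + u, u))  # True
```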
  5. Level Mapping Induced Loop Formulas for Weight Constraint and Aggregate Logic Programs
    Guohua Liu and Jia-Huai You 237-255
    Level mapping and loop formulas are two different means to justify and characterize answer sets for normal logic programs. Both of them specify conditions under which a supported model is an answer set. Though serving a similar purpose, in the past the two have been studied largely in isolation from each other. In this paper, we study level mapping and loop formulas for weight constraint and aggregate (logic) programs. We show that, for these classes of programs, loop formulas can be devised from level mapping characterizations. First, we formulate a level mapping characterization of stable models and show that it leads to a new formulation of loop formulas for arbitrary weight constraint programs, without using any new atoms. This extends a previous result on loop formulas for weight constraint programs, where weight constraints contain only positive literals. Second, since aggregate programs are closely related to weight constraint programs, we further use level mapping to characterize the underlying answer set semantics, based on which we formulate loop formulas for aggregate programs. The main result is that for aggregate programs not involving the inequality comparison operator, the dependency graphs can be built in polynomial time, in contrast to the previously known exponential-time method.

101(4) 2010

  1. Computing Maximal Error-detecting Capabilities and Distances of Regular Languages
    Stavros Konstantinidis,  Pedro V. Silva 257-270
    A (combinatorial) channel consists of pairs of words representing all possible input-output channel situations. In a past paper, we formalized the intuitive concept of the "largest amount of errors" detectable by a given language L by defining the maximal error-detecting capabilities of L with respect to a given class of channels, and we showed how to compute all maximal error-detecting capabilities (channels) of a given regular language with respect to the class of rational channels and a class of channels involving only the substitution error type. In this paper we resolve the problem for channels involving any combination of the basic error types: substitution, insertion, deletion. Moreover, we consider the problem of finding the inverses of these channels, in view of the fact that L is error-detecting for a channel γ if and only if it is error-detecting for the inverse of γ. We also discuss a natural method of reducing the problem of computing (inner) distances of a given regular language L to the problem of computing the maximal error-detecting capabilities of L.
  2. Boruta - A System for Feature Selection
    Miron B. Kursa,  Aleksander Jankowski,  Witold R. Rudnicki 271-285
    Machine learning methods are often used to classify objects described by hundreds of attributes; in many applications of this kind a great fraction of the attributes may be totally irrelevant to the classification problem. Moreover, one usually cannot decide a priori which attributes are relevant. In this paper we present an improved version of an algorithm for identification of the full set of truly important variables in an information system. It is an extension of the random forest method which utilises the importance measure generated by the original algorithm. It compares, in an iterative fashion, the importances of the original attributes with the importances of their randomised copies. We analyse the performance of the algorithm on several examples of synthetic data, as well as on a biologically important problem, namely the identification of sequence motifs that are important for the aptameric activity of short RNA sequences.
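    A toy sketch of the shadow-attribute idea described above, not the Boruta implementation itself: the importance of each attribute is compared against the importances of shuffled copies, with |Pearson correlation| to the class standing in for the random-forest importance measure:

```python
import random

def corr(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_relevant(columns, labels, rng):
    # build one shuffled "shadow" per attribute to estimate a null
    # importance level, then keep attributes that beat the best shadow
    shadows = []
    for col in columns:
        shadow = col[:]
        rng.shuffle(shadow)  # destroys any real association with the class
        shadows.append(shadow)
    threshold = max(abs(corr(s, labels)) for s in shadows)
    return [i for i, col in enumerate(columns)
            if abs(corr(col, labels)) > threshold]

rng = random.Random(0)
labels = [i % 2 for i in range(200)]
informative = [y + rng.gauss(0, 0.1) for y in labels]  # tracks the class
irrelevant = [rng.gauss(0, 1) for _ in labels]         # pure noise
# the informative column (index 0) should be selected
print(select_relevant([informative, irrelevant], labels, rng))
```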
  3. A Robust Multiple Classifier System for Pixel Classification of Remote Sensing Images
    Ujjwal Maulik,  Debasis Chakraborty 286-304
    Satellite image classification is a complex process that may be affected by many factors. This article addresses the problem of pixel classification of satellite images by a robust multiple classifier system that combines k-NN, a support vector machine (SVM) and an incremental learning algorithm (IL). The effectiveness of this combination is investigated for satellite imagery, which usually has overlapping class boundaries. These classifiers are initially designed using a small set of labeled points. The combination of these algorithms is based on the majority voting rule. The effectiveness of the proposed technique is first demonstrated on numeric remote sensing data described in terms of feature vectors, and then on identifying different land cover regions in remote sensing imagery. Experimental results on the numeric data as well as on two remote sensing data sets show that combining classifiers can effectively increase the accuracy level. Comparison is made with each of the single classifiers in terms of kappa value, accuracy, cluster quality indices and the visual quality of the classified images.
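    A minimal sketch of the majority voting rule used to combine the three classifiers (classifier internals omitted; each classifier is represented here only by its predicted label):

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one label per classifier for a single pixel;
    # the most frequent label wins (ties broken by first occurrence)
    return Counter(predictions).most_common(1)[0][0]

# three classifiers (e.g. k-NN, SVM, IL) vote on one pixel's land-cover label
print(majority_vote(["water", "water", "urban"]))  # water
```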
  4. Enhancement of Multispectral Ikonos Satellite Image Using Quantum Information Processing
    N.R. Shanker,  S.S. Ramakrishnan 305-320
    The objective of the paper is to demonstrate how an image enhancement technique, the quantum Fourier transform, improves the possibilities for visual interpretation of a multispectral IKONOS satellite image covering an urban area. Image representation is usually based on a mathematical function of two spatial variables, x and y; the value of the image function at a spatial point (x, y) represents the intensity of the image at that point. An alternative method of image representation is transformation, a mathematical method in which an image is represented as a sum of complex exponentials of varying magnitudes, frequencies and phases. The transformation method of image representation is useful for image analysis, interpretation, filtering, etc. In this paper an image is represented based on quantum mechanics for image analysis: the image is transformed using the quantum Fourier transform. If, for a specific application, the processed image is better than the original image, the processing is called enhancement. Image enhancement methods fall into two broad categories: spatial domain methods and frequency domain methods. Frequency domain methods are based on transforms, and here a quantum transform is implemented. It is shown how the quantum Fourier transform is applicable to remote sensing satellite images, particularly for image enhancement. The IKONOS satellite image covers the city of Chennai in Tamil Nadu, India. From the experimental results, it is observed that the pixels have low correlation when the quantum Fourier transform is applied.
  5. Another Constructed Chaotic Image Encryption Scheme Based on Toeplitz Matrix and Hankel Matrix
    Guodong Ye 321-333
    This paper presents a novel image encryption scheme based on the Toeplitz matrix and the Hankel matrix. According to the definition and expression of the Toeplitz matrix and the Hankel matrix, we permute the positions of image pixels to confuse the relationship between the plain-image and the cipher-image, and construct a new key. To enhance security further, the hyper-chaotic systems of Lü and Chen are used to change the grey distribution of the image pixels simultaneously. Numerical experimental results demonstrate that the key space is large enough and that the key is sensitive to initial conditions, so the proposed scheme resists the brute-force attack. Additionally, the distribution of grey values in the encrypted image has random-like behavior, resisting statistical analysis. The proposed encryption scheme can thus offer a highly secure and efficient way for real-time image encryption and transmission in practice.
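    For reference, minimal constructions of the two matrix families named above (the paper's key-dependent pixel permutation is not reproduced):

```python
# A Toeplitz matrix is constant along each descending diagonal
# (entry (i, j) depends only on i - j); a Hankel matrix is constant
# along each ascending anti-diagonal (entry (i, j) depends only on i + j).
def toeplitz(first_col, first_row):
    n, m = len(first_col), len(first_row)
    return [[first_col[i - j] if i >= j else first_row[j - i]
             for j in range(m)] for i in range(n)]

def hankel(first_col, last_row):
    n, m = len(first_col), len(last_row)
    return [[first_col[i + j] if i + j < n else last_row[i + j - n + 1]
             for j in range(m)] for i in range(n)]

print(toeplitz([1, 2, 3], [1, 4, 5]))  # [[1, 4, 5], [2, 1, 4], [3, 2, 1]]
print(hankel([1, 2, 3], [3, 6, 7]))    # [[1, 2, 3], [2, 3, 6], [3, 6, 7]]
```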