107 (1) 2011


  1. K-Comma Codes and Their Generalizations
    Bo Cui,  Lila Kari,  Shinnosuke Seki 1-18
    In this paper, we introduce the notion of k-comma codes - a proper generalization of the notion of comma-free codes. For a given positive integer k, a k-comma code is a set L over an alphabet Σ with the property that LΣᵏL ∩ Σ⁺LΣ⁺ = ∅. Informally, in a k-comma code, no codeword can be a subword of the catenation of two other codewords separated by a "comma" of length k. A k-comma code is indeed a code, that is, any sequence of codewords is uniquely decipherable. We extend this notion to that of k-spacer codes, with commas of length less than or equal to a given k. We obtain several basic properties of k-comma codes and their generalizations, k-comma intercodes, and some relationships between the families of k-comma intercodes and other classical families of codes, such as infix codes and bifix codes. Moreover, we introduce the notion of n-k-comma intercodes, and obtain, for each k ≥ 0, several hierarchical relationships among the families of n-k-comma intercodes, as well as a characterization of the family of 1-k-comma intercodes.
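    The defining condition is directly checkable for a finite language. The following minimal Python sketch (an illustration of ours, not taken from the paper) decides by brute force whether a finite set L is a k-comma code; taking k = 0 recovers the comma-free condition:

      from itertools import product

      # L is a k-comma code iff L Σ^k L and Σ^+ L Σ^+ are disjoint, i.e. no
      # codeword occurs strictly inside u + comma + v for codewords u, v and
      # a comma of length k.
      def is_k_comma_code(L, sigma, k):
          for u, v in product(L, repeat=2):
              for comma in map("".join, product(sigma, repeat=k)):
                  w = u + comma + v
                  for x in L:
                      # strict interior: nonempty prefix and suffix around x
                      if any(w[i:i + len(x)] == x
                             for i in range(1, len(w) - len(x))):
                          return False
          return True

      print(is_k_comma_code({"ab"}, "ab", 1))   # True
      print(is_k_comma_code({"aba"}, "ab", 1))  # False: "aba" occurs inside "aba" + "b" + "aba"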
  2. Formal Analysis of SystemC Designs in Process Algebra
    Hossein Hojjat,  Mohammad Reza Mousavi,  Marjan Sirjani 19-42
    SystemC is an IEEE-standard system-level language used in hardware/software co-design and widely adopted in industry. This paper describes a formal approach to verifying SystemC designs by providing a mapping to the process algebra mCRL2. Our mapping formalizes both the simulation semantics and the exhaustive state-space exploration of SystemC designs. By exploiting the existing reduction techniques of mCRL2 and its model-checking tools, we efficiently locate race conditions in a system and resolve them. A tool has been implemented to perform the proposed mapping automatically. This mapping and the implemented tool enabled us to apply process-algebraic verification techniques to a number of case studies, including the formal analysis of a single-cycle and a pipelined MIPS processor specified in SystemC.
  3. Mechanical Analysis of Finite Idempotent Relations
    Florian Kammüller 43-65
    We use the technique of interactive theorem proving to develop the theory and an enumeration technique for finite idempotent relations. Starting from a short mathematical characterization of finite idempotents defined and proved in Isabelle/HOL, we first derive an iterative procedure to generate all instances of idempotents over a finite set. From there, we develop a more precise theoretical characterization giving rise to an efficient predicate that can be executed in the programming language ML. Idempotent relations represent a very basic, general mathematical concept, but the steps taken to develop their theory with the help of Isabelle/HOL are representative of how algorithms can be developed from a mathematical specification.
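    For reference, a relation R is idempotent when R ∘ R = R. A brute-force Python enumeration over a tiny finite set (our illustration, unrelated to the paper's Isabelle/HOL development and executable-ML predicate):

      from itertools import product

      # Enumerate idempotent relations (R o R = R) on {0, ..., n-1} by testing
      # all 2^(n^2) candidate relations; feasible only for tiny n.
      def compose(R, S):
          return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

      def idempotents(n):
          pairs = [(i, j) for i in range(n) for j in range(n)]
          for bits in product([0, 1], repeat=len(pairs)):
              R = {p for p, bit in zip(pairs, bits) if bit}
              if compose(R, R) == R:
                  yield R

      print(sum(1 for _ in idempotents(2)))  # 11 idempotent relations on {0, 1}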
  4. Rough Truth Degrees of Formulas and Approximate Reasoning in Rough Logic
    Yanhong She,  Xiaoli He,  Guojun Wang 67-83
    A propositional logic PRL for rough sets was proposed in [1]. In this paper, we first introduce the concepts of rough (upper, lower) truth degrees on the set of formulas in PRL. Then, by grading the rough equality relations, we propose the concepts of rough (upper, lower) similarity degrees. Finally, three different pseudo-metrics on the set of rough formulas are obtained, and thus an approximate reasoning mechanism is established.
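    As rough-set background (our illustration; the paper works with formulas of PRL rather than raw sets): the lower and upper approximations from which graded rough truth values can be built are computable directly from a partition of the universe into equivalence classes, and the two ratios below behave like lower and upper truth degrees of a crisp set:

      # Rough lower/upper approximations of a set X under a partition of the
      # universe into equivalence classes.
      def rough_approximations(partition, X):
          lower, upper = set(), set()
          for block in partition:
              if block <= X:        # class wholly inside X
                  lower |= block
              if block & X:         # class meeting X
                  upper |= block
          return lower, upper

      U = {1, 2, 3, 4, 5, 6}
      lower, upper = rough_approximations([{1, 2}, {3, 4}, {5, 6}], {1, 2, 3})
      print(len(lower) / len(U), len(upper) / len(U))  # 0.33... and 0.66...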
  5. Analysis of Recognition of a Musical Instrument in Sound Mixes Using Support Vector Machines
    Alicja Wieczorkowska,  Elżbieta Kubera,  Agnieszka Kubik-Komar 85-104
    Experiments with recognition of the dominating musical instrument in sound mixes are interesting from the point of view of music information retrieval, but this task can be very difficult if the mixed sounds are of the same pitch. In this paper, we analyse experiments on recognition of the dominating instrument in mixes of same-pitch sounds of definite pitch. Sounds from one octave (no. 4 in MIDI notation) were chosen, and instruments of various types, including percussive instruments, were investigated. Support vector machines were used in our experiments, and a statistical analysis of the results was also carefully performed. After discussing the outcomes of these experiments and analyses, we conclude our paper with suggestions regarding directions of possible future research on this subject.
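    A minimal sketch of the classification core of such experiments, using scikit-learn's SVC; the feature matrix and the four instrument classes below are hypothetical stand-ins for the paper's audio descriptors and instruments:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 20))       # 200 sound mixes, 20 spectral features each
      y = rng.integers(0, 4, size=200)     # dominating-instrument labels (hypothetical)
      print(cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5).mean())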
  6. Cryptanalysis of an Identity Based Signcryption without Random Oracles
    Hu Xiong,  Zhiguang Qin,  Fagen Li 105-109
    Signcryption is a cryptographic primitive that performs digital signature and public key encryption simultaneously, at lower computational costs and communication overhead than signing and encrypting separately. Recently, Yu et al. proposed an identity based signcryption scheme with a claimed proof of security. We show that their scheme is not secure even against a chosen-plaintext attack.

107 (2-3) 2011


  1. RCRA 2009: Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion. Preface i-ii
  2. Flexible Plan Verification: Feasibility Results
    Amedeo Cesta,  Simone Fratini,  Andrea Orlandini,  Alberto Finzi,  Enrico Tronci 111-137
    Timeline-based planning techniques have demonstrated wide application possibilities in heterogeneous real-world domains. For a wider diffusion of this technology, a more thorough investigation of the connections with formal methods is needed. This paper is part of a research program aimed at studying the interconnections between timeline-based planning and standard techniques for formal validation and verification (V&V). In this line of research, an open issue is the link between plan generation and plan execution, from the particular perspective of verifying temporal plans before their actual execution. The present work addresses the problem of verifying flexible temporal plans, i.e., those plans usually produced by least-commitment temporal planners. Such plans impose only minimal temporal constraints among the planned activities, and hence can adapt to on-line environmental changes by trading some of the retained flexibility.
    This work shows how a model-checking verification tool based on Timed Game Automata (TGA) can be used to verify such plans. In particular, flexible plan verification is cast as a model-checking problem on those automata. The effectiveness of the proposed approach is shown by presenting a detailed experimental analysis with a real-world domain which is used as a flexible benchmark.
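    To make "flexible plan" concrete: the minimal temporal constraints mentioned above can be read as interval bounds between time points, and one basic sanity check on such a plan is consistency of the underlying simple temporal network. The Python sketch below (our illustration under that reading, far weaker than the TGA model checking used in the paper; activities and bounds are hypothetical) detects inconsistency as a negative cycle:

      INF = float("inf")

      def stn_consistent(n, constraints):
          # constraints: list of (i, j, lo, hi), meaning lo <= t_j - t_i <= hi
          d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
          for i, j, lo, hi in constraints:
              d[i][j] = min(d[i][j], hi)    # t_j - t_i <= hi
              d[j][i] = min(d[j][i], -lo)   # t_i - t_j <= -lo
          for k in range(n):                # Floyd-Warshall shortest paths
              for i in range(n):
                  for j in range(n):
                      d[i][j] = min(d[i][j], d[i][k] + d[k][j])
          return all(d[i][i] >= 0 for i in range(n))   # no negative cycle

      # A ends 2..5 after it starts; B starts 0..3 after A ends: consistent.
      print(stn_consistent(3, [(0, 1, 2, 5), (1, 2, 0, 3)]))  # True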
  3. Parallel QBF Solving with Advanced Knowledge Sharing
    Matthew Lewis,  Tobias Schubert,  Bernd Becker,  Paolo Marin,  Massimo Narizzano,  Enrico Giunchiglia 139-166
    In this paper we present the parallel QBF Solver PaQuBE. This new solver leverages the additional computational power that can be exploited from modern computer architectures, from pervasive multi-core boxes to clusters and grids, to solve more relevant instances faster than previous-generation solvers. Furthermore, PaQuBE's progressive MPI-based parallel framework is the first to support advanced knowledge sharing in which solution cubes as well as conflict clauses can be exchanged between solvers. Knowledge sharing plays a critical role in the performance of PaQuBE. However, due to the overhead associated with sending and receiving MPI messages, and the restricted communication/network bandwidth available between solvers, it is essential to optimize not only what information is shared, but the way in which it is shared. In this context, we compare multiple conflict clause and solution cube sharing strategies, and finally show that an adaptive method provides the best overall results.
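    A toy version of the kind of sharing policy compared above: exchange only short conflict clauses or solution cubes (short ones prune more and cost less to transmit), and adapt the size limit to how useful received knowledge proves. The limits and the adaptive rule below are hypothetical, not PaQuBE's actual MPI protocol:

      class SharingPolicy:
          def __init__(self, limit=8):
              self.limit = limit

          def should_share(self, clause_or_cube):
              # Share only clauses/cubes up to the current size limit.
              return len(clause_or_cube) <= self.limit

          def adapt(self, useful_ratio):
              # Loosen the limit while shared knowledge keeps paying off.
              if useful_ratio > 0.5:
                  self.limit = min(self.limit + 1, 32)
              else:
                  self.limit = max(self.limit - 1, 2)

      policy = SharingPolicy()
      print(policy.should_share([1, -4, 7]))  # True: 3 literals <= limit 8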
  4. An Empirical Analysis of Some Heuristic Features for Planning through Local Search and Action Graphs
    Alfonso Gerevini,  Alessandro Saetti,  Ivan Serina 167-197
    Planning through local search and action graphs is a powerful approach to fully-automated planning which is implemented in the well-known LPG planner. The approach is based on a stochastic local search procedure exploring a space of partial plans and several heuristic features with different possible options. In this paper, we experimentally analyze the most important of them, with the goal of understanding and evaluating their impact on the performance of LPG, and of identifying default settings that work well on a large class of problems. In particular, we analyze several heuristic techniques for (a) evaluating the search neighborhood, (b) defining/restricting the search neighborhood, (c) selecting the next plan flaw to handle, (d) setting the "noise" parameter randomizing the search, and (e) computing reachability information that can be exploited by the heuristic functions used to evaluate the neighborhood elements. Some of these techniques were introduced in previous work on LPG, while others are new. Additional experimental results indicate that the current version of LPG using the identified best heuristic techniques as the default settings is competitive with the winner of the last (2008) International Planning Competition.
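    Technique (d), the noise parameter, has a compact generic form: with probability p the search accepts a random neighbor instead of the best-scoring one, helping it escape local minima. A sketch of generic stochastic local search (ours, not LPG's actual neighborhood code):

      import random

      # Lower score = fewer remaining plan flaws; with probability p take the
      # "noise" move (a random neighbor) instead of the greedy one.
      def pick_neighbor(neighbors, score, p=0.1, rng=random):
          if rng.random() < p:
              return rng.choice(neighbors)
          return min(neighbors, key=score)

      print(pick_neighbor([3, 1, 2], score=lambda x: x, p=0.0))  # 1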
  5. Efficient Graph Kernels for Textual Entailment Recognition
    Fabio Massimo Zanzotto,  Lorenzo Dell'Arciprete,  Alessandro Moschitti 199-222
    One of the most important research areas in Natural Language Processing concerns the modeling of semantics expressed in text. Since foundational work in Natural Language Understanding has shown that a deep semantic approach is still not feasible, current research focuses on shallow methods combining linguistic models and machine learning techniques. The latter aim at learning semantic models, like those that can detect the entailment between the meanings of two text fragments, by means of training examples described by specific features. These are rather difficult to design, since there is no linguistic model that can effectively encode the lexico-syntactic level of a sentence and its corresponding semantic models. Thus, the adopted solution consists in exhaustively describing training examples by means of all possible combinations of sentence words and syntactic information. The latter, typically expressed as parse trees of text fragments, is often encoded in the learning process using graph algorithms.
    In this paper, we propose a class of graphs, the tripartite directed acyclic graphs (tDAGs), which can be efficiently used to design algorithms for graph kernels for semantic natural language tasks involving sentence pairs. These model the matching between two pairs of syntactic trees in terms of all possible graph fragments. Interestingly, since tDAGs encode the association between identical or similar words (i.e. variables), they can be used to represent and learn first-order rules, i.e. rules describable in first-order logic. We prove that our matching function is a valid kernel and we empirically show that, although its evaluation is still exponential in the worst case, it is extremely efficient and more accurate than previously proposed kernels.
  6. An automaton Constraint for Local Search
    Jun He,  Pierre Flener,  Justin Pearson 223-248
    We explore the idea of using automata to implement new constraints for local search. This is already a successful approach in constraint-based global search. We show how to maintain the violations of a constraint and its variables via a deterministic finite automaton that describes a ground checker for that constraint. We extend the approach to counter automata, which are often much more convenient than finite automata, being more independent of the constraint instance. We establish the practicality of our approach on several real-life combinatorial problems.
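    One standard violation measure for an automaton constraint in local search is the minimum number of variables whose values must change for the DFA ground checker to accept, computed by dynamic programming over (position, state). A minimal sketch of that measure (ours; the paper maintains violations incrementally rather than recomputing them):

      def violation(dfa, assignment):
          # dfa = (delta, start, accepting), with delta[state][value] -> state.
          delta, start, accepting = dfa
          INF = float("inf")
          cost = {start: 0}
          for value in assignment:
              nxt = {}
              for state, c in cost.items():
                  for v, s in delta[state].items():
                      step = c + (v != value)   # changing this variable costs 1
                      if step < nxt.get(s, INF):
                          nxt[s] = step
              cost = nxt
          return min((c for s, c in cost.items() if s in accepting), default=INF)

      # DFA checking "no two consecutive 1s" over values {0, 1}:
      delta = {0: {0: 0, 1: 1}, 1: {0: 0}}    # state 1 = just read a 1
      print(violation((delta, 0, {0, 1}), [1, 1, 1]))  # 1: change the middle variable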
  7. Restoring CSP Satisfiability with MaxSAT
    Inês Lynce,  Joao Marques-Silva 249-266
    The extraction of a Minimal Unsatisfiable Core (MUC) in a Constraint Satisfaction Problem (CSP) aims to identify a subset of constraints that make a CSP instance unsatisfiable. Recent work has addressed the identification of a Minimal Set of Unsatisfiable Tuples (MUST) in order to restore the CSP satisfiability with respect to that MUC. A two-step algorithm has been proposed: first, a MUC is identified, and second, a MUST in the MUC is identified. This paper proposes an integrated algorithm for restoring satisfiability in a CSP, making use of a MaxSAT solver. The proposed approach encodes the CSP instance as a partial MaxSAT instance, in such a way that solving the MaxSAT instance corresponds to identifying the smallest set of tuples to be removed from the CSP instance to restore satisfiability. Experimental results illustrate the feasibility of the approach.
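    The objective that the MaxSAT encoding optimizes can be stated by brute force on a toy instance: find the smallest set of conflict tuples whose removal makes the CSP satisfiable. The sketch below (exponential and purely illustrative; constraints are represented, hypothetically, by their sets of forbidden tuples) computes that number directly rather than via partial MaxSAT:

      from itertools import combinations, product

      def satisfiable(domains, forbidden):
          return any(all((vals[x], vals[y]) not in ts
                         for (x, y), ts in forbidden.items())
                     for vals in product(*domains))

      def min_tuples_to_remove(domains, forbidden):
          tuples = [(scope, t) for scope, ts in forbidden.items() for t in ts]
          for k in range(len(tuples) + 1):
              for removed in combinations(tuples, k):
                  trimmed = {scope: ts - {t for s, t in removed if s == scope}
                             for scope, ts in forbidden.items()}
                  if satisfiable(domains, trimmed):
                      return k

      # Two 0/1 variables with every joint value forbidden: removing any one
      # forbidden tuple restores satisfiability.
      print(min_tuples_to_remove([(0, 1), (0, 1)],
                                 {(0, 1): {(0, 0), (0, 1), (1, 0), (1, 1)}}))  # 1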
  8. Deriving Information from Sampling and Diving
    Michele Lombardi,  Michela Milano,  Andrea Roli,   Alessandro Zanarini 267-287
    We investigate the impact of information extracted from sampling and diving on the solution of Constraint Satisfaction Problems (CSPs). A sample is a complete assignment of variables to values taken from their domains according to a given distribution. Diving consists in repeatedly performing depth-first search attempts with random variable and value selection, constraint propagation enabled, and backtracking disabled; each attempt is called a dive and, unless a feasible solution is found, yields a partial assignment of variables (whereas a sample is a, possibly infeasible, complete assignment). While the probability of finding a feasible solution via sampling or diving is negligible if the problem is difficult enough, samples and dives are very fast to generate and, intuitively, even when they are infeasible, they can provide some statistical information on the search space structure. The aim of this paper is to understand to what extent it is possible to support the CSP solving process with information derived from sampling and diving. In particular, we are interested in extracting from samples and dives precise indications of the quality of individual variable-value assignments with respect to feasibility. We formally prove that even uniform sampling could provide a precise evaluation of the quality of single variable-value assignments; as expected, this requires huge sample sizes and is therefore not useful in practice. On the contrary, diving is much better suited for assignment evaluation purposes. We undertake a thorough experimental analysis on a collection of Partial Latin Square and Car Sequencing instances to assess the quality of the information provided by dives. Dive features are identified and their impact on search is evaluated. Results show that diving provides information that can be fruitfully exploited.
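    The dive procedure described above is short enough to state directly. A toy Python sketch (ours), with plain forward checking as the propagation step and binary constraints given as sets of allowed value pairs:

      import random

      # Random variable/value selection, propagation, no backtracking; returns a
      # partial assignment unless the dive happens to reach a feasible solution.
      def dive(domains, allowed, rng=random):
          domains = [set(d) for d in domains]
          n, assignment = len(domains), {}
          while len(assignment) < n:
              x = rng.choice([i for i in range(n) if i not in assignment])
              v = rng.choice(sorted(domains[x]))
              assignment[x] = v
              for (a, b), ts in allowed.items():        # forward checking
                  if a == x:
                      domains[b] = {w for w in domains[b] if (v, w) in ts}
                  if b == x:
                      domains[a] = {w for w in domains[a] if (w, v) in ts}
              if any(not domains[i] for i in range(n) if i not in assignment):
                  return assignment, False              # dead end: the dive just stops
          return assignment, True

      # Three 0/1 variables with x0 != x1 and x1 != x2: every dive succeeds here.
      allowed = {(0, 1): {(0, 1), (1, 0)}, (1, 2): {(0, 1), (1, 0)}}
      print(dive([(0, 1)] * 3, allowed))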
  9. Partitioning Search Spaces of a Randomized Search
    Antti E. J. Hyvärinen,  Tommi Junttila,  Ilkka Niemelä 289-311
    This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? Two approaches, simple distribution and search space partitioning, as well as their combinations, are investigated both analytically and empirically. It is shown that the results depend heavily on the type of the problem (unsatisfiable, satisfiable with few solutions, or satisfiable with many solutions) as well as on how good the search space partitioning function is. In addition, the behavior of a real search space partitioning function is evaluated in the same framework. The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.
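    The partitioning approach can be made concrete for SAT: constrain each derived instance with a distinct assignment to d chosen variables, so that the derived instances together cover the original search space. A minimal sketch with DIMACS-style integer literals (choosing simply the first d variables is our naive placeholder; a good partitioning function picks variables that actually split the remaining search evenly):

      from itertools import product

      def partition(clauses, variables, d):
          # Fix d variables in all 2^d ways, adding the fixings as unit clauses.
          for bits in product([False, True], repeat=d):
              units = [v if b else -v for v, b in zip(variables[:d], bits)]
              yield clauses + [[u] for u in units]

      base = [[1, 2, 3], [-1, -2], [-2, -3]]
      for sub in partition(base, [1, 2], 2):
          print(sub)   # four derived instances, one per assignment to x1, x2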

107 (4) 2011


  1. A Representation Theorem for Primitive Recursive Algorithms
    Philippe Andary,  Bruno Patrou,  Pierre Valarcher 313-330
    We formalize the algorithms computing primitive recursive (PR) functions as abstract state machines (ASMs) whose running length is computable by a PR function. Then we show that there exists a programming language (implementing only PR functions) in which it is possible to implement any one of the previously defined algorithms for the PR functions in such a way that their complexity is preserved.
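    For concreteness, the class of PR functions is generated by zero, successor, projections, composition, and primitive recursion. A minimal Python encoding of these constructors (background only; the paper's ASM formalization is not shown here):

      def zero(*args):
          return 0

      def succ(x):
          return x + 1

      def proj(i):
          return lambda *args: args[i]

      def comp(f, *gs):
          return lambda *args: f(*(g(*args) for g in gs))

      def primrec(f, g):
          # h(0, xs) = f(xs); h(n+1, xs) = g(n, h(n, xs), xs)
          def h(n, *xs):
              acc = f(*xs)
              for i in range(n):
                  acc = g(i, acc, *xs)
              return acc
          return h

      add = primrec(proj(0), comp(succ, proj(1)))   # add(n, x) = n + x
      print(add(3, 4))  # 7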
  2. On the Fermat-Weber Point of a Polygonal Chain and its Generalizations
    Bhaswar B. Bhattacharya 331-343
    In this paper, we study the properties of the Fermat-Weber point for a set of fixed points whose arrangement coincides with the vertices of a regular polygonal chain. A k-chain of a regular n-gon is the segment of the boundary of the regular n-gon formed by a set of k (≤ n) consecutive vertices of the regular n-gon. We show that for every odd positive integer k, there exists an integer N(k) such that the Fermat-Weber point of a set of k fixed points lying on the vertices of a k-chain of an n-gon coincides with a vertex of the chain whenever n ≥ N(k). We also show that ⌈πm(m+1) - π²/4⌉ ≤ N(k) ≤ ⌊πm(m+1) + 1⌋, where k = 2m+1 is any odd positive integer. We then extend this result to a more general family of point sets, and give an O(hk log k) time algorithm for determining whether a given set of k points, having h points on the convex hull, belongs to such a family.
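    The Fermat-Weber point (the minimizer of the sum of Euclidean distances to the given points) is typically computed numerically. A standard Weiszfeld-iteration sketch (ours, not the paper's geometric characterization; stopping when an iterate lands on an input point is a common simplification), applied to the first k vertices of a regular n-gon:

      import math

      def fermat_weber(points, iters=1000, eps=1e-12):
          x = [sum(p[0] for p in points) / len(points),
               sum(p[1] for p in points) / len(points)]   # start at the centroid
          for _ in range(iters):
              num, den = [0.0, 0.0], 0.0
              for p in points:
                  d = math.dist(x, p)
                  if d < eps:                # iterate landed on a fixed point
                      return list(p)
                  num[0] += p[0] / d
                  num[1] += p[1] / d
                  den += 1.0 / d
              x = [num[0] / den, num[1] / den]
          return x

      # A 5-chain of a regular 12-gon:
      n, k = 12, 5
      chain = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
               for i in range(k)]
      print(fermat_weber(chain))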
  3. Trade-Offs Between Time, Space, Cooperation, and Communication Complexity for CD Grammar Systems
    Liliana Cojocaru 345-378
    In this paper we investigate the communication and cooperation phenomena in Cooperating Distributed Grammar Systems (henceforth CDGSs). In this respect, we define several complexity structures and two complexity measures, the cooperation and communication complexity measures. These measures are studied with respect to the derivation modes and fairness conditions under which CDGSs may work. We deal with trade-offs between time, space, cooperation, and communication complexity of languages generated by CDGSs with regular, linear, and context-free components.
    Cooperation and communication processes in CDGSs with regular and linear components are of equal complexity. The two (equal) cooperation and communication complexity measures are either constant or linear, as functions of the lengths of words in the generated language. The same result holds for the cooperation and communication complexity of q-fair languages generated by CDGSs with regular and linear components. For the case of non-constant communication (cooperation), the time and space used by a nondeterministic multitape Turing machine to recognize weakly q-fair languages are linear, as is the communication (cooperation) complexity.
    For CDGSs with context-free components the cooperation and communication complexity may differ. These measures are either linear or logarithmic functions of the lengths of words in the generated language. In order to find trade-offs between time, space, cooperation, and communication complexity of languages generated by CDGSs with context-free components, we study the asymptotic behavior of the growth functions characteristic of these measures. We prove that languages generated by CDGSs with context-free components are accepted by nondeterministic multitape Turing machines either in quadratic time, linear space, and with cooperation complexity that varies from logarithmic to linear, or in polynomial or exponential time and space, and linear cooperation complexity.
  4. Look-back Techniques for ASP Programs with Aggregates
    Wolfgang Faber,  Nicola Leone,  Marco Maratea, Francesco Ricca 379-413
    The introduction of aggregates has been one of the most relevant language extensions to Answer Set Programming (ASP). Aggregates are very expressive: they allow many problems to be represented in a more succinct and elegant way than with aggregate-free programs. A significant amount of research work has been devoted to aggregates in the ASP community in recent years, and relevant research results on ASP with aggregates have been published, on both the theoretical and practical sides. The high expressiveness of aggregates (eliminating aggregates often causes a quadratic blow-up in program size) requires suitable evaluation methods and optimization techniques for an efficient implementation. Nevertheless, in spite of the above-mentioned research developments, aggregates are treated in a quite straightforward way in most ASP systems.
    In this paper, we explore the exploitation of look-back techniques for an efficient implementation of aggregates. We define a reason calculus for backjumping in ASP programs with aggregates. Furthermore, we describe how these reasons can be used to guide look-back heuristics for programs with aggregates. We have implemented both the new reason calculus and the proposed heuristics in the DLV system, and have carried out an experimental analysis on publicly available benchmarks, which shows significant performance benefits.
  5. Analysis of Markov Boundary Induction in Bayesian Networks: A New View From Matroid Theory
    Dingcheng Feng,  Feng Chen,  Wenli Xu 415-434
    Learning Markov boundaries from data without having to learn a Bayesian network first can be viewed as a feature subset selection problem, and has received much attention due to its significance in the wide applications of AI techniques. Popular constraint-based methods suffer from high computational complexity and are usually unstable in spaces of high dimensionality. We propose a new perspective from matroid theory on the discovery of the Markov boundaries of the random variables in the domain, and develop a greedy learning algorithm which is guaranteed to recover the true Markov boundaries. Then we use the precision matrix of the original distribution as a measure of independence to make our algorithm feasible in large-scale problems; this is essentially an approximation of the probabilistic relations with Gaussians, and can find possible variables in Markov boundaries with low computational complexity. Experimental results on standard Bayesian networks show that our analysis and approximation can efficiently and accurately identify Markov boundaries in complex networks from data.
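    Under the Gaussian approximation mentioned above, conditional independence can be read off the precision matrix: variables i and j are conditionally dependent given all the others exactly when entry (i, j) of the inverse covariance is nonzero. A small numpy sketch (the threshold and synthetic data are ours, not the paper's algorithm) that extracts Markov-boundary candidates this way:

      import numpy as np

      def markov_boundary_candidates(data, target, threshold=0.05):
          prec = np.linalg.inv(np.cov(data, rowvar=False))
          d = np.sqrt(np.diag(prec))
          partial = -prec / np.outer(d, d)      # normalize to partial correlations
          row = np.abs(partial[target])
          return [j for j in range(data.shape[1])
                  if j != target and row[j] > threshold]

      rng = np.random.default_rng(0)
      x0 = rng.normal(size=1000)
      x1 = x0 + 0.1 * rng.normal(size=1000)     # directly depends on x0
      x2 = rng.normal(size=1000)                # unrelated
      print(markov_boundary_candidates(np.column_stack([x0, x1, x2]), target=0))  # typically [1]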