125 (1) 2013

  1. Local Verification Using a Distributed State Space
    Chiheb Ameur Abid,  Belhassen Zouari 1-20
    This paper deals with the modular analysis of distributed concurrent systems modelled by Petri nets. The main analysis techniques for such systems suffer from the well-known combinatorial explosion of the state space. To cope with this problem, we use a modular representation of the state space instead of the ordinary one. This modular representation, namely the modular state space, is much smaller than the ordinary state space. We propose to distribute the modular state space so that each machine is associated with one module. We enhance the modularity of the verification of some local properties of any module by limiting it to the exploration of local information and some global information. Once the construction of the distributed state space is complete, no communication between modules is needed during verification.
  2. Computer Algebra Technique for Coxeter Spectral Study of Edge-bipartite Graphs and Matrix Morsifications of Dynkin Type An
    Mariusz Felisiak 21-49
    By applying computer algebra tools (mainly Maple and C++), given the Dynkin diagram Δ = A_n, with n ≥ 2 vertices, and the Euler quadratic form q : Z^n → Z, we study the problem of classifying mesh root systems and the mesh geometries of roots of Δ (see Section 1 for details). The problem reduces to the computation of the Weyl orbits in the set Mor_Δ ⊆ M_n(Z) of all matrix morsifications A of q, i.e., the non-singular matrices A ∈ M_n(Z) such that (i) q(v) = v·A·v^tr, for all v ∈ Z^n, and (ii) the Coxeter matrix Cox_A := −A·A^{−tr} lies in Gl(n,Z). The Weyl group W ⊆ Gl(n,Z) acts on Mor_Δ, and the determinant det A ∈ Z, the order c_A ≥ 2 of Cox_A (i.e., the Coxeter number), and the Coxeter polynomial cox_A(t) := det(t·E − Cox_A) ∈ Z[t] are W-invariant. The problem of determining the W-orbits Orb(A) of Mor_Δ and the Coxeter polynomials cox_A(t), with A ∈ Mor_Δ, is studied in the paper, and we obtain its solution for n ≤ 8 and A = [a_ij] ∈ Mor_{A_n} with |a_ij| ≤ 1. In this case, we prove that the number of W-orbits Orb(A) and the number of Coxeter polynomials cox_A(t) each equals two or three, and the following three conditions are equivalent: (i) Orb(A) = Orb(A′), (ii) cox_A(t) = cox_{A′}(t), (iii) c_A·det A = c_{A′}·det A′. We also construct: (a) three pairwise different W-orbits in Mor_Δ, with pairwise different Coxeter polynomials, if Δ = A_{2m−1} and m ≥ 3; and (b) two pairwise different W-orbits in Mor_Δ, with pairwise different Coxeter polynomials, if Δ = A_{2m} and m ≥ 1.
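As a concrete check of the quantities involved, for the standard morsification A of A_n coming from the Euler form (unit upper triangular with −1 on the superdiagonal), the Coxeter polynomial works out to cox_A(t) = t^n + t^{n−1} + … + 1. The sketch below (function name ours, not from the paper) computes it with exact integer arithmetic, using the closed form of A^{−tr} and the Faddeev–LeVerrier recurrence for the characteristic polynomial:

```python
def coxeter_polynomial_An(n):
    """Coxeter polynomial of the Euler-form morsification of A_n (illustrative sketch)."""
    # A: unit upper triangular with -1 on the superdiagonal (Euler form of A_n)
    A = [[1 if i == j else (-1 if j == i + 1 else 0) for j in range(n)] for i in range(n)]
    # A = I - S (S = superdiagonal shift), so A^{-1} is upper triangular all-ones
    # and A^{-tr} is the lower-triangular all-ones matrix
    Ainv_tr = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
    # Cox_A := -A * A^{-tr}
    Cox = [[-sum(A[i][k] * Ainv_tr[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    # characteristic polynomial det(t*E - Cox) via Faddeev-LeVerrier,
    # which stays in integer arithmetic for an integer matrix
    coeffs = [1]
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # M_1 = identity
    for k in range(1, n + 1):
        CM = [[sum(Cox[i][r] * M[r][j] for r in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(CM[i][i] for i in range(n)) // k   # trace is exactly divisible by k
        coeffs.append(c)
        M = [[CM[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return coeffs  # [1, c1, ..., cn] encoding t^n + c1*t^(n-1) + ... + cn
```

For n = 2 this yields [1, 1, 1], i.e. t² + t + 1, whose roots are the primitive cube roots of unity, matching the Coxeter number c_A = 3 of A_2.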
  3. Reasoning about Resources and Information: A Linear Logic Approach
    Norihiro Kamide,  Ken Kaneiwa 51-70
    A logic called sequence-indexed linear logic (SLL) is proposed to appropriately formalize resource-sensitive reasoning with sequential information. The completeness and cut-elimination theorems for SLL are proved, and SLL and a fragment of SLL are shown to be undecidable and decidable, respectively. As an application of SLL, some specifications of secure password authentication systems are discussed.
  4. A Complete Generalization of Atkin's Square Root Algorithm
    Armand Stefan Rotaru,  Sorin Iftene 71-94
    Atkin's algorithm [2] for computing square roots in Z_p^*, where p is a prime such that p ≡ 5 mod 8, has been extended by Müller [15] to the case p ≡ 9 mod 16. In this paper we extend Atkin's algorithm to the general case p ≡ 2^s + 1 mod 2^{s+1}, for any s ≥ 2, thus providing a complete solution for the case p ≡ 1 mod 4. A complexity analysis and comparisons with other methods are also provided.
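For reference, the base case p ≡ 5 (mod 8) of Atkin's method admits a short rendition (this is the classical algorithm the paper generalizes, not the paper's extension; the function name is ours). It uses the fact that 2 is a quadratic non-residue modulo such p, so i = (2a)^{(p−1)/4} satisfies i² ≡ −1:

```python
def atkin_sqrt(a, p):
    """Square root of a quadratic residue a modulo a prime p with p % 8 == 5."""
    assert p % 8 == 5
    b = pow(2 * a, (p - 5) // 8, p)   # b = (2a)^((p-5)/8) mod p
    i = (2 * a * b * b) % p           # i = (2a)^((p-1)/4), so i^2 = -1 (mod p)
    return (a * b * (i - 1)) % p      # x with x^2 = a (mod p)
```

For example, atkin_sqrt(3, 13) returns 9, and 9² = 81 ≡ 3 (mod 13).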
  5. Does there Exist an Algorithm which to Each Diophantine Equation Assigns an Integer which is Greater than the Modulus of Integer Solutions, if these Solutions form a Finite Set?
    Apoloniusz Tyszka 95-99
    Let E_n = {x_i = 1, x_i + x_j = x_k, x_i·x_j = x_k : i, j, k ∈ {1,…,n}}. We conjecture that if a system S ⊆ E_n has only finitely many solutions in integers x_1,…,x_n, then each such solution (x_1,…,x_n) satisfies |x_1|,…,|x_n| ≤ 2^{2^{n−1}}. Assuming the conjecture, we prove: (1) there is an algorithm which to each Diophantine equation assigns an integer which is greater than the heights of its integer (non-negative integer, rational) solutions, if these solutions form a finite set; (2) if a set M ⊆ N is recursively enumerable but not recursive, then a finite-fold Diophantine representation of M does not exist.
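To see why a doubly exponential bound is the right order of magnitude, note that the system {x_1 = 1, x_1 + x_1 = x_2, x_i·x_i = x_{i+1}} lies in E_n and forces x_n = 2^{2^{n−2}} for n ≥ 2, just below the conjectured bound 2^{2^{n−1}}. A quick illustrative sketch (not from the paper):

```python
def squaring_chain(n):
    """Unique integer solution of {x1 = 1, x1 + x1 = x2, xi*xi = x(i+1)} in E_n."""
    xs = [1]
    if n >= 2:
        xs.append(1 + 1)            # x1 + x1 = x2
    while len(xs) < n:
        xs.append(xs[-1] * xs[-1])  # xi * xi = x(i+1)
    return xs

def conjectured_bound(n):
    """Tyszka's conjectured bound 2^(2^(n-1)) on |xi| for finite solution sets."""
    return 2 ** (2 ** (n - 1))
```

For n = 5 the chain is (1, 2, 4, 16, 256), while the conjectured bound is 2^16 = 65536.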

125 (2) 2013

  1. A Resolution Calculus for First-order Schemata
    Vincent Aravantinos, Mnacho Echenim and Nicolas Peltier 101-133
    We devise a resolution calculus that tests the satisfiability of infinite families of clause sets, called clause set schemata. For schemata of propositional clause sets, we prove that this calculus is sound, refutationally complete, and terminating. The calculus is extended to first-order clauses, for which termination is lost, since the satisfiability problem is not semi-decidable for non-propositional schemata. The expressive power of the considered logic is strictly greater than that of the logic considered in our previous work.
  2. Decomposition and Approximation of Loopy Bayesian Networks
    Jianwu Dong, Feng Chen,  Yanyan Huo,  Hong Liu 135-152
    This paper proposes a new method, conditional probability table (CPT) decomposition, to analyze the independent and deterministic components of a CPT. The method can be used to approximate and analyze Bayesian networks. The decomposition is accomplished by representing CPTs as a linear combination of extreme CPTs, which forms a new framework for inference. Based on this framework, inference in Bayesian networks can be done by decomposing them into less connected, weighted subnetworks. Exact inference is achieved if the original network decomposes into singly-connected subnetworks. In addition, approximate inference can be done by discarding the subnetworks with small weights, or by a partial decomposition followed by belief propagation (BP) on the still multiply-connected subnetworks. Experiments show that the decomposition-based approximation outperforms BP in most cases.
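The idea of writing a CPT as a convex combination of deterministic ("extreme") CPTs can be illustrated in a few lines. The naive row-product decomposition below is only a sketch of the concept, not the paper's decomposition (all names are ours):

```python
from itertools import product

def decompose_cpt(cpt):
    """Write a CPT (one probability row per parent configuration) as a convex
    combination of deterministic extreme CPTs. Returns (weight, extreme_cpt) pairs."""
    m = len(cpt)       # number of parent configurations (rows)
    k = len(cpt[0])    # number of child states (columns)
    terms = []
    # one extreme CPT per choice of a deterministic outcome for every row
    for choice in product(range(k), repeat=m):
        w = 1.0
        for row, col in enumerate(choice):
            w *= cpt[row][col]
        if w > 0:
            extreme = [[1 if j == choice[i] else 0 for j in range(k)]
                       for i in range(m)]
            terms.append((w, extreme))
    return terms
```

The weights sum to 1 and the weighted sum of the extreme CPTs recovers the original table, since each row's marginal factorizes out of the product.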
  3. Wide-diameter of Product Graphs
    Rija Erveš,  Janez Žerovnik 153-160
    The product graph B*F of graphs B and F is an interesting model in the design of large reliable networks. Fault tolerance and transmission delay of networks are important concepts in network design. These notions are strongly related to the connectivity and diameter of a graph, and have been studied by many authors. The wide diameter of a graph combines the study of connectivity with the diameter of a graph. The diameter with width k of a graph G, the k-diameter, is defined as the minimum integer d for which there exist at least k internally disjoint paths of length at most d between any two distinct vertices in G. Denote by DW_c(G) the c-diameter of G and by κ(G) the connectivity of G. The Rabin number r_c(G) is the minimum integer d such that there are c internally disjoint paths of length at most d from any vertex v to any set of c vertices {v_1, v_2, …, v_c}. We prove that DW_{a+b}(B*F) ≤ r_a(F) + DW_b(B) + 1 for a ≤ κ(F) and b ≤ κ(B).
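As a toy illustration of the k-diameter, consider the cycle C_n: the only two internally disjoint paths between two vertices are the two arcs between them, so DW_2(C_n) is governed by the longer arc of the worst pair (a hypothetical helper, not from the paper):

```python
def wide_diameter_2_of_cycle(n):
    """DW_2(C_n): smallest d such that every vertex pair of the n-cycle
    admits 2 internally disjoint paths of length at most d."""
    # vertices at cycle-distance t are joined by arcs of lengths t and n - t;
    # both arcs must fit under d, so d must cover the longer arc of every pair
    return max(max(t, n - t) for t in range(1, n // 2 + 1))
```

An adjacent pair forces the long arc of length n − 1, so DW_2(C_n) = n − 1, far above the ordinary diameter ⌊n/2⌋.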
  4. Algorithms for Relatively Cyclotomic Primes
    Maciej Grześkowiak 161-181
    We present a general method for generating primes p and q such that q divides Φ_n(p), where n > 2 is a fixed number. In particular, we present a deterministic method for finding a primitive nth root of unity modulo q. We estimate the computational complexity of our methods.
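The paper's method is deterministic; for contrast, the naive trial search for a primitive nth root of unity modulo a prime q (assuming n | q − 1; all names are ours) raises candidates to the power (q − 1)/n and checks the order:

```python
def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def primitive_nth_root(n, q):
    """Find an element of exact multiplicative order n modulo a prime q, with n | q - 1."""
    assert (q - 1) % n == 0
    for g in range(2, q):
        x = pow(g, (q - 1) // n, q)      # the order of x divides n
        if all(pow(x, n // p, q) != 1 for p in prime_factors(n)):
            return x                      # the order is exactly n
```

For example, primitive_nth_root(4, 13) returns 8, whose powers modulo 13 cycle with period 4.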
  5. On the Validation of Invariants at Runtime
    Piotr Kosiuczenko 183-222
    The paradigm of contractual specification provides a transparent way of specifying object-oriented systems. In this paradigm, system consistency is specified using so-called invariants. Runtime validation of invariants is a complex problem. Existing validation methods require either exhaustive validation of an invariant for all objects of the corresponding class or the use of restrictive type systems. In this paper, a non-exhaustive method of invariant validation is proposed. The method is proved sound, and an implementation is discussed. It is also shown how to extract context-free languages corresponding to OCL invariants.
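The contrast between exhaustive and non-exhaustive validation can be sketched as follows: instead of rechecking an invariant on every instance after each update, only objects touched since the last check are revalidated. This is a simplified illustration of the general idea, not the paper's method (all names are ours):

```python
class InvariantRegistry:
    """Track modified objects and validate an invariant only on those."""

    def __init__(self, invariant):
        self.invariant = invariant   # predicate: object -> bool
        self.instances = []
        self.dirty = set()

    def register(self, obj):
        self.instances.append(obj)
        self.dirty.add(id(obj))      # new objects must be validated once

    def touch(self, obj):
        self.dirty.add(id(obj))      # mark as needing revalidation

    def validate(self):
        # non-exhaustive: only dirty objects are checked, not the whole extent
        checked = [o for o in self.instances if id(o) in self.dirty]
        ok = all(self.invariant(o) for o in checked)
        self.dirty.clear()
        return ok
```

Soundness of such a scheme hinges on the invariant of an untouched object being unable to change, which is exactly the kind of condition the paper analyzes.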

125 (3-4) 2013

  1. Strategies for Tomography.  Preface i-xviii
  2. Discrete Tomography in MRI: a Simulation Study
    Hilde Segers,  Willem Jan Palenstijn,  Kees Joost Batenburg,  Jan Sijbers 223-237
    Discrete tomography (DT) is concerned with the tomographic reconstruction of images that consist of only a small number of grey levels. Recently, DART, a practical algorithm for discrete tomography, was introduced and validated in the domains of X-ray computed tomography and electron tomography. In this paper, DART is introduced for magnetic resonance imaging. Simulation experiments show that the proposed MRI-DART algorithm is capable of computing high-quality reconstructions from substantially less data than state-of-the-art MRI reconstruction methods.
  3. Approximate Discrete Reconstruction Algorithm
    K. Joost Batenburg,  Wagner Fortes,  Robert Tijdeman 239-259
    Discrete tomography deals with tomographic reconstruction of greyscale images for which the set of possible grey levels is discrete and small. Here, we develop a discrete approximate reconstruction algorithm. Our algorithm computes an image that has only grey values belonging to a given finite set. It also guarantees that the difference between the given projections and the projections of the reconstructed discrete image is bounded. The bound, which is computable, is independent of the image size. We present reconstruction experiments for a range of phantom images and a varying number of grey values.
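The discretization step underlying such algorithms, projecting a continuous-valued reconstruction onto a small admissible grey-level set, can be sketched as follows (this is only the generic rounding step, not the paper's algorithm; names are ours):

```python
def snap_to_grey_levels(values, levels):
    """Map each reconstructed pixel value to the nearest admissible grey level."""
    return [min(levels, key=lambda g: abs(g - v)) for v in values]
```

The interesting part, which the paper addresses, is bounding how much this snapping perturbs the projections independently of the image size.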
  4. Discrete Tomography Data Footprint Reduction by Information Conservation
    Rodolfo A. Fiorini,  Giuseppe Laguteta 261-272
    The first impact of Discrete Tomography (DT) applied to nanoscale technology has been to generate enormous quantities of data. Data Footprint Reduction (DFR) is the process of employing one or more techniques to store a given set of data in less storage space. The very best modern lossless compressors use classical probabilistic models only and are unable to match high-end application requirements, like "Arbitrary Bit Depth" (ABD) resolution and "Dynamic Upscale Regeneration" (DUR), with full information conservation. This paper explores, at the core level, the basic properties and relationships of Q Arithmetic needed to achieve full numeric information conservation and regeneration algorithmically. That knowledge shows strong connections to modular group theory and combinatorial optimization. Traditional Q Arithmetic can even be regarded as a highly sophisticated open logic: a powerful and flexible LTR and RTL formal numeric language of languages, with self-defining consistent words and rules, starting from an elementary generator and relation. This new awareness can guide the development of more convenient and successful algorithms and applications.
  5. Discrete Tomography Data Footprint Reduction via Natural Compression
    Rodolfo A. Fiorini,  Andrea Condorelli,  Giuseppe Laguteta 273-284
    In Discrete Tomography (DT) by electron microscopy, 2-D projection images are acquired from various angles by tilting the sample, generating new challenges associated with the formation, acquisition, compression, transmission, and analysis of enormous quantities of data. Data Footprint Reduction (DFR) is the process of employing one or more techniques to store a given set of data in less storage space. Modern lossless compressors use classical probabilistic models only and are unable to match high-end application requirements like "Arbitrary Bit Depth" (ABD) resolution and information "Dynamic Upscale Regeneration" (DUR). Traditional Q Arithmetic can be regarded as a highly sophisticated open logic: a powerful and flexible bidirectional (LTR and RTL) formal language of languages, according to the brand-new "Information Conservation Theory" (ICT). This new awareness offers a competitive approach to guide the development of more convenient algorithms and applications for combinatorial lossless compression, which we have named "Natural Compression" (NC). To check practical implementation performance, a first raw example is presented, benchmarked against the standard, more sophisticated lossless JPEG2000 algorithm, and critically discussed. The raw overall lossless compression performance of NC compares quite well to the standard one, while offering true ABD and DUR at no extra computational cost.
  6. Critical Parameter Values and Reconstruction Properties of Discrete Tomography: Application to Experimental Fluid Dynamics
    Stefania Petra,  Christoph Schnörr,  Andreas Schröder 285-312
    We analyze representative ill-posed scenarios of tomographic PIV (particle image velocimetry) with a focus on conditions for unique volume reconstruction. Based on sparse random seedings of a region of interest with small particles, the corresponding systems of linear projection equations are probabilistically analyzed in order to determine: (i) the possibility of unique reconstruction in terms of the imaging geometry and the critical sparsity parameter, and (ii) the sharpness of the transition to non-unique reconstruction with ghost particles when the sparsity parameter is chosen improperly. The sparsity parameter directly relates to the seeding density used for PIV in experimental fluid dynamics, which to date has been chosen empirically. Our results provide a basic mathematical characterization of the PIV volume reconstruction problem that is an essential prerequisite for any algorithm used to actually compute the reconstruction. Moreover, we connect the problem of reconstructing a sparse volume function from few tomographic projections to major developments in compressed sensing.
  7. Solving Multicolor Discrete Tomography Problems by Using Prior Knowledge
    Elena Barcucci,  Stefano Brocchi 313-328
    Discrete tomography deals with the reconstruction of discrete sets with given projections relative to a limited number of directions, modeling the situation where a material is studied through X-rays and we wish to reconstruct an image representing the scanned object. In many cases it is interesting to consider projections related to more than one distinguishable type of cell, called atoms or colors, as in the case of a scan involving materials of different densities, such as bone and muscle. Unfortunately, the general n-color problem with n > 1 is NP-complete, but in this paper we show how several polynomial reconstruction algorithms can be defined by assuming some prior knowledge of the set to be rebuilt. In detail, we study the cases where the union of the colors forms a set without switches, a convex polyomino, or a convex 8-connected set. We describe some efficient reconstruction algorithms, and in one case we give a sufficient condition for uniqueness.
  8. Polygons Drawn from Permutations
    Stefano Bilotta, Simone Rinaldi,  Samanta Socci 329-342
    In this paper we consider the class of column-convex permutominoes, i.e. column-convex polyominoes defined by a pair of permutations (π1, π2). First, using a geometric construction, we prove that for every permutation π there is at least one column-convex permutomino P such that π1(P) = π or π2(P) = π. In the second part of the paper, we show how, for any given permutation π, it is possible to define a set of logical implications F(π) on the points of π, and prove that there exists a column-convex permutomino P such that π1(P) = π if and only if F(π) is satisfiable. This property can then be used to give a characterization of the set of column-convex permutominoes P such that π1(P) = π.
  9. The Reconstruction of Polyominoes from Horizontal and Vertical Projections  and Morphological Skeleton is NP-complete
    Norbert Hantos,  Péter Balázs 343-359
    Reconstruction of binary images from their projections is one of the main tasks in many image processing areas; therefore, determining the computational complexity of such problems is essential. The reconstruction complexity depends highly on the requirements imposed on the image. In this paper, we show that the reconstruction is NP-complete if the horizontal and vertical projections and the morphological skeleton of the image are given and the image is assumed to be 4-connected.
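The inputs of this reconstruction problem, the horizontal and vertical projections of a binary image, are trivial to compute in the forward direction; it is only the inverse problem that is hard. A minimal sketch (not from the paper):

```python
def projections(image):
    """Row sums (horizontal projection) and column sums (vertical projection)
    of a binary image given as a list of 0/1 rows."""
    h = [sum(row) for row in image]
    v = [sum(col) for col in zip(*image)]
    return h, v
```

The NP-completeness result says that inverting this map, even with the morphological skeleton given and 4-connectedness assumed, is intractable in general.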
  10. A Regularized, Pyramidal Multi-grid Approach to Global 3D-Volume Digital Image Correlation Based on X-ray Micro-tomography
    Roberto Fedele,  Antonia Ciani,  Luca Galantucci,  Matteo Bettuzzi,  Luca Andena 361-376
    In this study a robust strategy for 3D-Volume Digital Image Correlation (DIC) is presented, apt to provide accurate kinematic measurements within a loaded sample on the basis of three-dimensional digital images from X-ray computed micro-tomography. In the framework of a Galerkin finite element discretization of the displacement field, the inverse problem of estimating 3D motion inside the bulk material is solved recursively on a hierarchical family of grids, linked by suitable restriction and prolongation operators. These structured grids are defined over an image pyramid, which is generated from the raw tomographic reconstructions by repeated application of average filters and sub-sampling operators. To achieve robust estimates of the underlying displacement fields, multi-grid cycles are performed ascending and descending along the pyramid in a selected sequence. At each scale, the least-squares matching function for DIC is enriched by a penalty term in the spirit of Tychonoff regularization, including as a priori information the estimate achieved at the previous grid and transferred to the current scale. For each grid only one Newton iteration needs to be considered, implying important time savings. Results are presented for a laboratory X-ray micro-tomography experiment on a polymeric foam sample loaded by a purpose-built apparatus.
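The pyramid generation step described above (repeated average filtering and sub-sampling) can be sketched for a 2-D greyscale slice as follows; this is an illustrative rendition, and the paper's actual filter and sampling operators may differ:

```python
def build_pyramid(image, levels):
    """Image pyramid by 2x2 block averaging and sub-sampling (finest first)."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        pyramid.append([
            [(prev[2 * i][2 * j] + prev[2 * i][2 * j + 1]
              + prev[2 * i + 1][2 * j] + prev[2 * i + 1][2 * j + 1]) / 4
             for j in range(w)]
            for i in range(h)
        ])
    return pyramid
```

The multi-grid DIC cycles then ascend and descend this list, with the estimate at one level serving as the Tychonoff prior for the next.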