124 (1-2) 2013
- Cognitive Informatics and Computational Intelligence:
Theory and Applications. Preface i-iv
- Comparing Problem Solving Strategies for NP-hard Optimization
Problems
Mercedes Hidalgo-Herrero, Pablo Rabanal, Ismael Rodríguez, Fernando Rubio 1-25
NP-complete problems are particularly hard to solve.
Unless P=NP, any algorithm solving an NP-complete problem takes
superpolynomial time in the worst case.
The intrinsic difficulty of NP-complete problems when we try to
optimally solve them with computers seems to apply to humans too.
Intuitively, solving NP-complete problems requires making a series of
choices where each choice we make disables many subsequent choices,
and the scope of the dependencies between these mutually exclusive
choices cannot be bounded. Thus, the problem cannot be split into
smaller subproblems in such a way that their solutions can be
computed independently and easily combined to construct the
global solution (as happens in divide-and-conquer algorithms).
Moreover, for each choice, the space of subsequent subproblems to be
considered across all possible selections does not collapse into a
polynomial-size set (as happens in dynamic-programming
algorithms). Thus, intuitively, in NP-complete problems any choice
may unboundedly affect any other, and this difficulty seems to puzzle
humans as much as computers.
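The branching structure sketched above can be made concrete with a minimal brute-force search for minimum vertex cover, a classic NP-hard optimization problem (an illustrative sketch, not taken from the paper): for every uncovered edge (u, v), the solver must commit to u or to v, and each commitment constrains all remaining edges.

```python
def min_vertex_cover(edges):
    """Exhaustive branching for minimum vertex cover: every uncovered
    edge (u, v) forces a choice whose consequences propagate to all
    remaining edges, so the search tree grows exponentially."""
    best = frozenset(v for e in edges for v in e)  # trivial cover

    def branch(cover):
        nonlocal best
        if len(cover) >= len(best):
            return  # this branch cannot beat the best cover found
        uncovered = [(u, v) for (u, v) in edges
                     if u not in cover and v not in cover]
        if not uncovered:
            best = cover
            return
        u, v = uncovered[0]
        branch(cover | {u})  # choice 1: commit to u
        branch(cover | {v})  # choice 2: commit to v

    branch(frozenset())
    return best
```

On the path 1-2-3 this finds the single-vertex cover {2}; on a triangle any two vertices are needed.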
In this paper we conduct an experiment to systematically analyze the
performance of humans when solving NP-complete problems. For each
problem, in order to measure partial fulfillment of the
decision problem goal, we consider its NP-hard optimization
version. We analyze the human capability to compute good suboptimal
solutions to these problems, we try to identify the kind of problem
instances which make humans compute the best and worst solutions
(including the dependence of their performance on the size of problem
instances), and we compare their performance with computational
heuristics typically used to approximately solve these problems. We
also interview experiment participants in order to infer the most
typical strategies used by them in experiments, as well as how these
strategies depend on the form and size of problem instances.
- Lexicon-based Document Representation
Gloria Virginia, Hung Son Nguyen 27-46
It is a big challenge for an information retrieval system (IRS) to interpret the queries made by users, particularly because the common form of query consists of very few terms. The tolerance rough set model (TRSM), an extension of rough set theory, has demonstrated its ability to enrich document representation in terms of semantic relatedness. However, system efficiency is at stake because the weight vector created by TRSM (the TRSM-representation) is much less sparse. We mapped the terms occurring in the TRSM-representation to terms in the lexicon, hence the final representation of a document was a weight vector consisting only of terms that occur in the lexicon (the lexicon-representation). The lexicon-representation can be viewed as a compact form of the TRSM-representation in a lower-dimensional space, and it eliminates all informal terms previously occurring in the TRSM-vector. Given these facts, we may expect a more efficient system. We employed recall and precision, commonly used in information retrieval, to evaluate the effectiveness of the lexicon-representation. Based on our examination, we found that the effectiveness of the lexicon-representation is comparable with that of the TRSM-representation, while its efficiency should be better than that of the existing TRSM-representation. We concluded that lexicon-based document representation is another viable alternative for representing a document while taking semantics into account. We intend to implement the lexicon-representation together with linguistic computation, such as tagging and feature selection, in order to retrieve more relevant terms with high weight. With regard to the TRSM method, enhancing the quality of the tolerance classes is crucial, given that the TRSM method relies entirely on them. We plan to incorporate other resources, such as Wikipedia Indonesia, to generate better tolerance classes.
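The mapping step described in the abstract can be sketched as a simple projection of the enriched weight vector onto the lexicon vocabulary (a toy illustration with hypothetical terms and weights, not the authors' implementation):

```python
def lexicon_representation(trsm_vector, lexicon):
    """Keep only the weights of terms that occur in the lexicon,
    yielding a lower-dimensional vector free of informal terms."""
    return {t: w for t, w in trsm_vector.items() if t in lexicon}

# Hypothetical toy data: the informal term "gonna" is dropped.
trsm = {"retrieval": 0.8, "gonna": 0.3, "semantics": 0.5}
lex = {"retrieval", "semantics", "document"}
print(lexicon_representation(trsm, lex))
# {'retrieval': 0.8, 'semantics': 0.5}
```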
- Coupling as Strategy for Reducing Concept-Drift in Never-ending Learning Environments
E.R. Hruschka Jr., M. C. Duarte, M. C. Nicoletti 47-61
The design and implementation of autonomous computational
systems that incrementally learn, and that continually use what has
been learnt to refine their learning abilities over
time, is still a goal far from being achieved. Such dynamic
systems would conform to the main ideas of the automatic
learning model conventionally characterized as never-ending
learning (NEL).
The never-ending approach to learning exhibits
similarities to the semi-supervised (SS) model which has been
successfully implemented by bootstrap learning methods.
Bootstrap learning has been one of the most successful among
the SS-methods proposed to date and, as such, the natural
candidate for implementing NEL systems.
Bootstrap methods learn from an available labeled set of data,
use the induced knowledge to label some unlabeled new data and,
recurrently, learn again from both sets of data in a cyclic manner. However, the use
of SS methods, particularly bootstrapping methods, to implement
NEL systems can give rise to a problem
known as concept-drift.
Errors that may occur when the system automatically
labels new unlabeled data can, over time, cause the system
to run off track. The development of new strategies to lessen
the impact of concept-drift is an important issue that should
be addressed if the goal is to increase the plausibility of
developing such systems, employing bootstrap methods.
Coupling techniques can play an important role in reducing
concept-drift effects over machine learning systems, particularly those designed to
perform tasks related to machine reading.
This paper proposes and formalizes relevant coupling strategies
for dealing with the concept-drift problem in a NEL environment implemented as the RTWP (Read The Web in Portuguese) system; initial results
show that they are promising strategies for minimizing the problem under a few system settings.
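The bootstrap cycle described above (train on labeled data, self-label confident unlabeled data, retrain) can be sketched as a generic self-training loop on 1-D toy data; this is an illustration of the idea, not the RTWP system, and the margin-based confidence test is an assumption for the sketch:

```python
import statistics

def self_train(labeled, unlabeled, rounds=3, margin=1.0):
    """Generic bootstrap (self-training) loop: fit class means on the
    labeled data, label unlabeled points that are clearly closer to
    one class, fold them back in, and repeat.  Every wrong self-label
    compounds over rounds -- the concept-drift risk in the abstract."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        means = {c: statistics.mean(x for x, lc in labeled if lc == c)
                 for c in {lc for _, lc in labeled}}
        newly = []
        for x in unlabeled:
            dists = sorted((abs(x - m), c) for c, m in means.items())
            # label only when confidently closer to one class
            if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
                newly.append((x, dists[0][1]))
        if not newly:
            break
        labeled.extend(newly)
        unlabeled = [x for x in unlabeled
                     if all(x != y for y, _ in newly)]
    return labeled
```

With seeds at 0.0 (class 'a') and 10.0 (class 'b'), the points 1.0 and 9.0 are confidently self-labeled, while the ambiguous midpoint 5.0 is left unlabeled.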
- Unsupervised Tracking, Roughness and Quantitative Indices
Sankar K. Pal, Debarati Chakraborty 63-90
This paper presents a novel methodology for tracking a single moving
object in a video sequence using concepts from rough set theory.
The novelty of this technique is that it does not consider any prior
information about the video sequence, unlike many existing
techniques. The initial target model is constructed using a
median-filtering-based foreground detection technique, and after that the
target is reconstructed in every frame according to the rough-set-based
feature reduction concept, incorporating a measure of
indiscernibility instead of the indiscernibility matrix. The area of
interest is initially defined roughly in every frame based on the
object shift in the previous frames, and after reduction of
redundant features the object is tracked. The measure of
indiscernibility of a feature is defined based on its degree of
belonging (DoB) to the target. Three quantitative indices based on
rough sets, feature similarity and the Bhattacharyya distance
are proposed to evaluate the performance of tracking and to detect
mis-tracked frames during tracking so that they can be corrected.
Unlike many existing measures,
the proposed ones do not require knowledge of
the ground truth or the trajectory of the video sequence.
Extensive experimental results are given to
demonstrate the effectiveness of the method. Comparative
performance is demonstrated both visually and quantitatively.
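One of the proposed indices builds on the Bhattacharyya distance between target and candidate histograms. A sketch using the standard definition (assuming normalized histograms; the paper's exact formulation may differ):

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms:
    d = -ln( sum_i sqrt(p_i * q_i) ).  Identical distributions give
    distance 0; disjoint supports give distance infinity."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc) if bc > 0 else math.inf
```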
- Knowledge Representation for Human-Computer Interaction
in a System Supporting Conceptual Design
Ewa Grabska, Grażyna Ślusarczyk, Szymon Gajek 91-110
This paper deals with the conceptual stage of the design process
related to civil engineering. Different types of design knowledge
representation essential in visual aspects of a human-computer
dialogue are considered. They comprise designer's drawings
expressing forms, layouts and functionality of designed artifacts,
internal graph-based data structures obtained automatically on the
basis of these drawings and logic formulas extracted from the
graph data structures. The reasoning mechanism, based on
first-order logic, enables the system to assess the compatibility
of designs with specified requirements and constraints. The
feedback given by the system, along with visualizations of initial
design ideas, reinforces the designer's constructive visual
perception and creativity. The presented approach is illustrated with the example
of designing an indoor swimming pool.
- Simulated Activation Patterns of Biological Neurons Cultured onto a Multi-Electrode Array Based on a Modified Izhikevich's Model
J.H. Saito, J.F. Mari, E. Pedrino, J.B. Destro-Filho, M.C. Nicoletti 111-132
Recently we have witnessed research efforts into developing real-time hybrid systems implementing
interactions between computational models and live tissues, in an attempt to learn more about the
functioning of biological neural networks.
A fundamental role in the development of such systems is played by the Multi-Electrode Array (MEA).
In vitro cultures of neurons on MEAs, however, have some drawbacks, such as the need for
rigorous adherence to sterile techniques, careful choice and replenishment of media, and
maintenance of pH, temperature, and osmolarity.
An alternative way to study and investigate live tissues, which partially circumvents some of the
problems with in vitro cultures, is to simulate them.
This paper proposes Sim-MEA, a system for modeling and simulating neuronal
communication in an MEA-based in vitro culture.
Sim-MEA implements a modified Izhikevich model that takes into account both the distances between
neurons and the distances between microelectrodes and neurons.
The system also provides ways of simulating microelectrodes and their recorded signals as well as
recovering experimental MEA culture data from their images.
The soundness of the Sim-MEA simulation procedure was empirically evaluated using data from
an in vitro culture of hippocampal neurons from Wistar rat embryos.
Results from simulations, compared to those of the in vitro experiment, are presented and
discussed.
The paper also describes a few experiments (varying several parameter values) to illustrate and
detail the approach.
- Automatic Learning of Temporal Relations Under the Closed
World Assumption
M.C. Nicoletti, F.O.S.S. Lisboa, E.R. Hruschka Jr. 133-151
Time plays an important role in the vast majority of problems and,
as such, it is a vital issue to be considered when developing computer
systems for solving problems.
In the literature, one of the most influential formalisms for representing
time is known as Allen's Temporal Algebra based on a set of 13
relations (basic and reversed) that may hold between two time intervals.
In spite of having a few drawbacks and limitations, Allen's formalism
is still a convenient representation due to its simplicity and
implementability, and also due to the fact that it has been the basis
of several extensions.
This paper explores the automatic learning of Allen's temporal
relations by the inductive logic programming system FOIL, taking
into account two possible representations for a time interval:
(i) as a primitive concept and (ii) as a concept defined by the
primitive concept of time point.
The goals of the experiments described in the paper are
(1) to explore the viability of both representations for use
in automatic learning;
(2) to compare the ease of learning and the interpretability of the results;
(3) to evaluate the impact of the given examples on inducing a proper
representation of the relations; and
(4) to experiment with both representations under the closed
world assumption (CWA), which would ease continuous learning
using FOIL.
Experimental results are presented and discussed as evidence
that the CWA can be a convenient strategy when learning Allen's
temporal relations.
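For reference, the 13 Allen relations between two time intervals can be computed directly from the endpoint orderings; a minimal sketch (the interval encoding and relation names follow the standard presentation, not the FOIL clauses learned in the paper):

```python
def allen_relation(i, j):
    """Return the Allen relation holding between intervals i = (a, b)
    and j = (c, d), with a < b and c < d.  The 13 relations are:
    before/after, meets/met-by, overlaps/overlapped-by, starts/started-by,
    during/contains, finishes/finished-by, and equals."""
    a, b = i
    c, d = j
    if b < c:
        return "before"
    if d < a:
        return "after"
    if b == c:
        return "meets"
    if d == a:
        return "met-by"
    if a == c and b == d:
        return "equals"
    if a == c:
        return "starts" if b < d else "started-by"
    if b == d:
        return "finishes" if a > c else "finished-by"
    if a > c and b < d:
        return "during"
    if a < c and b > d:
        return "contains"
    return "overlaps" if a < c else "overlapped-by"
```

For example, (1, 3) overlaps (2, 4), and (2, 3) is during (1, 4).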
- Robust Rough-Fuzzy C-Means Algorithm: Design and Applications
in Coding and Non-coding RNA Expression Data Clustering
Pradipta Maji, Sushmita Paul 153-174
Cluster analysis is a technique that divides a given data set
into a set of clusters in such a way that two objects
from the same cluster are as similar as possible
and the objects from different clusters are as dissimilar as
possible. Against this background, different rough-fuzzy
clustering algorithms have been shown to be successful at
finding overlapping and vaguely defined clusters. However, the
crisp lower approximation of a cluster in existing rough-fuzzy
clustering algorithms is usually assumed to be spherical
in shape, which prevents these algorithms from finding clusters of arbitrary shape.
In this regard, this paper presents a new rough-fuzzy clustering algorithm,
termed robust rough-fuzzy c-means. Each cluster
in the proposed clustering algorithm is represented by a set
of three parameters, namely, a cluster prototype, a possibilistic
fuzzy lower approximation, and a probabilistic fuzzy boundary.
The possibilistic lower approximation helps in
discovering clusters of various shapes. The cluster prototype depends on the
weighted average of the possibilistic lower approximation and the probabilistic
boundary. The proposed algorithm is robust in the sense that it can find
overlapping and vaguely defined clusters with arbitrary shapes
in noisy environments. An efficient method is presented, based
on Pearson's correlation coefficient, to select initial prototypes
of different clusters. A method is also introduced based on cluster
validity index to identify optimum values of different parameters
of the initialization method and the proposed clustering algorithm.
The effectiveness of the proposed algorithm,
along with a comparison with other clustering algorithms,
is demonstrated on synthetic as well as coding and non-coding
RNA expression data sets using some cluster
validity indices.
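The prototype computation described above, a weighted average over the lower approximation and the boundary, follows the general rough c-means scheme; a minimal sketch of that idea on plain point lists, with an assumed weight parameter w_low (an illustration of the scheme, not the proposed RRFCM algorithm):

```python
def rough_prototype(lower, boundary, w_low=0.95):
    """One prototype (centroid) update in the spirit of rough c-means:
    a weighted average of the lower-approximation mean and the
    boundary mean.  Points are plain lists of coordinates."""
    def mean(pts):
        return [sum(xs) / len(pts) for xs in zip(*pts)]
    if not boundary:      # all members are certain
        return mean(lower)
    if not lower:         # all members are ambiguous
        return mean(boundary)
    ml, mb = mean(lower), mean(boundary)
    return [w_low * l + (1 - w_low) * b for l, b in zip(ml, mb)]
```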
- Sufficiently Near Neighbourhoods of Points in Flow Graphs.
A Near Set Approach
James F. Peters, Doungrat Chitcharoen 175-196
This paper introduces sufficiently near visual neighbourhoods of points and neighbourhoods of sets in digital image flow graphs (NDIFGs). An NDIFG is an extension of a Pawlak flow graph. The study of sufficiently near neighbourhoods in NDIFGs stems from recent work on near sets and topological spaces via near and far, especially in terms of visual neighbourhoods of points that are sufficiently near each other. From a topological perspective, non-spatially near sets represent an extension of proximity space theory and the original insight concerning spatially near sets by F. Riesz at the International Congress of Mathematicians (ICM) in 1908. In the context of Herrlich nearness, sufficiently near neighbourhoods of sets in NDIFGs provide a new perspective on topological structures in NDIFGs. The practical implications of this work are significant. With the advent of a study of the nearness of open as well as closed neighbourhoods of points and of sets in NDIFGs, it is now possible to do information mining on a more global level and achieve new insights concerning the visual information embodied in the images that provide input to an NDIFG.
- An Insight Into The Z-number Approach To CWW
Sankar K. Pal, Romi Banerjee, Soumitra Dutta, Samar Sen Sarma 197-229
The Z-number is a new fuzzy-theoretic concept, proposed by Zadeh in 2011. It extends the basic philosophy of Computing With Words (CWW) to include the perception of uncertainty of the information conveyed by a natural language statement. The Z-number thus serves as a model of linguistic summarization of natural language statements and a technique to merge human-affective perspectives with CWW, and consequently it can be envisaged to play a radical role in the domain of CWW-based system design and Natural Language Processing (NLP). This article presents a comprehensive investigation of the Z-number approach to CWW. We present here: a) an outline of our understanding of the generic architecture, algorithm and challenges underlying CWW in general; b) a detailed study of the Z-number methodology, where we propose an algorithm for CWW using Z-numbers, define a Z-number-based operator for the evaluation of the level of requirement satisfaction, and describe simulation experiments of CWW utilizing Z-numbers; and c) an analysis of the strengths and the challenges of Z-numbers, together with suggested possible solution strategies. We believe that this article would inspire research on the need for inclusion of human-behavioural aspects into CWW, as well as the integration of CWW and NLP.
- Soft Nearness Approximation Spaces
Mehmet Ali Öztürk, Ebubekir Inan 231-250
In 1999, Molodtsov introduced the theory of soft sets, which can be seen as
a new mathematical approach to vagueness. In 2002, near set theory was initiated
by J. F. Peters as a generalization of Pawlak's rough set theory. In the near
set approach, every perceptual granule is a set of objects that have their
origin in the physical world. Objects that have affinities to some degree,
i.e., objects with similar descriptions, are considered perceptually near
each other. Also, the concept of near groups has been investigated by
İnan and Öztürk [30]. The present paper aims to combine the soft set
approach with near set theory, which gives rise to the new concepts of soft
nearness approximation spaces (SNAS), soft lower and upper approximations.
Moreover, we give some examples and properties of these soft nearness approximations.
124 (3) 2013
- Learning Behaviors of Functions with Teams
Bala Kalyanasundaram, Mahe Velauthapillai 251-270
We consider the inductive inference model of Gold [15].
Suppose we are given a set of functions that are
learnable with a certain number of mind changes and errors.
What can we consistently predict about those functions if
we are allowed fewer mind changes or errors?
In [20] we relaxed the notion of exact learning by considering some higher-level
properties of the input-output behavior of a given function. In this context, a
learner produces a program that describes the property of a given function.
Can we predict generic properties such as threshold or modality
if we allow a smaller number of mind changes or errors?
These questions were completely answered in [20] when the learner
is restricted to a single IIM. In this paper we allow
a team of IIMs to collaborate in the learning process.
The learning is considered to be
successful if any one of the team members succeeds.
A motivation for this extension is to understand and characterize
properties that are learnable for a given set of functions in a team environment.
- Complete Conceptual Schema Algebras
Hui Ma, René Noack, Klaus-Dieter Schewe, Bernhard Thalheim, Qing Wang
271-295
A schema algebra comprises operations on database schemata for a given data model. Such algebras are useful in database design as well as in schema integration. In this article we address the necessary theoretical underpinnings by introducing a novel notion of conceptual schema morphism that captures at the same time the conceptual schema and its semantics by means of the set of valid instances. This leads to a category of schemata that is finitely complete and co-complete, which is the basis for a notion of completeness of schema algebras: a schema algebra is complete if it captures all universal constructions in the category of schemata. We exemplify this notion of completeness for a recently introduced particular schema algebra.
- Residual Closeness in Cycles and Related Networks
Zeynep Nihan Odabas, Aysun Aytaç 297-307
Networks are known to be prone to node or link failures. A central issue in the analysis of networks is the assessment of their stability and reliability. The main aim is to understand, predict, and possibly even control the behavior of a networked system under attacks or malfunctions of any type. A central concept that is used to assess the stability and robustness of the performance of a network under failures is that of vulnerability. A network is usually represented by an undirected simple graph where vertices represent processors and edges represent links between processors. Different approaches to properly defining a measure of graph vulnerability have been proposed so far. In this paper, we study the vulnerability of cycles and related graphs to the failure of individual vertices, using a measure called residual closeness which provides a more sensitive characterization of the graph than some other well-known vulnerability measures.
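Residual closeness is commonly defined, following Dangalchev, via the closeness C = Σ_{i≠j} 2^{-d(i,j)}: delete each vertex in turn and take the minimum closeness of the resulting graph. A sketch under that definition (the paper's exact variant may differ):

```python
from collections import deque

def closeness(adj):
    """Dangalchev closeness of a graph given as an adjacency dict:
    C = sum over ordered pairs i != j of 2**(-d(i, j)),
    with unreachable pairs contributing 0."""
    total = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(2.0 ** -d for v, d in dist.items() if v != s)
    return total

def residual_closeness(adj):
    """Vertex residual closeness: the minimum closeness obtained after
    deleting a single vertex (smaller means more vulnerable)."""
    def without(k):
        return {u: [v for v in nb if v != k]
                for u, nb in adj.items() if u != k}
    return min(closeness(without(k)) for k in adj)
```

For the 4-cycle, deleting any vertex leaves a 3-vertex path, whose closeness is 4·2^{-1} + 2·2^{-2} = 2.5.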
- A Framework for Coxeter Spectral Analysis of
Edge-bipartite Graphs, their Rational Morsifications and Mesh Geometries of Root Orbits
Daniel Simson 309-338
Following the spectral Coxeter analysis of matrix morsifications for Dynkin diagrams, spectral graph theory, a graph coloring technique, and algebraic methods in graph theory, we continue our study of the category UBigr_{n} of loop-free edge-bipartite (signed) graphs ∆, with n ≥ 2 vertices, by means of the Coxeter number c_{∆}, the Coxeter spectrum specc_{∆} of ∆, that is, the spectrum of the Coxeter polynomial cox_{∆}(t) ∈ Z[t], and the Z-bilinear Gram form b_{∆}: Z^{n} × Z^{n} → Z of ∆ [SIAM J. Discrete Math. 27(2013)]. Our main inspiration for the study comes from the representation theory of posets, groups and algebras, Lie theory, and Diophantine geometry problems.
We show that the Coxeter spectral classification of connected edge-bipartite graphs ∆ in UBigr_{n} reduces to the Coxeter spectral classification of rational matrix morsifications A ∈ ∧Mor_{D∆} for a simply-laced Dynkin diagram D∆ associated with ∆. Given ∆ in UBigr_{n}, we study the isotropy subgroup Gl(n, Z)_{∆} of Gl(n, Z) that contains the Weyl
group W_{∆} and acts on the set ∧Mor_{∆} of rational matrix morsifications A of ∆ in such a way that the map A→ (specc_{A}, detA, c_{∆}) is Gl(n, Z)_{∆}-invariant.
It is shown that, for n ≤ 6, specc_{∆} is the spectrum of one of the Coxeter polynomials listed in Tables 3.11-3.11(a) (we determine them by computer search using symbolic and numeric computation).
The question of whether two connected positive edge-bipartite graphs ∆, ∆′ in UBigr_{n}, with specc_{∆} = specc_{∆′},
are Z-bilinearly equivalent is studied in the paper. The problem of whether any
Z-invertible matrix A ∈ M_{n}(Z) is Z-congruent with its transpose A^{tr} is also discussed.
- Toroidal Algorithms for Mesh Geometries of Root Orbits of the Dynkin Diagram D_{4}
Daniel Simson 339-364
By applying symbolic and numerical computation and the spectral Coxeter analysis technique of matrix morsifications introduced in our previous paper [Fund. Inform. 124(2013)], we present a complete algorithmic classification of the rational
morsifications and their mesh geometries of root orbits for the Dynkin
diagram D_{4}. The structure of the isotropy group Gl(4, Z)_{ D4}
of D_{4} is also studied.
As a byproduct of our technique we show that, given a connected loop-free positive edge-bipartite graph ∆, with n ≥ 4 vertices (in the sense of our paper [SIAM J. Discrete Math. 27(2013)]) and the positive definite Gram unit form q_{∆}: Z^{n}
→ Z, any positive integer d ≥ 1 can be presented as d = q_{∆}(v),
with v ∈ Z^{n}. In the case
n = 3, a positive integer d ≥ 1 can be presented as d = q_{∆}(v),
with v ∈ Z^{n}, if and only if d is not of the form 4^{a}(16·b + 14), where a and b are non-negative integers.
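The stated criterion for n = 3 is easy to check mechanically: strip factors of 4 and test the residue modulo 16. A small sketch:

```python
def representable_n3(d):
    """Check the stated criterion: for n = 3, d >= 1 is representable
    as q(v) iff d is NOT of the form 4**a * (16*b + 14)."""
    while d % 4 == 0:
        d //= 4
    return d % 16 != 14

# The excluded values below 60 are 14, 30, 46, and 56 = 4 * 14.
print([d for d in range(1, 60) if not representable_n3(d)])
# [14, 30, 46, 56]
```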
124 (4) 2013
- Special Issue on the Italian Conference on Computational Logic:
CILC 2011. Preface.
- The CHR-based Implementation of the SCIFF Abductive System
Marco Alberti, Marco Gavanelli, Evelina Lamma 365-381
Abduction is a form of inference that supports hypothetical reasoning
and has been applied to a number of domains, such as diagnosis,
planning, and protocol verification. Abductive Logic Programming (ALP) is
the integration of abduction in logic programming. Usually, the
operational semantics of an ALP language is defined as a proof
procedure.
The first implementations of ALP proof-procedures were based on the
meta-interpretation technique, which is flexible but limits the use of
the built-in predicates of logic programming systems.
Another, more recent, approach exploits theoretical results on the
similarity between abducibles and constraints. With this approach,
which bears the advantage of an easy integration with built-in
predicates and constraints, Constraint Handling Rules has been the
language of choice for the implementation of abductive proof
procedures. The first CHR-based implementation mapped integrity
constraints directly to CHR rules, which is an efficient
solution, but prevents defined predicates from being in the body of
integrity constraints and does not allow a sound treatment of
negation by default.
In this paper, we describe the CHR-based implementation of the SCIFF
abductive proof-procedure, which follows a different approach. The SCIFF implementation maps integrity constraints to CHR constraints,
and the transitions of the proof-procedure to CHR rules, making it
possible to treat default negation, while retaining the other
advantages of CHR-based implementations of ALP proof-procedures.
- Coalitions of Arguments: An Approach with Constraint Programming
Stefano Bistarelli, Francesco Santini 383-401
The aggregation of generic items into coalitions leads to the creation of sets of homogeneous entities. In this paper we accomplish this for an input set of arguments, and the result is a partition into distinct
lines of
thought, i.e., groups of "coherent" ideas. We extend Dung's Argumentation
Framework (AF) in order to deal with coalitions of arguments. The initial
set of arguments is partitioned into pairwise disjoint subsets. All the
coalitions found satisfy the same property, inherited from Dung's semantics, e.g., all the
coalitions in the partition are admissible (or conflict-free,
complete, stable): they are generated according to Dung's principles. Each of these coalitions can be assigned to a different agent. We use
Soft Constraint Programming as a formal approach to model and solve
such partitions in weighted AFs: semiring algebraic structures can be used to model
different optimization criteria for the obtained coalitions. Moreover, we implement and solve the presented problem with JaCoP, a Java constraint solver, and we test the code over a
small-world network.
- Product and Production Process Modeling and Configuration
Dario Campagna, Andrea Formisano 403-425
Product configuration systems are an emerging technology that supports
companies in deploying mass customization strategies.
Such strategies need to cover the management of the whole customizable product cycle.
Adding process modeling and configuration features to a product configurator may improve its
ability to assist mass customization development.
In this paper, we describe a modeling framework, PRODPROC,
that allows one to model both a product and its production process.
We first introduce our framework by describing how configurable products are modeled.
Then, we describe the main features and capabilities offered to model production processes and to link them
with the corresponding product models.
The configuration task (namely, the procedure that, from a configurable object/activity generates a
configured product/process) is then analyzed. We also outline a possible CSP-based implementation
of a configurator.
A comparison with some of the existing systems for product configuration and process modeling
emphasizes that none of the considered systems/tools offers the complete set of features supported
by PRODPROC for interdependent product and process modeling/configuration.
- On the Satisfiability Problem for a 4-level Quantified Syllogistic
and Some Applications to Modal Logic
Domenico Cantone, Marianna Nicolosi Asmundo 427-448
We introduce a multi-sorted stratified syllogistic, called 4LQS^{R},
admitting variables of four sorts and a restricted form of
quantification over variables of the first three sorts, and prove that
it has a solvable satisfiability problem by showing that it enjoys a
small model property. Then, we consider the fragments (4LQS^{R})^{h} of
4LQS^{R}, consisting of 4LQS^{R}-formulae
whose quantifier prefixes
have length bounded by h ≥ 2 and satisfying certain additional
syntactical constraints, and prove that each of them has an
NP-complete satisfiability problem. Finally we show that the
modal logic K45 can be expressed in (4LQS^{R})^{3}.
- Nested Weight Constraints in ASP
Stefania Costantini, Andrea Formisano 449-464
Weight constraints are a powerful programming construct that has proved very
useful within the Answer Set Programming paradigm.
In this paper, we argue that practical Answer Set Programming might
profit from introducing some forms of nested weight constraints.
We define such empowered constraints (that we call "Nested Weight Constraints")
and discuss their semantics and their complexity.
- On Modal μ-Calculus in S5 and Applications
Giovanna D'Agostino, Giacomo Lenzi 465-482
We consider the μ-calculus over graphs where the accessibility relation is an equivalence (S5-graphs).
We show that the vectorial μ-calculus model checking problem over arbitrary graphs reduces to the vectorial, existential μ-calculus model checking problem over S5 graphs. Moreover, we give a proof that satisfiability of μ-calculus in S5 is NP-complete, and by using S5 graphs we give a new proof that the satisfiability problem of the existential μ-calculus is also NP-complete. Finally we prove that on multimodal S5, in contrast with the monomodal case, the fixpoint hierarchy of the μ-calculus is infinite and the finite model property fails.
- Controlling Polyvariance for Specialization-based Verification
Fabio Fioravanti, Alberto Pettorossi, Maurizio Proietti, Valerio Senni 483-502
Program specialization has been proposed as a means of improving
constraint-based analysis of infinite state reactive systems.
In particular, safety properties can be specified by constraint logic programs encoding
(backward or forward) reachability algorithms. These programs are then
transformed, before
their use for checking safety, by specializing them with respect to the initial states
(in the case of backward reachability) or with respect to the unsafe states (in the case
of forward reachability). By using the specialized reachability programs, we can
considerably increase the number of successful verifications.
An important feature of specialization algorithms is the so-called polyvariance,
that is, the number of specialized variants of the same predicate that are
introduced by specialization.
Depending on this feature, the specialization time, the size of the specialized
program, and the number of successful verifications may vary. We present a specialization
framework which is more general than previous proposals and provides control
on polyvariance.
We demonstrate, through experiments on several infinite state reactive systems,
that by a careful choice of the degree of polyvariance we can design
specialization-based verification procedures that are both efficient and precise.
- A Logic-based Computational Method for the
Automated Induction of Fuzzy Ontology Axioms
Francesca A. Lisi, Umberto Straccia 503-519
Fuzzy Description Logics (DLs) are logics that allow one to deal
with structured vague knowledge.
Although a considerable amount of work has
been carried out in recent years concerning the use of fuzzy DLs as ontology languages, the problem of automatically managing the evolution of fuzzy ontologies has received very little attention so far. We describe here a logic-based computational method for the automated induction of fuzzy ontology axioms which follows the machine learning approach of Inductive Logic Programming. The potential usefulness of the method is illustrated
by means of an example taken from the tourism application domain.
- MCINTYRE: A Monte Carlo System for Probabilistic Logic Programming
Fabrizio Riguzzi 521-541
Probabilistic Logic Programming is receiving increasing attention for its ability to model domains with complex and uncertain relations among entities.
In this paper we concentrate on the problem of approximate inference in probabilistic logic programming languages based on the distribution semantics.
A successful approximate approach is based on Monte Carlo sampling, which consists in verifying the truth of the query in a normal program sampled from the probabilistic program.
The ProbLog system includes such an algorithm and so does the cplint suite.
In this paper we propose an approach for Monte Carlo inference that is based on a program transformation that translates a probabilistic program into a normal program to which the query can be posed. The current sample is stored in the internal database of the Yap Prolog engine.
The resulting system, called MCINTYRE for Monte Carlo INference wiTh Yap REcord, is evaluated on various problems: biological networks, artificial datasets and a hidden Markov model. MCINTYRE is compared with the Monte Carlo algorithms of ProbLog and
cplint and with the exact inference of the PITA system. The results show that MCINTYRE is faster than the other Monte Carlo systems.
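The sampling scheme underlying such Monte Carlo systems can be sketched in a few lines: under the distribution semantics, each probabilistic fact is included independently with its probability, and the query is tested in the sampled world. This is an illustrative toy, not MCINTYRE's Yap-based implementation:

```python
import random

def monte_carlo(prob_facts, query_holds, n=10000, seed=0):
    """Estimate a query probability under the distribution semantics:
    sample a normal program (world) by independently including each
    probabilistic fact, then test the query in that world."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        world = {f for f, p in prob_facts.items() if rng.random() < p}
        if query_holds(world):
            hits += 1
    return hits / n

# Hypothetical toy program: path(a, c) holds iff both edges are sampled,
# so the exact probability is 0.5 * 0.5 = 0.25.
facts = {"edge(a,b)": 0.5, "edge(b,c)": 0.5}
est = monte_carlo(facts, lambda w: {"edge(a,b)", "edge(b,c)"} <= w)
print(round(est, 2))
```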
- Logic-based Reasoning Support for SBVR
Dmitry Solomakhin, Enrico Franconi, Alessandro Mosca 543-560
Automated support to enterprise modeling has increasingly become a subject of interest for organizations seeking solutions for storage, distribution and analysis of knowledge about business processes. This interest has recently resulted in approving the standard for specifying Semantics of Business Vocabulary and Business Rules (SBVR). Despite the existence of formally grounded notations, up to now SBVR still lacks a sound and consistent logical formalization which would allow developing automated solutions able to check the consistency of a set of business rules.
This work reports on the attempt to provide logical foundations for SBVR by means of defining a specific first-order deontic-alethic logic (FODAL). The connections of FODAL with the modal logic
QK and the description logic ALCQI have been investigated and, on top of the obtained theoretical results, a special tool providing automated support for consistency checks of a set of
ALCQI-expressible deontic and alethic business rules has been implemented.
- A Tabled Prolog Program for Solving Sokoban
Neng-Fa Zhou, Agostino Dovier 561-575
This paper presents our program in B-Prolog submitted to the third ASP solver
competition for the Sokoban problem. This program, based on dynamic programming, treats Sokoban as a generalized shortest path problem. It divides a problem into independent subproblems and uses mode-directed tabling to store subproblems and their answers. This program is very simple but quite efficient. Without the use of any sophisticated domain knowledge, it easily solves 14 of the 15 instances used
in the competition. We show that the approach can be easily applied to other optimization planning problems.
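Mode-directed tabling with a min mode keeps, for each subgoal, only the best answer found so far, much like the cost table in a shortest-path search. A rough Python analogue of that idea (an illustration of the tabling principle, not the B-Prolog program):

```python
from heapq import heappush, heappop

def tabled_shortest_path(start, goal, moves):
    """Shortest-path search where `table` plays the role of a
    min-mode table: per state, only the best cost found so far is
    kept, and dominated answers are discarded."""
    table = {start: 0}
    frontier = [(0, start)]
    while frontier:
        cost, state = heappop(frontier)
        if state == goal:
            return cost
        if cost > table.get(state, float("inf")):
            continue  # a better answer is already tabled
        for nxt, step in moves(state):  # step = move cost
            new = cost + step
            if new < table.get(nxt, float("inf")):
                table[nxt] = new       # table the improved answer
                heappush(frontier, (new, nxt))
    return None
```

For example, on the integer line with unit-cost moves left and right, the tabled search finds the 5-step path from 0 to 5.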