Starting from the pioneering work on Linda and Gamma, coordination models, languages and technologies have gone through an amazing evolution process over the years. From closed to open systems, from parallel computing to multi-agent systems, from database integration to knowledge-intensive environments, coordination abstractions and technologies have gained in relevance and impact in those scenarios where complexity is a key factor. In this talk, we outline and motivate 25 years of evolution of coordination models and languages, with particular attention to logic-based and declarative approaches, and discuss their potential perspectives in the future of artificial systems.
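The Linda model mentioned above coordinates processes through a shared tuple space accessed via generative primitives: out deposits a tuple, rd reads a matching tuple, and in withdraws one, with rd and in suspending until a match exists. As a rough illustration only (a minimal Python sketch, not any actual Linda implementation; the class and method names are ours), the core semantics can be rendered as:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space with out/rd/in primitives.

    A template field of None acts as a wildcard; rd and in_ block
    until a matching tuple is present in the space.
    """

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        # Deposit a tuple into the space (non-blocking).
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template):
        # A template matches a tuple of the same length whose fields
        # equal the template's fields wherever these are not None.
        for tup in self._tuples:
            if len(tup) == len(template) and all(
                t is None or t == f for t, f in zip(template, tup)
            ):
                return tup
        return None

    def rd(self, template):
        # Read (without removing) a matching tuple, blocking until one exists.
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            return tup

    def in_(self, template):
        # Withdraw a matching tuple, blocking until one exists.
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup
```

For instance, after space.out(("temp", 21)), a consumer calling space.rd(("temp", None)) obtains ("temp", 21) without removing it, while space.in_(("temp", None)) withdraws it.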

Static analysis is important for verification, debugging and optimisation of computer programs. In this context, we present the use of propositional logic formulas for the representation of dependency information about variables for Java and Java bytecode programs. The idea comes from the analysis of logic programs and from a mathematical theory known as "domain refinement". The approach is cast here inside a denotational formalism and applied to the static analysis of Java programs, in order to determine when local variables can never contain the special "dangerous" value null. Abstract interpretation is the framework that we use to prove the correctness of the technique. We discuss the benefits of the approach, namely, the complete context and flow sensitivity of the resulting analysis. We also discuss its main limitation, that is, a very weak approximation for instance variables (also known as object "fields"). Hence, we overcome the latter limitation with an oracle-based technique. We conclude with an experimental evaluation on real programs and a discussion of further improvements and open problems.

The combination of logic and probability is very useful for modeling domains with complex and uncertain relationships among entities. Many probabilistic logic languages have been proposed in various research fields. In logic programming, the distribution semantics has recently gained increased attention and is adopted by many languages such as the Independent Choice Logic, PRISM, Logic Programs with Annotated Disjunctions and ProbLog. Other languages instead follow a knowledge-based model construction approach in which the probabilistic logic theory is used directly as a template for generating an underlying complex graphical model. In logic programming, CLP(BN) is based on a direct translation into Bayesian networks. Markov Logic instead uses full first order logic formulas for specifying an underlying Markov network. The talk will illustrate these approaches for combining logic and probability and will highlight similarities and differences. The talk will also introduce the types of reasoning that can be performed with these languages: inference, weight learning and structure learning. In inference we want to compute the probability of a query given the model and, possibly, some evidence. In weight learning we know the structural part of the model (the logic formulas) but not the numeric part (the weights) and we want to infer the weights from data. In structure learning we want to infer both the structure and the weights of the model from data. The tutorial will then illustrate existing approaches for inference in probabilistic logic programming languages. It will discuss in detail algorithms for performing inference on languages that follow the distribution semantics and in particular the PITA algorithm that uses tabling and answer subsumption. PITA is of interest for its speed and versatility, as it can easily be optimized for simpler settings and even possibilistic uncertain reasoning.
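Under the distribution semantics, a program defines a probability distribution over "worlds", each obtained by independently including or excluding every probabilistic fact; the probability of a query is the sum of the probabilities of the worlds where the query is derivable. The following self-contained sketch (ours; the function names and clause encoding are illustrative, not the API of ProbLog, PITA or any other system) makes this definition executable by exhaustive world enumeration, which is exponential and only viable for tiny programs, whereas real systems rely on knowledge compilation or, as in PITA, tabling with answer subsumption:

```python
from itertools import product

def query_prob(pfacts, rules, query):
    """Exact inference under the distribution semantics.

    pfacts: dict mapping each independent probabilistic fact (an atom)
            to its probability, e.g. {"burglary": 0.6}.
    rules:  definite clauses as (head, [body atoms]) pairs.
    Returns P(query), summed over all worlds where query is derivable.
    """
    atoms = list(pfacts)
    total = 0.0
    for choices in product([True, False], repeat=len(atoms)):
        # Probability of this world: product over the independent choices.
        p = 1.0
        world = set()
        for atom, chosen in zip(atoms, choices):
            p *= pfacts[atom] if chosen else 1.0 - pfacts[atom]
            if chosen:
                world.add(atom)
        # Forward-chain the rules to a fixpoint in this world.
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in world and all(b in world for b in body):
                    world.add(head)
                    changed = True
        if query in world:
            total += p
    return total

# Encoding of the classic program
#   0.6::burglary.  0.2::earthquake.
#   alarm :- burglary.  alarm :- earthquake.
pfacts = {"burglary": 0.6, "earthquake": 0.2}
rules = [("alarm", ["burglary"]), ("alarm", ["earthquake"])]
# P(alarm) = 1 - (1 - 0.6) * (1 - 0.2) = 0.68
```

The noisy-OR result 0.68 illustrates why inference is harder than in ordinary Bayesian networks: the two proofs of alarm overlap on worlds where both facts hold, so their probabilities cannot simply be added, which is precisely the disjoint-sum problem that knowledge-compilation approaches address.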