Program

Monday, July 4
9:00–10:30    Tutorial 1: Reasoning with Constraints, Andreas Pieris
10:30–11:00   Coffee break
11:00–12:15   Tutorial 1, second part
12:15–14:15   Buffet
14:15–15:45   Tutorial 2: Foundations of Graph Databases, Pablo Barceló
15:45–16:15   Coffee break
16:15–17:30   Tutorial 2, second part

Tuesday, July 5
9:00–10:30    Tutorial 3: Provenance, Val Tannen
10:30–11:00   Coffee break
11:00–12:15   Tutorial 3, second part
12:45–14:15   Restaurant
14:45–16:15   Tutorial 4: Enumeration, Nicole Schweikardt
16:15–16:45   Coffee break
16:45–18:00   Tutorial 4, second part

Wednesday, July 6
9:00–10:30    Tutorial 5: Probabilistic Databases, Antoine Amarilli
10:30–11:00   Coffee break
11:00–12:15   Tutorial 5, second part
12:15–14:15   Buffet
14:15–15:45   Tutorial 6: Consistent Query Answering, Jef Wijsen
15:45–16:15   Coffee break
16:15–17:30   Tutorial 6, second part

Thursday, July 7
9:00–10:30    Tutorial 7: Quantitative Reasoning about Constraint Violations, Benny Kimelfeld
10:30–11:00   Coffee break
11:00–12:15   Tutorial 7, second part
12:45–14:15   Restaurant
14:45–16:15   Tutorial 9: Ontology-Based Data Access Made Practical, Diego Calvanese
16:15–16:45   Coffee break
16:45–18:00   Tutorial 9, second part

Friday, July 8
9:00–10:30    Tutorial 8: Ontology-Mediated Query Answering, Carsten Lutz
10:30–11:00   Coffee break
11:00–12:15   Tutorial 8, second part

Saturday, July 9
8:30–10:30    Tutorial 10: Computational Fact Checking, Paolo Papotti
10:30–11:15   Coffee break
11:15–12:15   Tutorial 11: Data Quality, Floris Geerts
12:15–14:00   Buffet
14:00–15:00   Tutorial 11, second part
15:00–15:45   Coffee break
15:45–16:30   Tutorial 11, third part

Track 1: Foundational database theory. Track 2: Handling imperfect data.

Courses

Tutorial 1: Reasoning with Constraints
Andreas Pieris (University of Edinburgh, UK; University of Cyprus, Cyprus) Slides Video

In a relational database system, we can specify semantic properties in the form of integrity constraints (also known as dependencies) that should be satisfied by all databases of a certain schema, such as "every person should have at most one social security number". Such properties are crucial in the development of transparent and usable database schemas for complex applications, as well as for optimizing the evaluation of queries. In this tutorial, we are going to discuss the main algorithmic tasks that involve integrity constraints, namely the problem of logical implication of constraints and the problems of query containment and equivalence under constraints, and present the main algorithmic tool that allows us to reason with constraints: the chase procedure. Furthermore, we are going to discuss recent results on how integrity constraints can be used for semantically optimizing queries.
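
As a taste of the central tool, here is a minimal Python sketch (not from the tutorial; the fact encoding and names are invented) of a single chase step for one existential rule, R(x, y) -> exists z: S(y, z): when the rule head has no witness, a fresh labelled null is invented.

    # A single chase step for the rule R(x, y) -> exists z: S(y, z).
    # Facts are (relation, arguments) pairs; "_:n0", "_:n1", ... are labelled nulls.
    import itertools

    fresh = itertools.count()  # supply of fresh labelled nulls

    def chase_step(facts):
        new_facts = set()
        for rel, (x, y) in facts:
            if rel != "R":
                continue
            # Only fire the rule if no S(y, _) already witnesses the head.
            if not any(r == "S" and t[0] == y for r, t in facts):
                new_facts.add(("S", (y, "_:n%d" % next(fresh))))
        return facts | new_facts

    print(chase_step({("R", ("a", "b"))}))                     # adds S(b, _:n0)
    print(chase_step({("R", ("a", "b")), ("S", ("b", "c"))}))  # already satisfied

The full chase repeats such steps until no rule can fire; termination is one of the subtle issues the tutorial addresses.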

Tutorial 2: Foundations of Graph Databases
Pablo Barceló (Pontificia Universidad Católica de Chile, Chile) Slides Video

We introduce the students to the area of modeling and querying graph-structured data. We start by presenting several models that have been used to represent graph-structured data in the context of graph databases, the semantic web, knowledge graphs, and others. We then present and study various general-purpose navigational query languages, such as the regular path queries and their extensions with conjunctions, inverses, path comparisons, and the ability to talk about data values. We focus on complexity, expressive power, and static analysis and optimization for such query languages.
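
For a concrete feel for regular path queries, the following toy Python sketch (graph, automaton, and names invented for illustration) evaluates the RPQ a*b from a source node via the classic product construction: a breadth-first search over pairs of a graph node and an automaton state.

    # Evaluate the regular path query a*b from a source node by a BFS over
    # pairs (graph node, automaton state): the product construction.
    from collections import deque

    EDGES = {("u", "a", "v"), ("v", "a", "w"), ("w", "b", "x")}
    NFA = {(0, "a"): {0}, (0, "b"): {1}}  # automaton for a*b; state 1 accepts
    FINAL = {1}

    def rpq(source):
        seen = {(source, 0)}
        queue = deque(seen)
        answers = set()
        while queue:
            node, state = queue.popleft()
            if state in FINAL:
                answers.add(node)
            for u, label, v in EDGES:
                if u == node:
                    for s in NFA.get((state, label), ()):
                        if (v, s) not in seen:
                            seen.add((v, s))
                            queue.append((v, s))
        return answers

    print(rpq("u"))  # {'x'}: the path u -a-> v -a-> w -b-> x matches a*b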

Tutorial 3: Provenance
Val Tannen (University of Pennsylvania, USA) Slides Video

Imagine a computational process that uses a complex input consisting of multiple "items" (e.g., files, tables, tuples, parameters, configuration rules). The provenance analysis of such a process allows us to understand how the different input items affect the output of the computation. It can be used, for example, to derive confidence in the output (given confidences in the input items), to derive the minimum access clearance for the output (given input items with different classifications), or to minimize the cost of obtaining the output (given a complex pricing scheme for the input items). It also applies to probabilistic reasoning about an output (given distributions over the input items), as well as to output maintenance and to debugging. Provenance analysis for queries, views, database ETL tools, and schema mappings is strongly influenced by their declarative nature, which provides mathematically nice descriptions of the correlation between output and inputs. In a series of papers starting with PODS 2007, through many collaborations, we have developed an algebraic framework for describing such provenance, based on commutative semirings and semimodules over such semirings. To begin with, the framework usefully exploited the observation that, for database provenance, data use has two flavors: joint and alternative. More recently, a treatment of negation based on duality has allowed us to extend the framework to full fixpoint logics.
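
As a toy illustration of the semiring view (a minimal encoding of my own, not the tutorial's notation; data and annotations are invented), the following Python fragment computes provenance polynomials for the query Q(x) :- R(x, y), R(y, x): joint use of two facts multiplies their annotations, and alternative derivations of the same answer are added.

    # Provenance polynomials for Q(x) :- R(x, y), R(y, x): joint use of two
    # facts multiplies their annotations, alternative derivations are added.
    R = {("a", "b"): "p", ("b", "a"): "q", ("a", "a"): "r"}  # tuple -> annotation

    prov = {}
    for (x, y), s in R.items():
        t = R.get((y, x))
        if t is not None:  # the pair R(x, y), R(y, x) derives the answer x
            term = "(%s*%s)" % (s, t)
            prov[x] = prov[x] + " + " + term if x in prov else term

    print(prov)  # {'a': '(p*q) + (r*r)', 'b': '(q*p)'}

Specializing the polynomial (e.g., mapping annotations to probabilities, costs, or clearance levels) recovers the different applications listed above.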

Tutorial 4: Enumeration
Nicole Schweikardt (Humboldt-Universität zu Berlin, Germany) Slides

Query evaluation is one of the central tasks of a database system. The theoretical foundations of query evaluation rely on a close connection between database theory and mathematical logic, as relational databases correspond to finite relational structures, and queries can be formulated as logical formulae. Starting with Durand and Grandjean's 2007 paper, the fields of logic in computer science and database theory have seen a number of contributions that deal with the efficient enumeration of query results. In this scenario, the objective is as follows: given a structure (i.e., a database) and a logical formula (i.e., a query), after a short precomputation phase, the query results shall be generated one by one, without repetition, with guarantees on the maximum delay between the output of two tuples. In this vein, the best one can hope for is constant delay (i.e., the delay may depend on the size of the query but not on that of the database) and linear preprocessing time. By now, quite a number of query evaluation problems are known to admit constant-delay algorithms preceded by linear or pseudo-linear time preprocessing. In this tutorial, I will give an overview of results and proof techniques (algorithms as well as conditional lower bounds) concerning the efficient enumeration of query results.
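
A minimal sketch of this regime, for an invented example query Q(x, y) :- R(x), S(y): preprocessing takes linear time (copying the two relations), and a generator then emits each answer exactly once with constant work between consecutive outputs.

    # The enumeration regime for Q(x, y) :- R(x), S(y): linear-time
    # preprocessing materializes the two lists, and the generator then emits
    # each answer exactly once with constant work between consecutive outputs.
    def preprocess(R, S):
        return list(R), list(S)

    def answers(R_list, S_list):
        for x in R_list:
            for y in S_list:
                yield (x, y)  # constant delay between two yields

    R_list, S_list = preprocess({1, 2}, {"a", "b"})
    for tup in answers(R_list, S_list):
        print(tup)

The hard part, of course, is achieving such guarantees for queries with joins, where the answers cannot simply be read off a product of lists.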

Tutorial 5: Probabilistic Databases
Antoine Amarilli (Télécom Paris, France) Slides Video

This lecture surveys theoretical research on probabilistic databases, i.e., databases in which facts are uncertain and carry a probability of existence. We will present probabilistic database models and focus on the fundamental problem of query answering over tuple-independent databases. We will see how to solve this problem efficiently via the extensional ("in-database") approach, and via the intensional approach, which has connections to provenance computation and knowledge compilation. We will also review complexity lower bounds on the problem and present the dichotomy result by Dalvi and Suciu. We will present ongoing research directions and open problems, e.g., the study of more general query languages, approximate probability computation, restrictions on the database or on the probabilities, open-world databases, and continuous distributions. We will also present connections to neighboring fields and applications, such as graphical models, probabilistic programming, and machine learning.
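
To make the setting concrete, here is a brute-force (exponential, illustration-only) Python computation of the probability of the Boolean query Q :- R(x), S(x) on a tiny invented tuple-independent database, by summing the weights of all satisfying possible worlds; since this query is safe, the extensional approach computes the same value by a simple closed formula, noted in the comment.

    # Brute-force probability of Q :- R(x), S(x) over a tuple-independent
    # database: sum the weights of the possible worlds that satisfy Q.
    from itertools import product

    R = {"a": 0.5, "b": 0.8}  # tuple -> marginal probability
    S = {"a": 0.4, "b": 0.9}
    facts = [("R", t, p) for t, p in R.items()] + [("S", t, p) for t, p in S.items()]

    total = 0.0
    for world in product([True, False], repeat=len(facts)):
        weight, present = 1.0, set()
        for keep, (rel, t, p) in zip(world, facts):
            weight *= p if keep else 1 - p
            if keep:
                present.add((rel, t))
        if any(("R", t) in present and ("S", t) in present for t in R):
            total += weight

    # Extensionally, since the query is safe:
    # 1 - (1 - 0.5*0.4) * (1 - 0.8*0.9) = 0.776, matching the sum below.
    print(total)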

Tutorial 6: Consistent Query Answering
Jef Wijsen (Université de Mons, Belgium) Slides Video

Ideally, database instances should respect all integrity constraints imposed on them. Nevertheless, violations of integrity constraints often occur in practice. It is therefore relevant to develop theories of how to handle database instances that violate some integrity constraints and, more particularly, how to cope with query answering in the presence of inconsistency. One such theory, developed over the past twenty years, is known as “consistent query answering” (CQA). The aim of this tutorial is to summarize and discuss some core concepts and theoretical developments in CQA.
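
A toy illustration of the repair semantics (invented data; subset repairs under a primary key; the exponential enumeration is for illustration only): a repair keeps exactly one fact per key value, and an answer is consistent if it holds in every repair.

    # Consistent query answering under the key ssn -> name.
    from collections import defaultdict
    from itertools import product

    R = [("123", "Alice"), ("123", "Alicia"), ("456", "Bob")]  # key 123 violated

    groups = defaultdict(list)
    for ssn, name in R:
        groups[ssn].append((ssn, name))

    # One repair per way of choosing a single fact from each key group.
    repairs = [set(choice) for choice in product(*groups.values())]
    query = lambda db: {name for ssn, name in db}  # Q(n) :- R(s, n)
    print(set.intersection(*(query(rep) for rep in repairs)))  # {'Bob'}

Bob is a consistent answer because he survives every repair; Alice and Alicia are not, since each is absent from some repair.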

Tutorial 7: Quantitative Reasoning about Constraint Violations
Benny Kimelfeld (Technion, Israel) Slides Video

The tutorial will cover various aspects of quantitative reasoning about database constraints and inconsistency, focusing on theoretical research from recent years. The prominent example of constraints in the tutorial will be the classic functional dependencies. I will discuss inconsistency measures for databases, such as the basic ones based on repair counting and repair minimization. The need for inconsistency measurement arises in various scenarios, including reliability estimation for datasets and progress indication in data cleaning. Such measurement is also needed when we wish to quantify the responsibility of individual database components for the overall inconsistency. As in many fields, a conventional responsibility-sharing mechanism is the Shapley value from cooperative game theory. We will recall recent results on the computational complexity of the Shapley value of database tuples with respect to different inconsistency measures. Finally, we will talk about some recent work on database repairing with soft constraints and discuss the connection to computational challenges that arise in probabilistic databases.
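
As a small illustration of responsibility sharing (invented data and a deliberately simple measure), the following brute-force Python sketch computes the Shapley value of each fact with respect to the measure "number of pairs violating the functional dependency ssn -> name".

    # Shapley value of each fact for the inconsistency measure "number of
    # pairs violating the FD ssn -> name" (brute force over all permutations).
    from itertools import combinations, permutations

    facts = [("123", "Alice"), ("123", "Alicia"), ("456", "Bob")]

    def violations(db):
        return sum(1 for f, g in combinations(db, 2)
                   if f[0] == g[0] and f[1] != g[1])

    perms = list(permutations(facts))
    shapley = {f: 0.0 for f in facts}
    for order in perms:
        prefix = []
        for f in order:  # marginal contribution of f to the measure
            before = violations(prefix)
            prefix.append(f)
            shapley[f] += (violations(prefix) - before) / len(perms)

    print(shapley)  # the two conflicting facts share the blame 0.5/0.5; Bob gets 0.0

The complexity results in the tutorial concern exactly when such values can be computed without this exponential averaging over permutations.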

Tutorial 8: Ontology-Mediated Query Answering
Carsten Lutz (Universität Bremen, Germany) Slides Video

Today, data is often incomplete and heterogeneous in representation. Think for instance about data on the web or about data that has been collected from several different sources in a data integration effort. Ontology-mediated querying is a paradigm emerging from AI in which an ontology is used to enrich data with domain knowledge, alleviating incompleteness, and to bridge heterogeneous vocabularies. In this tutorial, I will present three topics in ontology-mediated querying. In the first part, I will consider ontologies formulated in the ontology language ALC, survey the expressive power of the resulting ontology-mediated queries, and establish a tight connection to the world of constraint satisfaction problems (CSPs). In the second part, I will look at guarded existential rules as the ontology language and present results on the efficient enumeration of answers. And in the third part, I will join these two strands by considering approximations of ontology-mediated queries based on ALC in terms of ontology-mediated queries based on guarded existential rules.
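
As a first intuition (an invented toy example covering only plain concept inclusions; ALC in general requires more than forward chaining because of disjunction and existential quantification), one can saturate the data with the ontology and then evaluate the query over the enriched database.

    # Saturate the data with the rule Prof(x) -> Employee(x), then evaluate
    # the query Employee(x) over the enriched database.
    data = {("Prof", "anna"), ("Employee", "bob")}
    rules = [("Prof", "Employee")]

    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for concept, x in list(data):
                if concept == body and (head, x) not in data:
                    data.add((head, x))
                    changed = True

    print({x for concept, x in data if concept == "Employee"})  # {'anna', 'bob'}

Here the ontology alleviates incompleteness: anna was never stored as an Employee, yet she is a certain answer to the query.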

Tutorial 9: Ontology-Based Data Access Made Practical
Diego Calvanese (Free University of Bozen-Bolzano, Italy; Ontopic s.r.l., Italy; Umeå University, Sweden) Slides Video

In a variety of applications that make use of complex data assets, getting insights into data sources is crucial for decision making, but often challenging. The reason is that it typically requires combining information coming from different sources, and then making sense of the combined data via sophisticated analysis methods that require complex queries. These difficulties can be addressed by relying on Ontology-Based Data Access (OBDA), also known as Virtual Knowledge Graphs (VKGs). In OBDA, one obtains a uniform representation of the content of data sources in terms of a knowledge graph. Such a graph stays virtual, however, and is accessed by formulating queries over an ontology, which in turn is connected to the data sources via declarative mappings. In this tutorial, we discuss practical challenges that are encountered when devising an OBDA solution in complex real-world scenarios, and how these challenges can be met. To do so, we rely, on the one hand, on a methodological approach for designing "well-behaved" mappings; on the other hand, we discuss optimization techniques that improve the performance of query processing in OBDA.
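
A minimal sketch of the virtual flavor of OBDA, with invented names and a Python closure standing in for a SQL source query: the declarative mapping produces triples on demand instead of materializing the knowledge graph.

    # A declarative mapping produces triples on demand from relational rows,
    # so the knowledge graph stays virtual (the lambda stands in for a SQL query).
    rows = [{"id": 7, "name": "Mole Antonelliana", "city": "Turin"}]

    mapping = {
        "source": lambda: rows,
        "subject": lambda r: ":landmark/%d" % r["id"],
        "triples": [(":name", "name"), (":locatedIn", "city")],
    }

    def virtual_triples(m):
        for r in m["source"]():
            s = m["subject"](r)
            for predicate, column in m["triples"]:
                yield (s, predicate, r[column])

    print(list(virtual_triples(mapping)))

In a real OBDA system, queries over the ontology are rewritten and unfolded through such mappings into SQL, rather than iterating over rows in application code.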

Tutorial 10: Computational Fact Checking
Paolo Papotti (EURECOM, France) Slides Video

Misinformation is an important problem, but fact checkers are overwhelmed by the amount of false content that is produced online every day. To assist human experts in their efforts, several ongoing projects are proposing computational methods to support the different steps of the fact-checking pipeline, from claim detection to claim verification. In the first part of the lecture, we will give an overview of the approaches for these steps, spanning from solutions involving humans and crowds of users to fully automated approaches. In the second part, we will focus on the data-driven verification methods that use reference information to assess claims. We will review methods that combine solutions from the ML and NLP literature to build data-driven verification, such as those that translate textual claims into SQL queries on relational databases. We will also cover how the rich semantics in knowledge graphs (KGs) can be used to verify claims and produce explanations, which is a key requirement in this space. Better access to data and new algorithms are pushing computational fact checking forward, with experimental results showing that verification methods enable effective labeling of claims, both in simulations and in real-world efforts such as https://coronacheck.eurecom.fr. However, while fact checkers are starting to adopt some of the resulting tools, the fight against misinformation is far from won. In the last part of the lecture, we will cover the opportunities and limitations of computational fact checking and its role in fighting misinformation.
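
As a toy illustration of data-driven verification (the claim, the table, and the hard-coded claim-to-SQL translation are all invented; real systems learn this translation), the following Python snippet checks a textual claim against a reference table using sqlite3.

    # Check a textual claim against a reference table via a (hard-coded)
    # translation of the claim into SQL; real systems learn this translation.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE cases(country TEXT, year INT, total INT)")
    conn.execute("INSERT INTO cases VALUES ('France', 2021, 100), ('Italy', 2021, 80)")

    claim = "France had more cases than Italy in 2021"
    sql = """SELECT (SELECT total FROM cases WHERE country = 'France' AND year = 2021)
                  > (SELECT total FROM cases WHERE country = 'Italy' AND year = 2021)"""
    print(claim, "->", bool(conn.execute(sql).fetchone()[0]))  # -> True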

Tutorial 11: Data Quality
Floris Geerts (University of Antwerp, Belgium) Slides

Data quality is one of the most important problems in data management. Indeed, the presence of dirty data may lead to misleading or biased analytical decisions and to loss of revenue, credibility, and customers, among other things. Since data is often too big to be manually cleaned or curated, computational methods are required for detecting inconsistent, inaccurate, incomplete, duplicate, or stale data, and for repairing the data either automatically or by leveraging user input. After surveying different types of dirty relational data, we provide an overview of declarative data cleaning approaches, in which various logical formalisms are used to detect different kinds of dirty data. These formalisms, in combination with different repair models, lead to practical algorithms for repairing dirty data. To conclude, we describe various recent approaches that integrate machine learning techniques into declarative data cleaning.
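
A minimal sketch of declarative detection and repair (invented data; majority voting is a deliberately simplistic repair model): rows violating the functional dependency zip -> city are flagged and then repaired.

    # Detect rows violating the FD zip -> city, then repair by majority vote
    # per zip (a deliberately simplistic repair model).
    from collections import Counter, defaultdict

    rows = [("2000", "Antwerp"), ("2000", "Antwerpen"), ("2000", "Antwerp")]

    by_zip = defaultdict(list)
    for z, c in rows:
        by_zip[z].append(c)

    print("violating zips:", {z for z, cs in by_zip.items() if len(set(cs)) > 1})

    repaired = [(z, Counter(by_zip[z]).most_common(1)[0][0]) for z, _ in rows]
    print(repaired)  # every zip now maps to a single city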