20th International Conference on Principles of Knowledge Representation and Reasoning, KR 2023, September 2-8, 2023, Rhodes, Greece

Invited Talks

On the need of semantics when tackling Knowledge Graph completion under a Machine Learning perspective

Claudia d’Amato
University of Bari
http://www.di.uniba.it/~cdamato/

Slides: https://kr.org/KR2023/InvitedTalkSlides/Claudiad’Amato.pdf

Knowledge Graphs (KGs) are receiving increasing attention from both academia and industry, as they represent a source of structured knowledge of unprecedented size to be exploited in a multitude of application domains as well as research fields. Nevertheless, despite their widespread use, it is well known that KGs suffer from incompleteness and noise, since they often result from a complex building process. As such, considerable research effort is currently devoted to improving the coverage and quality of existing KGs. For this purpose, numeric-based Machine Learning (ML) solutions are generally adopted, given their proven ability to scale to very large KGs. Numeric-based approaches mostly focus on the graph structure, and their results generally consist of series of numbers without any obvious human interpretation, which can affect the interpretability, the explainability and sometimes the trustworthiness of the results. Nevertheless, KGs may also rely on expressive representation languages, e.g. RDFS and OWL, which are endowed with deductive reasoning capabilities. However, both expressiveness and reasoning are most of the time disregarded by the majority of the numeric methods developed so far, thus losing knowledge that is already available. In this talk, the role and added value that semantics may have for ML solutions will be argued, and research directions on empowering ML solutions by injecting background knowledge will be presented, together with an analysis of the most urgent issues that need to be solved.
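
As a concrete illustration of what such purely numeric representations look like, the following minimal sketch (not taken from the talk) scores candidate triples with a translational embedding model in the style of TransE; the toy entities and the random vectors are hypothetical, and no schema-level knowledge (e.g. RDFS/OWL axioms) enters the score.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50

    # Hypothetical toy vocabulary; in practice these vectors would be learned from KG triples.
    entities = {name: rng.normal(size=dim) for name in ["Rome", "Italy", "Paris"]}
    relations = {name: rng.normal(size=dim) for name in ["capitalOf"]}

    def transe_score(head, relation, tail):
        """Plausibility of the triple (head, relation, tail): higher is better.
        A translational model treats a true triple as head + relation ~ tail."""
        h, r, t = entities[head], relations[relation], entities[tail]
        return -np.linalg.norm(h + r - t)

    # The output is just a number, with no obvious human interpretation and no use of the ontology.
    print(transe_score("Rome", "capitalOf", "Italy"))
    print(transe_score("Paris", "capitalOf", "Italy"))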

How to Make Logics Neurosymbolic

Luc de Raedt
KU Leuven
https://wms.cs.kuleuven.be/people/lucderaedt/

Slides: https://kr.org/KR2023/InvitedTalkSlides/LucDeRaedt.pdf

Neurosymbolic AI (NeSy) is regarded as the third wave in AI. It aims at combining knowledge representation and reasoning with neural networks. Numerous approaches to NeSy are being developed, and there exists an ‘alphabet soup’ of different systems whose relationships are often unclear. I will discuss the state of the art in NeSy and argue that there are many similarities with statistical relational AI (StarAI).

Taking inspiration from StarAI, and exploiting these similarities, I will argue that Neurosymbolic AI = Logic + Probability + Neural Networks. I will also provide a recipe for developing NeSy approaches: start from a logic, add a probabilistic interpretation, and then turn neural networks into ‘neural predicates’. Probability is interpreted broadly here, and is necessary to provide a quantitative and differentiable component to the logic. At the semantic and computational levels, one can then combine logical circuits (a kind of proof structure) labeled with probabilities and neural networks in computation graphs.

I will illustrate the recipe with NeSy systems such as DeepProbLog, a deep probabilistic extension of Prolog, and DeepStochLog, a neural network extension of stochastic definite clause grammars (or stochastic logic programs).
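
As a minimal sketch of the recipe (not the actual DeepProbLog API or syntax), the Python snippet below mimics the MNIST-addition setting commonly used with DeepProbLog: two stand-in networks play the role of a neural predicate digit/2, and the probability of addition(X, Y, Z) is obtained by summing over the logical proofs, keeping everything differentiable with respect to the network outputs.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Stand-ins for two neural networks classifying two digit images (0-9).
    # In a real NeSy system these logits would come from a CNN; here they are random.
    rng = np.random.default_rng(1)
    p_digit_x = softmax(rng.normal(size=10))   # neural predicate digit(img_x, D)
    p_digit_y = softmax(rng.normal(size=10))   # neural predicate digit(img_y, D)

    def p_addition(z):
        """Probability of addition(img_x, img_y, z): sum over all proofs,
        i.e. all digit pairs (d1, d2) with d1 + d2 = z, assuming independence."""
        return sum(p_digit_x[d1] * p_digit_y[d2]
                   for d1 in range(10) for d2 in range(10) if d1 + d2 == z)

    # A loss on p_addition(label) can train the digit classifiers
    # without any digit-level supervision.
    print(p_addition(7))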

Knowledge Representation in the Languages of Logic Programs under Answer Set Semantics

Michael Gelfond
Texas Tech University
http://redwood.cs.ttu.edu/~mgelfond/

Slides: https://kr.org/KR2023/InvitedTalkSlides/MichaelGelfond.pdf

In this presentation I will talk about several classical problems of knowledge representation and their solutions given in the language of logic programs under the answer set semantics and its extensions. These include the formalization of defaults and their exceptions, the development of theories of action and change, solutions to planning and diagnostic problems, the combination of logical and probabilistic reasoning, and more. The list is far from complete: I will only talk about developments in which I had some degree of personal involvement. I will also briefly describe the role of these ideas in industrial applications, and outline several important open problems.
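
As one concrete instance of defaults and exceptions under the answer set semantics, the textbook ‘birds normally fly’ encoding is shown below, wrapped in the clingo Python API as a sketch; the specific program and constants are illustrative only.

    import clingo  # pip install clingo

    # Default: birds fly unless they are abnormal; penguins are the exception.
    program = """
    bird(tweety). bird(sam). penguin(sam).
    flies(X) :- bird(X), not ab(X).
    ab(X) :- penguin(X).
    """

    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    # The unique answer set contains flies(tweety) but not flies(sam).
    ctl.solve(on_model=lambda model: print(model))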

ASP in Industry, here and there

Torsten Schaub
University of Potsdam
https://www.cs.uni-potsdam.de/~torsten/

Slides: https://kr.org/KR2023/InvitedTalkSlides/TorstenSchaub.pdf

Answer Set Programming (ASP) has become a popular paradigm for declarative problem solving and is about to find its way into industry. This is due to its expressive yet easy-to-use knowledge representation language, powered by highly performant (Boolean) solving technology. As with many other such paradigms before, the transition from academia to industry calls for more versatility. Hence, many real-world applications are tackled not by pure ASP but rather by hybrid ASP. The corresponding ASP systems are usually augmented with foreign language constructs from which additional inferences can be drawn; examples include linear equations or temporal formulas. For the design of “sound” systems, however, it is indispensable to provide semantic underpinnings right from the start. To this end, we will discuss the vital role of ASP’s logical foundations, the logic of Here-and-There and its non-monotonic extension, Equilibrium Logic, in designing hybrid ASP systems, and highlight some of the resulting industrial applications.
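
For reference, the standard definitions behind the logic of Here-and-There and Equilibrium Logic (following Pearce) can be summarized compactly in LaTeX notation; this is a sketch of the usual textbook formulation.

    % An HT-interpretation is a pair <H,T> of atom sets with H \subseteq T.
    \langle H,T\rangle \models a \iff a \in H \quad (a \text{ an atom})
    \langle H,T\rangle \models \varphi \to \psi \iff
        T \models \varphi \to \psi \text{ classically, and }
        \langle H,T\rangle \models \varphi \text{ implies } \langle H,T\rangle \models \psi
    % \wedge and \vee are evaluated as usual; \neg\varphi abbreviates \varphi \to \bot.
    % <T,T> is an equilibrium model of a theory \Gamma iff <T,T> \models \Gamma and
    % there is no H \subsetneq T with <H,T> \models \Gamma.
    % The answer sets of a logic program correspond to its equilibrium models.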

Reasoning about reasoning about reasoning: from logic to the lab

Rineke Verbrugge
University of Groningen
https://rinekeverbrugge.nl/

Slides: https://kr.org/KR2023/InvitedTalkSlides/RinekeVerbrugge.pdf

When engaging in social interaction, people rely on their ability to reason about other people’s mental states, including their goals, intentions, knowledge and beliefs. This theory of mind ability allows them to understand, predict, and even manipulate the behavior of others. People can also use their theory of mind recursively, which allows them to understand sentences like “Alice believes that Bob does not know that she wrote a novel under a pseudonym”. In the current era of hybrid intelligence, teams may consist of humans, robots and software agents. For better coordination, it would be beneficial if the computational members of the team could recursively reason about the minds of their human colleagues.
While the usefulness of higher orders of theory of mind is apparent in many social interactions, empirical evidence so far suggests that people usually do not use this recursive ability spontaneously, even when doing so would be highly beneficial. In this lecture, we discuss some of our computational modelling research and empirical experiments. How do children develop second-order theory of mind? Can we entice adults to engage in higher-order theory of mind reasoning by letting them play games against computational agents? What’s logic got to do with reasoning about reasoning about reasoning? Do corvids have any theory of mind? And how about ChatGPT?