Keynote Speakers

Meghyn Bienvenu

LaBRI - CNRS & University of Bordeaux, France

KR Meets Data Quality

Abstract:
Real-world data notoriously suffers from various forms of imperfections (missing facts, erroneous facts, duplicates, etc.), which can limit its utility and lead to flawed analyses and unreliable decision making. This makes data quality an issue of paramount importance across application domains, and one which I'll argue can both benefit from KR research and serve as a testbed for KR techniques. Indeed, while recent years have seen increasing interest in machine learning-based approaches, declarative approaches to improving data quality remain highly relevant, due to their better interpretability. In this talk, I will illustrate the synergy between data quality and KR by giving an overview of some of my recent work on querying inconsistent data using repair-based semantics and on rule-based approaches to entity resolution, highlighting the insights gained and directions for future research.
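
To make the repair-based semantics concrete, here is a minimal sketch of consistent query answering under the widely used AR semantics: a repair is a maximal subset of the data that satisfies the integrity constraints, and an answer is certain exactly when it holds in every repair. The toy schema, key constraint, and query are illustrative assumptions, not examples from the talk.

```python
# Brute-force consistent query answering under AR (repair) semantics.
from itertools import combinations

# emp(name, dept): inconsistent, since "ann" violates the key on name.
facts = [("ann", "sales"), ("ann", "hr"), ("bob", "hr")]

def consistent(db):
    """Key constraint: each name maps to at most one department."""
    names = [name for name, _ in db]
    return len(names) == len(set(names))

def repairs(db):
    """All maximal consistent subsets (brute force; toy instances only)."""
    subsets = [set(s) for k in range(len(db) + 1)
               for s in combinations(db, k) if consistent(s)]
    return [s for s in subsets if not any(s < t for t in subsets)]

# Query: which names certainly work in "hr"?  Under AR semantics, the
# certain answers are those returned in *every* repair.
certain = set.intersection(
    *({name for name, dept in r if dept == "hr"} for r in repairs(facts)))
print(certain)  # {'bob'}: "ann" works in hr in some repairs, but not all
```

The exponential enumeration is only meant to illustrate the semantics; a central question in the work surveyed in the talk is when certain answers can be computed efficiently without materializing the repairs.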

Bio:
Meghyn Bienvenu is a senior researcher (directrice de recherche) at the CNRS (French National Center for Scientific Research), based at the LaBRI research lab in Bordeaux, France. Her research interests span a range of topics in knowledge representation and reasoning and database theory, but she is best known for her contributions to ontology-mediated query answering and to the study of logic-based methods for handling inconsistent data. Bienvenu's research has been recognized by an invited Early Career Spotlight talk at IJCAI'16, the 2016 CNRS Bronze Medal in computer science, and, together with her coauthors, a 2023 ACM PODS Alberto Mendelzon Test-of-Time Award. She has taken on numerous responsibilities within the AI, KR, and database theory communities, notably serving as PC co-chair of KR 2021 and as an associate editor of the Artificial Intelligence Journal.

Subbarao Kambhampati

Arizona State University, USA

Can LLMs Really Reason & Plan?

Slides
Audio

Abstract:
Large Language Models (LLMs) are on track to reverse what seemed like an inexorable shift of AI from explicit to tacit knowledge tasks. Trained as they are on everything ever written on the web, LLMs exhibit "approximate omniscience" — they can provide answers to all sorts of queries, but with nary a guarantee. This could herald a new era for knowledge-based AI systems — with LLMs taking the role of (blowhard?) experts. But first, we have to stop confusing the impressive form of the generated knowledge for correct content, and resist the temptation to ascribe reasoning, planning, self-critiquing etc. powers to approximate retrieval by these n-gram models on steroids. We have to focus instead on LLM-Modulo techniques that complement the unfettered idea generation of LLMs with careful vetting by model-based AI systems. In this talk, I will reify this vision and attendant caveats in the context of the role of LLMs in planning tasks.
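
As one concrete reading of the LLM-Modulo idea, the sketch below shows the generate-and-verify loop it implies: the LLM is used purely as an idea generator, while a sound, model-based verifier supplies the guarantees and feeds critiques back. The random proposer and string-swapping "planning" task are illustrative stand-ins of my own, not an API or benchmark from the talk.

```python
# A minimal, runnable sketch of an LLM-Modulo-style loop: unfettered
# generation, careful model-based vetting.  A random proposer stands in
# for the LLM and a simulator for a formal plan validator.
import random

START, GOAL = "ABC", "CBA"          # toy task: reverse the string
ACTIONS = [(0, 1), (1, 2), (0, 2)]  # swap the letters at these positions

def simulate(state: str, plan: list) -> str:
    """Model-based execution: apply each swap action in order."""
    s = list(state)
    for i, j in plan:
        s[i], s[j] = s[j], s[i]
    return "".join(s)

def propose(feedback):
    """Stand-in for the LLM: free-wheeling (here, random) idea generation.
    A real proposer would condition on the verifier's critique."""
    return [random.choice(ACTIONS) for _ in range(random.randint(1, 3))]

def verify(plan):
    """Stand-in for a sound verifier: None iff the plan achieves the goal."""
    result = simulate(START, plan)
    return None if result == GOAL else f"plan reaches {result}, not {GOAL}"

def llm_modulo(max_rounds: int = 1000):
    feedback = None
    for _ in range(max_rounds):
        plan = propose(feedback)    # the generator carries no guarantees...
        feedback = verify(plan)     # ...the guarantees come from the vetting
        if feedback is None:
            return plan             # provably valid w.r.t. the model
    return None

print(llm_modulo())  # e.g. [(0, 2)]: a plan the verifier has certified
```

The division of labour is the point: correctness is never ascribed to the generator, only to the external model-based check.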

Bio:
Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM), and was an NSF Young Investigator. He served as the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, the chair of AAAS Section T (Information, Communication and Computation), and a founding board member of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Nina Narodytska

VMware Research by Broadcom, USA

Logic-based Explainability of ML Models

Abstract:
Machine learning models are among the most successful artificial intelligence technologies, making an impact in a variety of practical applications. However, many concerns have been raised about the 'magical' power of these models, and it is troubling that we clearly lack an understanding of the decision-making processes behind this technology. A natural question, therefore, is whether we can trust the decisions that neural networks make. There is a large body of research on explainability. One popular family of methods, so-called ad-hoc explainability methods, uses heuristic-based solutions. These methods are among the most practical due to their scalability, but they do not provide any guarantees on the quality of explanations. To address this issue, we propose a formal approach in which explainability is formalized as a logical problem and solved using optimization tools such as SMT, SAT, and ILP solvers. Using these techniques, we are able to compute provably correct explanations for smaller ML models. We consider several techniques for scaling logic-based methods to larger ML models, and we will highlight an interesting connection between the explainability and the robustness of ML models.
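
To illustrate the logic-based formulation, the sketch below computes a provably sufficient, subset-minimal (abductive) explanation for a toy Boolean classifier using a SAT solver. It assumes the `python-sat` package; the classifier, instance, and deletion-based minimization are illustrative choices of mine, not the specific method of the talk.

```python
# Subset-minimal abductive explanation for a toy Boolean classifier,
# certified by SAT calls (assumes: pip install python-sat).
from pysat.solvers import Glucose3

# Toy classifier over features x1, x2, x3: predict 1 iff (x1 AND x2) OR x3.
# Encode "prediction = 0" as CNF, so a SAT call asks: can the class flip?
#   NOT((x1 & x2) | x3)  ==  (-x1 | -x2) & (-x3)
NEGATED_CLASSIFIER = [[-1, -2], [-3]]

# Instance to explain: x1=1, x2=1, x3=0, classified as 1.
instance = {1: True, 2: True, 3: False}

def is_sufficient(fixed):
    """True iff fixing `fixed` to their instance values makes flipping the
    prediction UNSAT, i.e. those features alone entail the class."""
    with Glucose3(bootstrap_with=NEGATED_CLASSIFIER) as solver:
        assumptions = [f if instance[f] else -f for f in fixed]
        return not solver.solve(assumptions=assumptions)

# Deletion-based minimization: drop every feature that is not needed.
explanation = set(instance)
for feature in sorted(instance):
    if is_sufficient(explanation - {feature}):
        explanation.discard(feature)

print(sorted(explanation))  # [1, 2]: x1 and x2 alone entail the prediction
```

Each UNSAT answer is a proof that the remaining features entail the prediction, which is exactly the kind of guarantee heuristic methods do not offer.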

Bio:
Nina Narodytska is a staff researcher at VMware Research by Broadcom. Prior to VMware, she was a researcher at Samsung Research America. She completed postdoctoral studies at the Carnegie Mellon University School of Computer Science and at the University of Toronto, and received her PhD from the University of New South Wales. She was named one of "AI's 10 to Watch" researchers in 2013. She has presented invited talks and tutorials at FMCAD'18, CP'19, AAAI'20, IJCAI'20, LMML'22, CP'22, and ESSAI'23.

Sheila McIlraith & Murray Shanahan in conversation with Joe Halpern

University of Toronto, Canada & Imperial College London, UK & Cornell University, USA

Great Ideas from KR: drawing on the past to shape the future

Recording
Recording (Audio only)

Abstract:
Sheila McIlraith and Murray Shanahan, in conversation with Joe Halpern, will reflect on KR's noble intellectual legacy, drawing on some of their personal favourite ideas from the KR canon. They will also speculate on the role of KR in this era of large language models, including how KR will shape the future of AI.

Bios:
Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto, a Canada CIFAR AI Chair (Vector Institute), and an Associate Director and Research Lead at the Schwartz Reisman Institute for Technology and Society. McIlraith's research is in the area of AI sequential decision making, broadly construed, with a focus on human-compatible AI. McIlraith is a Fellow of the ACM and of the Association for the Advancement of Artificial Intelligence (AAAI). With her co-authors, McIlraith has been honoured with the 2011 SWSA Ten-Year Award, recognizing the highest-impact paper from the ISWC of ten years prior; the 2022 ICAPS Influential Paper Award, a ten-year test-of-time award; and, most recently, the 2023 IJCAI-JAIR Paper Prize, awarded annually to an outstanding paper published in JAIR in the preceding five years.

Murray Shanahan is a principal research scientist at Google DeepMind and Professor of Cognitive Robotics at Imperial College London. His publications span artificial intelligence, robotics, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He is active in public engagement, and was scientific advisor on the film Ex Machina. His books include “Embodiment and the Inner Life” (2010) and “The Technological Singularity” (2015).

Joseph (Joe) Halpern is the Joseph C. Ford Professor in the Computer Science Department at Cornell University. His major research interests are in reasoning about knowledge and uncertainty, security, distributed computation, decision theory, and game theory. He has coauthored six patents, three books (“Reasoning About Knowledge”, “Reasoning about Uncertainty”, and “Actual Causality”), and over 400 technical publications. Halpern is a Fellow of AAAI, AAAS (American Association for the Advancement of Science), the American Academy of Arts and Sciences, ACM, IEEE, the Game Theory Society, the National Academy of Engineering, and SAET (Society for the Advancement of Economic Theory). Among other awards, he received the Kampé de Fériet Award in 2016, the ACM SIGART Autonomous Agents Research Award in 2011, the Dijkstra Prize in 2009, the ACM/AAAI Newell Award in 2008, and the Gödel Prize in 1997, and was a Guggenheim Fellow in 2001-02 and a Fulbright Fellow in 2001-02 and 2009-10.