Keynote Speakers

Meghyn Bienvenu

LaBRI - CNRS & University of Bordeaux, France

More information to follow.

Subbarao Kambhampati

Arizona State University, USA

Can LLMs Really Reason & Plan?

Abstract:
Large Language Models (LLMs) are on track to reverse what seemed like an inexorable shift of AI from explicit to tacit knowledge tasks. Trained as they are on everything ever written on the web, LLMs exhibit "approximate omniscience": they can provide answers to all sorts of queries, but with nary a guarantee. This could herald a new era for knowledge-based AI systems, with LLMs taking the role of (blowhard?) experts. But first, we have to stop mistaking the impressive form of the generated knowledge for correct content, and resist the temptation to ascribe reasoning, planning, self-critiquing, and similar powers to approximate retrieval by these n-gram models on steroids. We have to focus instead on LLM-Modulo techniques that complement the unfettered idea generation of LLMs with careful vetting by model-based AI systems. In this talk, I will reify this vision and attendant caveats in the context of the role of LLMs in planning tasks.
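
To make the LLM-Modulo idea concrete, the sketch below shows the generate-and-verify loop such a framework suggests: the LLM acts as an idea generator, and a sound model-based verifier does the vetting, feeding errors back for the next round. This is an illustrative sketch, not the speaker's implementation; llm_propose and the toy blocks-world checker are invented stand-ins for a real LLM call and a proper plan validator.

from typing import List, Optional

def llm_propose(task: str, feedback: str) -> List[str]:
    # Hypothetical stub: in practice this would prompt an LLM with the
    # task and any verifier feedback, then parse a candidate plan.
    return ["unstack(b, a)", "putdown(b)", "pickup(a)", "stack(a, b)"]

def verify(plan: List[str]) -> Optional[str]:
    # Toy model-based check standing in for a sound validator (e.g. a
    # PDDL plan checker): every pickup/unstack must be followed by a
    # matching putdown/stack before the hand is used again.
    holding = False
    for step in plan:
        act = step.split("(")[0]
        if act in ("pickup", "unstack"):
            if holding:
                return f"hand not empty before '{step}'"
            holding = True
        elif act in ("putdown", "stack"):
            if not holding:
                return f"nothing held at '{step}'"
            holding = False
    return "plan ends while still holding a block" if holding else None

def llm_modulo(task: str, max_rounds: int = 5) -> Optional[List[str]]:
    feedback = ""
    for _ in range(max_rounds):
        plan = llm_propose(task, feedback)
        error = verify(plan)
        if error is None:
            return plan      # only verified plans are ever returned
        feedback = error     # the verifier's critique drives the next round
    return None

print(llm_modulo("swap blocks a and b"))

The key design point is that correctness guarantees come entirely from the verifier; the LLM's output is never trusted directly.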

Bio:
Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM), and was an NSF Young Investigator. He served as the president of AAAI, a trustee of the International Joint Conference on Artificial Intelligence (IJCAI), the chair of AAAS Section T (Information, Computing, and Communication), and a founding board member of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Nina Narodytska

VMware Research by Broadcom, USA

Logic-based Explainability of ML Models

Abstract:
Machine learning models are among the most successful artificial intelligence technologies, making an impact in a variety of practical applications. However, many concerns have been raised about the 'magical' power of these models: worryingly, we clearly lack an understanding of the decision-making process behind this technology. A natural question, therefore, is whether we can trust the decisions that neural networks make. There is a large body of research on explainability. One popular family of methods, so-called ad-hoc explainability methods, uses heuristic-based solutions. These methods are among the most practical due to their scalability, but they provide no guarantees on the quality of explanations. To address this issue, we propose a formal approach in which explainability is formalized as a logical problem and solved using reasoning and optimization tools such as SMT, SAT, and ILP solvers. Using these techniques, we are able to compute provably correct explanations for smaller ML models. We consider several techniques for scaling logic-based methods to larger ML models. We will also highlight an interesting connection between the explainability and robustness of ML models.
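
To give a flavour of the formal approach, here is a minimal sketch, assuming the Z3 SMT solver (pip install z3-solver), of computing a provably sufficient explanation for a toy boolean classifier. The classifier, instance, and deletion-based minimization below are illustrative assumptions, not the talk's actual method.

from z3 import Bools, And, Or, Solver, unsat

x1, x2, x3 = Bools("x1 x2 x3")
features = [x1, x2, x3]

# Toy classifier: predicts True iff (x1 AND x2) OR x3.
model_output = Or(And(x1, x2), x3)

# Concrete instance and its prediction.
instance = {x1: True, x2: True, x3: False}   # model predicts True
prediction = True

def is_sufficient(subset):
    # A feature subset is sufficient if fixing those features to the
    # instance's values makes a prediction flip logically impossible.
    s = Solver()
    s.add([f == instance[f] for f in subset])
    s.add(model_output != prediction)   # ask for a counterexample
    return s.check() == unsat

# Linear deletion: start from all features, drop any redundant one.
explanation = list(features)
for f in features:
    trial = [g for g in explanation if g is not f]
    if is_sufficient(trial):
        explanation = trial

print("Sufficient reason:", [str(f) for f in explanation])
# Prints ['x1', 'x2']: x3=False is redundant, since x1 and x2 being
# True already entails the prediction. Unlike a heuristic attribution,
# this explanation comes with a proof of correctness from the solver.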

Bio:
Nina Narodytska is a staff researcher at VMware Research by Broadcom. Prior to VMware, she was a researcher at Samsung Research America. She completed postdoctoral studies at the Carnegie Mellon University School of Computer Science and the University of Toronto. She received her PhD from the University of New South Wales. She was named one of "AI's 10 to Watch" researchers in 2013. She has presented invited talks and tutorials at FMCAD'18, CP'19, AAAI'20, IJCAI'20, LMML'22, CP'22, and ESSAI'23.

Sheila McIlraith & Murray Shanahan

University of Toronto, Canada & Imperial College London, UK

More information to follow.

Bios:
Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto, a Canada CIFAR AI Chair (Vector Institute), and an Associate Director and Research Lead at the Schwartz Reisman Institute for Technology and Society. McIlraith's research is in the area of AI sequential decision making, broadly construed, with a focus on human-compatible AI. McIlraith is a Fellow of the ACM and the Association for the Advancement of Artificial Intelligence (AAAI). With co-authors, McIlraith has been honoured with the 2011 SWSA Ten-Year Award, recognizing the highest-impact paper from the ISWC ten years prior; the 2022 ICAPS Influential Paper Award, a ten-year test-of-time award; and, most recently, the 2023 IJCAI-JAIR Paper Prize, awarded annually to an outstanding paper published in JAIR in the preceding five years.

Murray Shanahan is a principal research scientist at Google DeepMind and Professor of Cognitive Robotics at Imperial College London. His publications span artificial intelligence, robotics, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He is active in public engagement and was a scientific advisor on the film Ex Machina. His books include “Embodiment and the Inner Life” (2010) and “The Technological Singularity” (2015).