Topics: Large Language Models, Reinforcement Learning, Variational Inference
Topics: Foundation Models, AI, Reasoning
Tony Cohn is Professor of Automated Reasoning at the University of Leeds. He holds BSc and PhD degrees from the University of Essex, where he studied under Pat Hayes. He spent 10 years at the University of Warwick before moving to Leeds in 1990, where he founded a research group working on knowledge representation and reasoning, with a particular focus on qualitative spatial and spatio-temporal reasoning. His best-known contribution is the widely cited Region Connection Calculus (RCC); the KR-92 paper describing RCC won the 2020 KR Test-of-Time Award. He was awarded the 2021 Herbert A. Simon Prize for Advances in Cognitive Systems for his research on qualitative representation and reasoning about space and time, cognitive vision and robotics, and visually grounded language processing.
He is Editor-in-Chief of Spatial Cognition and Computation and has served as Chairman/President of the UK AI society SSAISB, the European Association for Artificial Intelligence (EurAI), KR Inc., and the IJCAI Board of Trustees. He was Editor-in-Chief of Artificial Intelligence from 2007 to 2014 and of the AAAI Press from 2004 to 2014. He remains a Director of KR Inc.
He is the recipient of the 2015 IJCAI Donald E. Walker Distinguished Service Award, which honours senior scientists in AI for contributions and service to the field during their careers, as well as the 2012 AAAI Distinguished Service Award for "extraordinary and sustained service to the artificial intelligence community". He is a Fellow of the Royal Academy of Engineering, and is also a Fellow of AAAI, AISB, EurAI (Founding Fellow), AAIA, the BCS, and the IET. He was a member of Sub-panel 11 (Computer Science and Informatics) of Panel B in the UK Research Excellence Framework (REF) 2014.
Foundation Models such as GPT/ChatGPT have recently become very popular, and many claims have been made about their abilities, including for commonsense reasoning. In this talk I will present investigations into the extent to which such claims hold, for the particular case of spatial reasoning. I will also discuss benchmarks for commonsense reasoning and a schema for categorising evaluation instruments such as benchmarks and competitions, and I will argue for a shift away from aggregated metrics and towards reporting granular performance breakdowns.
Topics: Foundation Models, Large Language Models, Deep Learning
Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). His team develops solutions for medical, legal, and general document understanding, which at their core build upon (large) pre-trained language models. He is also part of the Lamarr Institute and the OpenGPT-X project, in which he investigates various aspects of Foundation Models. Drawing on experience from more than 25 natural language understanding projects, he studies the effect of Foundation Models on the execution of NLU projects and the novel challenges and requirements that arise with them. He has published several papers on natural language processing and understanding, focusing on the creation of application-ready NLU systems and the integration of expert knowledge at various stages of solution design. Most recently he co-authored the book "Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media", published by Springer Nature.
Gerhard Paaß and Sven Giesselbach, Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media, Springer, May 2023.