Keynote Speakers

Gabriel Barth-Maron
DeepMind, London, UK

Topics

Large Language Models, Reinforcement Learning, Variational Inference

Biography

I am a Staff Research Engineer at Google DeepMind in London. My research interests span reinforcement learning, data-efficient learning, multimodal modeling, and training large-scale models. I’m also interested in building tools that accelerate the pace of research in machine learning and AI.

Among other contributions, I pioneered distributed reinforcement learning at DeepMind and in the wider academic community. Our papers Distributed Prioritized Experience Replay and Distributed Distributional Deterministic Policy Gradients (D4PG) helped demonstrate the effectiveness of distributed reinforcement learning. We developed and open-sourced Acme, Reverb, and Launchpad to make distributed RL easier.

Recently I have been working on extending transformers to multiple modalities. One example of this is Gato, a multi-modal, multi-task, multi-embodiment generalist policy. As part of Google DeepMind’s Gemini team I am working on the next generation of large-scale multimodal transformer models.

I hold a BA in mathematical economics and a ScM in computer science from Brown University.

Talk



Anthony G. Cohn
The Alan Turing Institute, UK

Topics

Foundation Models, AI, Reasoning

Biography

Tony Cohn is Professor of Automated Reasoning at the University of Leeds. He holds BSc and PhD degrees from the University of Essex, where he studied under Pat Hayes. He spent 10 years at the University of Warwick before moving to Leeds in 1990, where he founded a research group working on knowledge representation and reasoning, with a particular focus on qualitative spatial/spatio-temporal reasoning; the best-known result of this work is the widely cited Region Connection Calculus (RCC), and the KR-92 paper describing RCC won the 2020 KR Test-of-Time award. He was awarded the 2021 Herbert A. Simon Prize for Advances in Cognitive Systems for his research on qualitative representation and reasoning about space and time, cognitive vision and robotics, and visually grounded language processing.

He is Editor-in-Chief of Spatial Cognition and Computation and has served as Chairman/President of the UK AI society SSAISB, the European Association for Artificial Intelligence (EurAI), KR Inc., and the IJCAI Board of Trustees. He was Editor-in-Chief of the journal Artificial Intelligence (2007-2014) and of AAAI Press (2004-2014). He remains a Director of KR Inc.

He is the recipient of the 2015 IJCAI Donald E. Walker Distinguished Service Award, which honours senior scientists in AI for contributions and service to the field during their careers, as well as the 2012 AAAI Distinguished Service Award for "extraordinary and sustained service to the artificial intelligence community". He is a Fellow of the Royal Academy of Engineering, and is also a Fellow of AAAI, AISB, EurAI (Founding Fellow), AAIA, the BCS, and the IET. He was a member of the UK Research Excellence Framework (REF) 2014 Sub-panel 11 (Computer Science and Informatics) of Panel B.

Talk



Sven Giesselbach
Fraunhofer Institute - IAIS, Germany

Topics

Foundation Models, Large Language Models, Deep Learning

Biography

Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). His team develops solutions in the areas of medical, legal, and general document understanding which at their core build upon (large) pre-trained language models. He is also part of the Lamarr Institute and the OpenGPT-X project, in which he investigates various aspects of foundation models. Drawing on experience from more than 25 natural language understanding projects, he studies the effect of foundation models on the execution of NLU projects and the novel challenges and requirements that arise with them. He has published several papers on natural language processing and understanding, focusing on the creation of application-ready NLU systems and the integration of expert knowledge at various stages of the solution design. Most recently, he co-authored the book "Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media", published by Springer Nature.

Gerhard Paaß, Sven Giesselbach, Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media, Springer, May, 2023

https://link.springer.com/book/9783031231896

Talk