Speakers


Antonios Anastasopoulos

Assistant Professor of Computer Science at George Mason University & Associate Researcher at Archimedes, Athena Research Center
Presentation title: Machine Translation and Low-Resource NLP

Antonios Anastasopoulos is an Assistant Professor in Computer Science at George Mason University. He received his PhD in Computer Science from the University of Notre Dame and then did a postdoc at the Language Technologies Institute at Carnegie Mellon University. His research is on natural language processing with a focus on low-resource settings, endangered languages, and cross-lingual learning, and is currently funded by the National Science Foundation, the National Endowment for the Humanities, the DoD, Google, Amazon, Microsoft, and Meta.

Mohit Bansal

Professor in the Computer Science department at UNC Chapel Hill
Presentation title: Trustworthy Planning Agents for Collaborative Reasoning and Multimodal Generation

Dr. Mohit Bansal is the Parker Distinguished Professor in the Computer Science department at UNC Chapel Hill. He received his PhD from UC Berkeley in 2013 and his BTech from IIT Kanpur in 2008. His research expertise is in multimodal generative models, reasoning and planning agents, faithful language generation, and interpretable, efficient, and generalizable deep learning. He is an AAAI Fellow and a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), the IIT Kanpur Young Alumnus Award, the DARPA Director's Fellowship, the NSF CAREER Award, the Army Young Investigator Award (YIP), and outstanding paper awards at ACL, CVPR, EACL, COLING, CoNLL, and TMLR.

Eunsol Choi

Assistant professor of computer science and data science at New York University
Presentation title: Equipping LLMs for Interaction

Eunsol Choi is an assistant professor of computer science and data science at New York University. Her research spans natural language processing and machine learning, with a focus on interpreting and reasoning about text in dynamic real-world contexts. Prior to joining NYU, she was an assistant professor at the University of Texas at Austin and a visiting researcher at Google. She holds a Ph.D. in computer science and engineering from the University of Washington. She is a recipient of a Facebook research fellowship, a Google faculty research award, a Sony faculty award, and an outstanding paper award at EMNLP.

Raquel Fernández

Professor of Computational Linguistics & Dialogue Systems, University of Amsterdam
Presentation title: Multimodal NLP

Raquel Fernández is Professor of Computational Linguistics and Dialogue Systems at the Institute for Logic, Language & Computation, University of Amsterdam, where she leads the Dialogue Modelling Group. Raquel received her PhD from King's College London and, before moving to Amsterdam, held research positions at the University of Potsdam and Stanford University. Her research focuses on how language use is shaped by perception and social interaction. How are language and vision connected? What coordination strategies help us to communicate successfully with our dialogue partners? And how can answers to these questions lead to better NLP systems? Her group carries out research on these and related topics from an interdisciplinary perspective, at the interface of computational linguistics, cognitive science, and artificial intelligence.

Yulan He

Professor in Natural Language Processing at the Department of Informatics, King's College London
Presentation title: Encoder-Decoder Models

Yulan He is a Professor in Natural Language Processing at the Department of Informatics, King's College London. She currently holds a prestigious five-year UKRI Turing AI Fellowship. Her recent research has focused on addressing the limitations of Large Language Models (LLMs), aiming to enhance their reasoning capabilities, robustness, and explainability. She has published over 250 papers on topics such as the self-evolution of LLMs, mechanistic interpretability, and LLMs for educational assessment. She has received several prizes and awards for her research, including an SWSA Ten-Year Award and a CIKM Test-of-Time Award, and was recognised as an inaugural Highly Ranked Scholar by ScholarGPS.

Marie-Catherine de Marneffe

FNRS research associate and professor at UCLouvain
Presentation title: Human variation in NLP

Marie-Catherine de Marneffe is an FNRS research associate and professor at UCLouvain. She obtained her PhD under the supervision of Chris Manning at Stanford University and worked for 10 years in the Linguistics Department at The Ohio State University, first as an assistant and then as an associate professor. Her main research interests are in computational pragmatics, building models that capture what people infer “between the lines”. She is also one of the principal developers of the Universal Dependencies framework. Her research has been funded by Google Inc., the National Science Foundation, and the FNRS.

Ryan McDonald

AI consultant and investor
Presentation title: Classification

Ryan McDonald is an expert in machine learning and its application to NLP. He currently consults on a variety of efforts, from autonomous agents to scientific simulation. Prior to this, Ryan was the Chief Scientist at ASAPP, leading NLP and ML research focused on customer experience (CX) and enterprise applications, and before that he was a Research Scientist on the Language Team at Google for 15 years, where he helped build state-of-the-art NLP and ML technologies and push them to production. He received his PhD from the University of Pennsylvania, focusing on multilingual syntax. In 2023, his work on Universal Dependencies received the ACL 10-year Test-of-Time Award.

Preslav Nakov

Professor and Department Chair for NLP at the Mohamed bin Zayed University of Artificial Intelligence
Presentation title: Towards Truly Open, Language-Specific, Safe, Factual, and Specialized Large Language Models

Preslav Nakov is Professor and Department Chair for NLP at the Mohamed bin Zayed University of Artificial Intelligence. He is part of the core team at MBZUAI's Institute for Foundation Models that developed Jais, the world's best open-source Arabic-centric LLM; Nanda, the world's best open-weights Hindi model; Sherkala, the world's best open-weights Kazakh model; and LLM360, the first truly open LLM (open weights, open data, and open code). Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees. Formerly, he was PC chair of ACL 2022 and President of ACL SIGLEX. He is also a member of the editorial board of several journals, including Computational Linguistics, TACL, ACM TOIS, IEEE TASL, IEEE TAC, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and 250+ research papers. He received a Best Paper Award at ACM WebSci'2022, a Best Long Paper Award at CIKM'2020, a Best Resource Paper Award at EACL'2024, a Best Demo Paper Award (Honorable Mention) at ACL'2020, a Best Task Paper Award (Honorable Mention) at SemEval'2020, a Best Poster Award at SocInfo'2019, and the Young Researcher Award at RANLP’2011. He was also the first to receive the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. His research was featured by over 100 news outlets, including Reuters, Forbes, Financial Times, CNN, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget, among others.

Vlad Niculae

University of Amsterdam
Presentation title: Structured Prediction

Vlad Niculae is an assistant professor at the Language Technology Lab, University of Amsterdam. His research focuses on machine learning for natural language processing, with an emphasis on structure, sparsity, geometry, and optimization. Vlad was previously a postdoc in the Sardine lab in Lisbon and obtained his PhD from Cornell University, advised by Claire Cardie.

Anna Rogers

Associate Professor in the Computer Science Department at the IT University of Copenhagen
Presentation title: LLMs and Factuality

Anna Rogers is an Associate Professor in the Computer Science Department at the IT University of Copenhagen. Her main research area is Natural Language Processing, in particular analysis and evaluation of pre-trained language models. She is currently an editor-in-chief of ACL Rolling Review, the peer review platform used by major NLP conferences.

Co-Organizers


Athena Research Center
NCSR Demokritos
Athens University of Economics and Business (AUEB)
Heriot-Watt University
University of Manchester