
Large Language Models (LLMs) are revolutionizing multilingual natural language processing, yet their performance across languages remains uneven and fraught with challenges. This talk will explore recent research on both the limitations of and advances in multilingual LLMs, with a focus on ambiguity resolution, idiomatic language comprehension, and long-context reasoning. We will examine how factors beyond sheer data quantity affect model performance, and discuss promising strategies, including neurosymbolic reasoning and grammar prompting, for more robust language understanding. By synthesizing insights from recent studies, we aim to highlight pathways toward more effective and equitable multilingual language technologies.
Ameeta Agrawal's research focuses on Natural Language Processing (NLP) and Large Language Models (LLMs), with an emphasis on enhancing language understanding across diverse linguistic landscapes. She holds a Ph.D. from York University and is currently an Assistant Professor of Computer Science at Portland State University, where she leads the PortNLP Lab. Passionate about language diversity, Ameeta speaks multiple languages, which inspires her commitment to developing more inclusive AI technologies. She has contributed to several NLP conferences, including ACL, EMNLP, and NAACL, working toward more effective and equitable multilingual language models.