AI Seminar: Large Language Models & Symbolic Structures

Event Speaker
Yoon Kim
Event Speaker Description
Assistant Professor
Department of Electrical Engineering and Computer Science
Event Type
Artificial Intelligence
Event Location
KEC 1001 and Zoom
Event Description
Over the past decade, the field of NLP has shifted from a pipelined approach, in which intermediate symbolic structures such as parse trees are explicitly predicted and used for downstream tasks, to an end-to-end approach in which pretrained large language models (LLMs) are adapted to various downstream tasks via finetuning or prompting. What role (if any) can symbolic structures play in the era of LLMs? In the first part of the talk, we will see how latent symbolic structures in the form of hierarchical alignments can be used to guide LM-based neural machine translation systems, improving translation for low-resource languages and even enabling the use of new translation rules during inference. In the second part, we will see how expert-derived grammars can be used to control LLMs via prompting for tasks such as semantic parsing, where the output structure must obey strict domain-specific constraints.

Speaker Biography

Yoon Kim is an assistant professor at MIT EECS. He received his PhD from Harvard University, where he was advised by Alexander Rush.