Keynote Speakers

The following speakers have graciously agreed to give keynote talks at EACL 2021.

Melanie Mitchell

Title: Why AI is Harder Than We Think

Abstract: Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.

Speaker Bio: Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).

Fernanda Ferreira

Title: Putting things in order: Linearization decisions in language production

Abstract: Speakers must decide how to convert unordered thoughts and ideas into a structured sequence of linguistic forms that communicates their intended message; that is, they must make a series of linearization decisions. One approach to this decision-making challenge is for speakers to begin with information that is easy to access and encode, allowing them to retrieve more difficult material during articulation and minimizing the need for pauses and other disfluencies. On this view, which is sometimes referred to as the Easy-First strategy, ordering decisions emerge as a byproduct of speakers’ attempts to accommodate the early placement of a linguistic expression. This incremental strategy is also thought to characterize multi-utterance production, which implies that the initial utterance of a discourse will reflect easily accessed or primed content. Using scene description tasks, we have developed a competing theory which assumes that speakers instead build a detailed macro-plan for an upcoming sequence of utterances that reflects the semantics of the scene. Our research shows that the order in which objects in a scene are described correlates with a specific aspect of object meaning, namely what we term “interactability”: the extent to which a human would be likely to interact with the object. We conclude that linearization decisions in language production are primarily driven not by an Easy-First strategy but instead emerge from a hierarchical plan that is based on a semantic representation of object affordances.

Speaker Bio: Fernanda Ferreira, PhD, is Professor of Psychology and Member of the Graduate Program in Linguistics at the University of California, Davis. Her research is focused on uncovering the mechanisms that enable humans to understand and generate language in real time and in cooperation with other cognitive systems. In 1995 she received the American Psychological Association’s Distinguished Scientific Award for Early Career Contribution to Psychology (Human Learning and Cognition), and she is a Fellow of the American Psychological Society, the Cognitive Science Society, and the Royal Society of Edinburgh. In 2015, Dr. Ferreira was elected to the Governing Board of the Psychonomic Society and currently is the Chair of its Fellows Committee. She served as Associate Editor of the journal Cognitive Psychology from 2013 to 2020, and from 2006 to 2010 she was Editor-in-Chief of the Journal of Experimental Psychology: General. She has been a member of the Linguistics Panel of the National Science Foundation and is currently a standing member of the National Institutes of Health Study Section on Language and Communication.

Marco Baroni

Title: On the gap between computational and theoretical linguistics

Abstract: Deep nets trained on large amounts of unannotated text develop impressive linguistic skills. For years now, linguistically inclined computational linguists have systematically studied the behaviour of these models through a variety of grammatical tasks, in search of new insights into the nature of language. However, this line of work has had virtually no impact on theoretical linguistics. In my talk, after reviewing some of the most exciting work in the area, I would like to offer some conjectures about why theoretical linguists do not care, and suggest a few possible avenues for a more fruitful convergence between the fields.

Speaker Bio: Marco Baroni received a PhD in Linguistics from the University of California, Los Angeles, in the year 2000. After several positions in research and industry, he joined the Center for Mind/Brain Sciences of the University of Trento, where he became associate professor in 2013. In 2016, Marco joined the Facebook Artificial Intelligence Research team in Paris. In 2019, he became ICREA research professor, affiliated with the Linguistics Department of Pompeu Fabra University in Barcelona. Marco’s work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Starting Grant, the IJCAI-JAIR best paper prize and the ACL test-of-time award. Marco’s current research focuses on how to improve communication between artificial neural networks, taking inspiration from human language and other animal communication systems.