Best Paper Awards
The Best Paper Awards recognise outstanding papers presented at EACL 2021. They were selected as follows:
Based on the paper reviews and the opinions of the area chairs and senior area chairs, we constructed a shortlist of strong papers. The shortlist covered diverse topics (e.g., tasks, models, resources, and evaluation methodologies), spanned most of the conference tracks, and included both short and long papers.
The shortlisted papers were sent, in anonymised form, to the Best Paper Award (BPA) Committee for further review. Based on the BPA Committee's judgements, the top papers received best paper awards and honourable mentions. Overall, we awarded two best papers and three honourable mentions in each of the long paper and short paper categories.
The following papers have been given awards.
Best Long Paper Awards
- Error Analysis and the Role of Morphology
Marcel Bollmann and Anders Søgaard
- Is Supervised Syntactic Parsing Beneficial for Language Understanding Tasks? An Empirical Investigation
Goran Glavaš and Ivan Vulić
Honourable Mention Papers
- Cognition-aware Cognate Detection
Diptesh Kanojia, Prashant Sharma, Sayali Ghodekar, Pushpak Bhattacharyya, Gholamreza Haffari and Malhar Kulkarni
- Hidden Biases in Unreliable News Detection Datasets
Xiang Zhou, Heba Elfardy, Christos Christodoulopoulos, Thomas Butler and Mohit Bansal
- A phonetic model of non-native spoken word processing
Yevgen Matusevych, Herman Kamper, Thomas Schatz, Naomi Feldman and Sharon Goldwater
Best Short Paper Awards
- Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets
Patrick Lewis, Pontus Stenetorp and Sebastian Riedel
- Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
Nora Kassner, Philipp Dufter and Hinrich Schütze
Honourable Mention Papers
- We Need To Talk About Random Splits
Anders Søgaard, Sebastian Ebert, Jasmijn Bastings and Katja Filippova
- ProFormer: Towards On-Device LSH Projection Based Transformers
Chinnadhurai Sankar, Sujith Ravi and Zornitsa Kozareva
- Applying the Transformer to Character-level Transduction
Shijie Wu, Ryan Cotterell and Mans Hulden