
Main page

This year's BlackboxNLP workshop has a hybrid programme. The first half of the programme is entirely virtual; the second half is hosted both on-site and online. All plenary sessions will be held in Zoom (either livestreamed or broadcast) and followed by a live Q&A, unless the schedule indicates otherwise. Questions can be asked in the Zoom chat during the presentations. Poster sessions will be held on Gather.town. The links to the Zoom sessions and the Gather.town space can be found on the Underline page of the workshop (accessible only with conference registration). In Punta Cana, we'll be in room Bavaro 2.

Summary of the programme

San Francisco (UTC-8) | Punta Cana (UTC-4) | London (UTC) | Beijing (UTC+8) | Session
--- | --- | --- | --- | ---
22:00 - 22:15 | 02:00 - 02:15 | 06:00 - 06:15 | 14:00 - 14:15 | Opening remarks. Zoom.
22:15 - 23:00 | 02:15 - 03:00 | 06:15 - 07:00 | 14:15 - 15:00 | Keynote 1 (w/ Q&A): Willem Zuidema. Zoom.
23:00 - 23:15 | 03:00 - 03:15 | 07:00 - 07:15 | 15:00 - 15:15 | Break.
23:15 - 00:00 | 03:15 - 04:00 | 07:15 - 08:00 | 15:15 - 16:00 | Oral presentation session 1 (w/ Q&A). Zoom.
00:00 - 00:30 | 04:00 - 04:30 | 08:00 - 08:30 | 16:00 - 16:30 | Break.
00:30 - 02:00 | 04:30 - 06:00 | 08:30 - 10:00 | 16:30 - 18:00 | Poster session 1. Gather.town.
02:00 - 02:15 | 06:00 - 06:15 | 10:00 - 10:15 | 18:00 - 18:15 | Break.
02:15 - 03:00 | 06:15 - 07:00 | 10:15 - 11:00 | 18:15 - 19:00 | Oral presentation session 2 (w/ Q&A). Zoom.
03:00 - 03:30 | 07:00 - 07:30 | 11:00 - 11:30 | 19:00 - 19:30 | Break.
03:30 - 04:00 | 07:30 - 08:00 | 11:30 - 12:00 | 19:30 - 20:00 | Keynote 2 (w/o Q&A): Ana Marasović. Zoom.
04:00 - 04:30 | 08:00 - 08:30 | 12:00 - 12:30 | 20:00 - 20:30 | Keynote 3 (w/o Q&A): Sara Hooker. Zoom.
04:30 - 04:45 | 08:30 - 08:45 | 12:30 - 12:45 | 20:30 - 20:45 | Closing of the virtual programme. Zoom.
 | | | | Break. The hybrid programme starts (on-site sessions in room Bavaro 2).
05:00 - 05:15 | 09:00 - 09:15 | 13:00 - 13:15 | 21:00 - 21:15 | Opening remarks and best paper award. On-site & Zoom.
05:15 - 06:00 | 09:15 - 10:00 | 13:15 - 14:00 | 21:15 - 22:00 | Keynote 4 (w/ Q&A): Willem Zuidema. On-site & Zoom.
06:00 - 06:30 | 10:00 - 10:30 | 14:00 - 14:30 | 22:00 - 22:30 | Oral presentation session 3 (w/ Q&A). On-site & Zoom.
06:30 - 07:00 | 10:30 - 11:00 | 14:30 - 15:00 | 22:30 - 23:00 | Coffee break.
07:00 - 08:00 | 11:00 - 12:00 | 15:00 - 16:00 | 23:00 - 00:00 | Poster session 2. Gather.town.
08:00 - 09:00 | 12:00 - 13:00 | 16:00 - 17:00 | 00:00 - 01:00 | Lunch break.
09:00 - 09:45 | 13:00 - 13:45 | 17:00 - 17:45 | 01:00 - 01:45 | Keynote 5 (w/ Q&A): Sara Hooker. On-site & Zoom.
09:45 - 10:15 | 13:45 - 14:15 | 17:45 - 18:15 | 01:45 - 02:15 | Oral presentation session 4 (w/ Q&A). On-site & Zoom.
10:15 - 10:45 | 14:15 - 14:45 | 18:15 - 18:45 | 02:15 - 02:45 | Coffee break.
10:45 - 12:15 | 14:45 - 16:15 | 18:45 - 20:15 | 02:45 - 04:15 | Poster session 3. On-site & Gather.town.
12:15 - 12:45 | 16:15 - 16:45 | 20:15 - 20:45 | 04:15 - 04:45 | Coffee break.
12:45 - 13:15 | 16:45 - 17:15 | 20:45 - 21:15 | 04:45 - 05:15 | Oral presentation session 5 (w/ Q&A). On-site & Zoom.
13:15 - 14:00 | 17:15 - 18:00 | 21:15 - 22:00 | 05:15 - 06:00 | Keynote 6 (w/ Q&A): Ana Marasović. On-site & Zoom.
14:00 - 14:15 | 18:00 - 18:15 | 22:00 - 22:15 | 06:00 - 06:15 | Closing remarks. On-site & Zoom.

Oral presentation session 1 (all times in Punta Cana time, UTC-4)

  • 3:15 - 3:30 To what extent do human explanations of model behavior align with actual model behavior? Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela and Adina Williams.
  • 3:30 - 3:45 (Live, w/ Q&A) Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings. Hendrik Schuff, Hsiu-Yu Yang, Heike Adel and Ngoc Thang Vu.
  • 3:45 - 4:00 (Live, w/ Q&A) The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. Laura Aina and Tal Linzen.

Oral presentation session 2 (all times in Punta Cana time, UTC-4)

  • 6:15 - 6:30 (w/ Q&A) On the Limits of Minimal Pairs in Contrastive Evaluation. Jannis Vamvas and Rico Sennrich.
  • 6:30 - 6:45 (Live, w/ Q&A) Test Harder than You Train: Probing with Extrapolation Splits. Jenny Kunz and Marco Kuhlmann.
  • 6:45 - 7:00 What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations. Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd and Sameer Singh.

Oral presentation session 3, Zoom + Room Bavaro 2 (all times in Punta Cana time, UTC-4)

  • 10:00 - 10:15 (w/ Q&A) On the Limits of Minimal Pairs in Contrastive Evaluation. Jannis Vamvas and Rico Sennrich.
  • 10:15 - 10:30 (Live, w/ Q&A) Test Harder than You Train: Probing with Extrapolation Splits. Jenny Kunz and Marco Kuhlmann.

Oral presentation session 4, Zoom + Room Bavaro 2 (all times in Punta Cana time, UTC-4)

  • 13:45 - 14:00 What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations. Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd and Sameer Singh.
  • 14:00 - 14:15 (Live, w/ Q&A) To what extent do human explanations of model behavior align with actual model behavior? Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela and Adina Williams.

Oral presentation session 5, Zoom + Room Bavaro 2 (all times in Punta Cana time, UTC-4)

  • 16:45 - 17:00 (Live, w/ Q&A) Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings. Hendrik Schuff, Hsiu-Yu Yang, Heike Adel and Ngoc Thang Vu.
  • 17:00 - 17:15 (Live, w/ Q&A) The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. Laura Aina and Tal Linzen.

Poster session 1 (Gather.town)

Archival papers

  • Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN. Rahma Chaabouni, Roberto Dessì and Eugene Kharitonov.
  • A howling success or a working sea? Testing what BERT knows about metaphors. Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini and Alessandro Lenci.
  • Variation and generality in encoding of syntactic anomaly information in sentence embeddings. Qinxuan Wu and Allyson Ettinger.
  • Screening Gender Transfer in Neural Machine Translation. Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier and François Yvon.
  • What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study. Ayush Kumar, Mukuntha Narayanan Sundararaman and Jithendra Vepa.
  • Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. Hitomi Yanaka and Koji Mineshima.
  • Investigating Negation in Pre-trained Vision-and-language Models. Radina Dobreva and Frank Keller.
  • How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings. Badr M. Abdullah, Iuliia Zaitova, Tania Avgustinova, Bernd Möbius and Dietrich Klakow.
  • Exploratory Model Analysis Using Data-Driven Neuron Representations. Daisuke Oba, Naoki Yoshinaga and Masashi Toyoda.
  • ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns’ Semantic Properties and their Prototypicality. Marianna Apidianaki and Aina Garí Soler.
  • Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently. Lis Kanashiro Pereira, Yuki Taya and Ichiro Kobayashi.
  • On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning. Marc Tanti, Lonneke van der Plas, Claudia Borg and Albert Gatt.
  • Language Models Use Monotonicity to Assess NPI Licensing. Jaap Jumelet, Milica Denic, Jakub Szymanik, Dieuwke Hupkes and Shane Steinert-Threlkeld.
  • Testing the linguistics of transformer generalizations. Saliha Muradoglu and Mans Hulden.
  • Do Language Models know the Way to Rome? Bastien Liétard, Mostafa Abdou and Anders Søgaard.

Extended abstracts

  • BPE affects Training Data Memorization by Transformer Language Models. Eugene Kharitonov, Marco Baroni and Dieuwke Hupkes.
  • Transformers Scan both Left and Right -- When they Have a Cue. Jan H. Athmer and Denis Paperno.
  • Probing structures in the visual region embeddings from multimodal BERT. Victor Milewski, Miryam de Lhoneux and Marie-Francine Moens.
  • Explaining Classes through Word Attributions. Samuel Rönnqvist, Amanda Myntti, Aki-Juhani Kyröläinen, Sampo Pyysalo, Veronika Laippala and Filip Ginter.

Findings papers

  • Distilling Word Meaning in Context from Pre-trained Language Models. Yuki Arase and Tomoyuki Kajiwara.
  • Probing Pre-trained Language Models for Semantic Attributes and their Values. Meriem Beloucif.

Poster session 2 (Gather.town)

Archival papers

  • ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective. Tessa Masis and Carolyn Anderson.
  • Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN. Rahma Chaabouni, Roberto Dessì and Eugene Kharitonov.
  • A howling success or a working sea? Testing what BERT knows about metaphors. Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini and Alessandro Lenci.
  • Efficient Explanations from Empirical Explainers. Robert Schwarzenberg, Nils Feldhus and Sebastian Möller.
  • Variation and generality in encoding of syntactic anomaly information in sentence embeddings. Qinxuan Wu and Allyson Ettinger.
  • Screening Gender Transfer in Neural Machine Translation. Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier and François Yvon.
  • Not all parameters are born equal: Attention is mostly what you need. Nikolay Bogoychev.
  • An Investigation of Language Model Interpretability via Sentence Editing. Samuel Stevens and Yu Su.
  • Relating Neural Text Degeneration to Exposure Bias. Ting-Rui Chiang and Yun-Nung Chen.
  • What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study. Ayush Kumar, Mukuntha Narayanan Sundararaman and Jithendra Vepa.
  • Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. Hitomi Yanaka and Koji Mineshima.
  • How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings. Badr M. Abdullah, Iuliia Zaitova, Tania Avgustinova, Bernd Möbius and Dietrich Klakow.
  • The Acceptability Delta Criterion: Testing Knowledge of Language using the Gradience of Sentence Acceptability. Héctor Vázquez Martínez.
  • Analyzing BERT's Knowledge of Hypernymy via Prompting. Michael Hanna and David Mareček.
  • ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns’ Semantic Properties and their Prototypicality. Marianna Apidianaki and Aina Garí Soler.
  • Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding. Adly Templeton.
  • Learning Mathematical Properties of Integers. Maria Ryskina and Kevin Knight.
  • Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji and Yanjun Qi.
  • Interacting Knowledge Sources, Inspection and Analysis: Case-studies on Biomedical text processing. Parsa Bagherzadeh and Sabine Bergler.
  • Attacks against Ranking Algorithms with Text Embeddings: A Case Study on Recruitment Algorithms. Anahita Samadi, Debapriya Banerjee and Shirin Nilizadeh.
  • On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning. Marc Tanti, Lonneke van der Plas, Claudia Borg and Albert Gatt.
  • Training Dynamic based data filtering may not work for NLP datasets. Arka Talukdar, Monika Dagar, Prachi Gupta and Varun Menon.
  • Controlled tasks for model analysis: Retrieving discrete information from sequences. Ionut-Teodor Sorodoc, Gemma Boleda and Marco Baroni.
  • BERT Has Uncommon Sense: Similarity Ranking for Word Sense BERTology. Luke Gessler and Nathan Schneider.
  • How Length Prediction Influence the Performance of Non-Autoregressive Translation? Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Hengchao Shang, Min Zhang, Shimin Tao and Hao Yang.
  • Language Models Use Monotonicity to Assess NPI Licensing. Jaap Jumelet, Milica Denic, Jakub Szymanik, Dieuwke Hupkes and Shane Steinert-Threlkeld.
  • Do contextual language embeddings distinguish between intersective and strictly subsective adjectives? Michael Goodale and Salvador Mascarenhas.
  • Explaining NLP Models via Minimal Contrastive Editing (MiCE). Alexis Ross, Ana Marasović and Matthew Peters.
  • An in-depth look at Euclidean disk embeddings for structure preserving parsing. Federico Fancellu, Lan Xiao, Allan Jepson and Afsaneh Fazly.

Extended abstracts

  • On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation. Gal Patel, Leshem Choshen and Omri Abend.
  • BPE affects Training Data Memorization by Transformer Language Models. Eugene Kharitonov, Marco Baroni and Dieuwke Hupkes.
  • Transformers Scan both Left and Right -- When they Have a Cue. Jan H. Athmer and Denis Paperno.
  • Probing structures in the visual region embeddings from multimodal BERT. Victor Milewski, Miryam de Lhoneux and Marie-Francine Moens.
  • Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords. Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend and Vivek Srikumar.
  • Human Evaluation Study for Explaining Knowledge Graph Completion. Timo Sztyler and Carolin Lawrence.

Findings papers

  • Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques.
  • Distilling Word Meaning in Context from Pre-trained Language Models. Yuki Arase and Tomoyuki Kajiwara.
  • Probing Pre-trained Language Models for Semantic Attributes and their Values. Meriem Beloucif.
  • Making Heads and Tails of Models with Marginal Calibration for Sparse Tagsets. Michael Kranzlein, Nelson F. Liu and Nathan Schneider.

Poster session 3 (Gather.town)

Archival papers

  • ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective. Tessa Masis and Carolyn Anderson.
  • Enhancing Interpretable Clauses Semantically using Pretrained Word Representation. Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo and Morten Goodwin.
  • Not all parameters are born equal: Attention is mostly what you need. Nikolay Bogoychev.
  • Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids’ Representations. Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi and Mohammad Taher Pilehvar.
  • Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? Tobias Norlund, Lovisa Hagström and Richard Johansson.
  • Relating Neural Text Degeneration to Exposure Bias. Ting-Rui Chiang and Yun-Nung Chen.
  • How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings. Badr M. Abdullah, Iuliia Zaitova, Tania Avgustinova, Bernd Möbius and Dietrich Klakow.
  • The Acceptability Delta Criterion: Testing Knowledge of Language using the Gradience of Sentence Acceptability. Héctor Vázquez Martínez.
  • Exploratory Model Analysis Using Data-Driven Neuron Representations. Daisuke Oba, Naoki Yoshinaga and Masashi Toyoda.
  • Learning Mathematical Properties of Integers. Maria Ryskina and Kevin Knight.
  • Probing Language Models for Understanding of Temporal Expressions. Shivin Thukral, Kunal Kukreja and Christian Kavouras.
  • Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji and Yanjun Qi.
  • How Does BERT Rerank Passages? An Attribution Analysis with Information Bottlenecks. Zhiying Jiang, Raphael Tang, Ji Xin and Jimmy Lin.
  • Training Dynamic based data filtering may not work for NLP datasets. Arka Talukdar, Monika Dagar, Prachi Gupta and Varun Menon.
  • Controlled tasks for model analysis: Retrieving discrete information from sequences. Ionut-Teodor Sorodoc, Gemma Boleda and Marco Baroni.
  • Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers. Jason Phang, Haokun Liu and Samuel R. Bowman.
  • An in-depth look at Euclidean disk embeddings for structure preserving parsing. Federico Fancellu, Lan Xiao, Allan Jepson and Afsaneh Fazly.

Extended abstracts

  • Generalization in neural sequence models: a case study in symbolic mathematics. Sean Welleck, Peter West, Jize Cao and Yejin Choi.

Findings papers

  • Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates. Xiaochuang Han and Yulia Tsvetkov.
  • Probing Across Time: What Does RoBERTa Know and When? Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi and Noah A. Smith.
  • Generating Realistic Natural Language Counterfactuals. Marcel Robeer.

Poster session 3 (On-site, Room Bavaro 2)

Archival papers

  • Efficient Explanations from Empirical Explainers. Robert Schwarzenberg, Nils Feldhus and Sebastian Möller.
  • Enhancing Interpretable Clauses Semantically using Pretrained Word Representation. Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo and Morten Goodwin.
  • Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? Tobias Norlund, Lovisa Hagström and Richard Johansson.
  • Analyzing BERT's Knowledge of Hypernymy via Prompting. Michael Hanna and David Mareček.
  • Explaining NLP Models via Minimal Contrastive Editing (MiCE). Alexis Ross, Ana Marasović and Matthew Peters.

Extended abstracts

  • On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation. Gal Patel, Leshem Choshen and Omri Abend.

Findings papers

  • Generating Realistic Natural Language Counterfactuals. Marcel Robeer.