Yihong will be presenting ReFactorGNNs at the ELLIS PhD Symposium 2022. Come to our poster if you are curious about why factorisation-based models are special message-passing GNNs!
Our paper ReFactorGNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective has been accepted at NeurIPS 2022! Congrats Yihong, Pushkar, Luca, Pasquale, Pontus and Sebastian!
Our work Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity has been selected as an outstanding paper at ACL 2022!
The call for participation for the Shared Task at the DADC Workshop co-located with NAACL ’22 in Seattle is now live! We have three fantastic tracks for you to participate in. Sign up here!
Additional resources from our work on Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation at EMNLP 2021 are now available! We are releasing a collection of synthetically-generated adversarial QA pairs and related resources as well as the models used to generate the questions.
Our AAAI 2022 tutorial, On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices, was an outstanding success, with more than 600 attendees – check it out! Congratulations Pasquale and collaborators!
AdversarialQA is currently the 3rd most downloaded QA dataset on Hugging Face 🤗 Datasets, right after the benchmarks SQuAD v1.1 and SQuAD v2.0!
Affiliated Faculty (Associate Professor)
Senior Research Fellow, Principal Investigator for H2020 CLARIFY
Now a Research Scientist at DeepMind
Alastair’s interests lie in natural language processing & machine learning.
Now a Research Scientist at DeepMind
Now a PhD student at MIT
Now a research associate at the University of Sheffield
Now back to being a PhD student at the University of Tokyo.
Now a master's student at Tohoku University
Now back to being a PhD student at the Chinese Academy of Sciences.
Now a senior lecturer at the University of Cambridge
Now CEO at CheckStep
Now a postdoc at Ghent University
Now a research scientist at Preferred Networks (PFN)
Now back to being a PhD student at Xerox Research Centre Europe
Now a Research Scientist at Facebook
Now an associate professor at the University of Copenhagen
Now an assistant professor at Tohoku University
Now a PhD student at the University of Washington
V. Ivan Sanchez
Now an NLP researcher at Lenovo
Now back to being a PhD student at MIT
Now a student at Toyota Technological Institute at Chicago
Now an ML engineer at PolyAI
A synthetic dataset of 315k QA pairs on passages from SQuAD designed to help make QA models more robust to human adversaries. This resource is also available in HuggingFace datasets at https://huggingface.co/datasets/mbartolo/synQA.
AdversarialQA (from Beat the AI)
A dataset of 36k challenging extractive QA pairs, comprising training, validation and test splits collected using three different models-in-the-loop: BiDAF, BERT and RoBERTa.
KILT: a Benchmark for Knowledge Intensive Language Tasks
A resource for training, evaluating and analyzing NLP models on Knowledge Intensive Language Tasks. KILT has been built from 11 datasets representing 5 tasks.
MLQA is a multi-way aligned extractive QA evaluation benchmark containing QA instances in 7 languages: English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese.
ShARC: Shaping Answers with Rules through Conversation
A collection of 32k task instances based on real-world rules and crowd-generated questions and scenarios requiring both the interpretation of rules and the application of background knowledge.
WikiHop & MedHop (QAngaroo)
Multi-hop question answering datasets from two different domains, designed to require models to combine disjoint pieces of textual evidence.