Yihong will be presenting ReFactorGNNs at the ELLIS PhD Symposium 2022. Come to our poster if you are curious about why factorisation-based models are special message-passing GNNs!
Our paper ReFactorGNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective has been accepted by NeurIPS 2022! Congrats Yihong, Pushkar, Luca, Pasquale, Pontus and Sebastian!
Our work Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity has been selected as an outstanding paper at ACL 2022!
The call for participation for the Shared Task at the DADC Workshop co-located with NAACL ‘22 in Seattle is now live! We have three fantastic tracks for you to participate in. Sign up here!
Additional resources from our work on Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation at EMNLP 2021 are now available! We are releasing a collection of synthetically-generated adversarial QA pairs and related resources as well as the models used to generate the questions.
Our AAAI 2022 tutorial, On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices, was an outstanding success, with more than 600 attendees – check it out! Congratulations Pasquale and collaborators!
AdversarialQA is currently the 3rd most downloaded QA dataset on Huggingface 🤗 Datasets, right after the benchmark datasets SQuAD v1.1 and SQuAD v2.0!
Our proposal for the First Workshop on Dynamic Adversarial Data Collection has been accepted! See you at NAACL ‘22 in Seattle!
Pasquale is joining the School of Informatics at the University of Edinburgh as a faculty member, and is currently recruiting PhD students! If you would like to work with an ICLR 2021 Outstanding Paper Award winner in an amazing NLP department, make sure to get in touch with him!
Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations is now SoTA on two OGB link property prediction datasets, ogbl-biokg and ogbl-wikikg2. Check out how to reproduce these results here!
Our paper Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations will be presented at AKBC 2021! We propose to incorporate relation prediction into the 1vsAll objective for training better knowledge graph embeddings. Check out our code here.
Maximilian presented his research on adversarial examples in NLP at the UCL AI Centre seminar series. The recording is available here.
Our paper Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions will be presented at NeurIPS 2021! We propose a way to back-propagate through algorithms – check out our video, as well as Yannic Kilcher’s video on our paper! Congrats Pasquale and collaborators!
Our paper, Question and Answer Test-Train Overlap in Open-Domain Question Answering, has won a Best Paper award at EACL 2021! Congrats to authors Patrick, Pontus and Sebastian.
Our paper Complex Query Answering with Neural Link Predictors, based on Erik’s MSc thesis supervised by Pasquale, won an Outstanding Paper Award at ICLR 2021!
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them has been released on ArXiv! QA models that are twice as fast and more accurate, built using a huge collection of 65M automatically-generated QA pairs. This leads to a flexible QA system that can be optimised for memory, speed or accuracy. Check out the PAQ data here; code and models coming soon!
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval has been accepted at ICLR 2021. This simple recursive retrieval approach achieves state-of-the-art results without requiring additional resources such as hyperlink networks.
Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets has been accepted at EACL 2021. If you’re doing open-domain QA, evaluate your model on our test sets to see whether it generalises or is just memorising the training set. Get the data here!
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models has been accepted at EACL 2021. We propose two intuitive metrics, skew and stereotype, that quantify and analyse the gender bias present in contextual language models when tackling the WinoBias pronoun resolution task.
Complex Query Answering with Neural Link Predictors, our state-of-the-art approach for answering complex queries on large and incomplete Knowledge Graphs, will appear at ICLR 2021 as an Oral – top 2% of all publications!
Don’t Read Too Much into It: Adaptive Computation for Open-Domain Question Answering will appear at EMNLP 2020. We propose an adaptive computation method that significantly reduces the computational cost of ODQA systems while retaining similar performance.
Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension has been accepted at TACL 2020, and we’ll be presenting it at EMNLP 2020. We’ve also publicly released the dataset and you can try to beat our best model and submit to our online leaderboard through Dynabench.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks has been accepted at NeurIPS 2020. We’ve also published a blog post and released the code as part of the HuggingFace ecosystem. Check out a demo of RAG here!
KILT: a Benchmark for Knowledge Intensive Language Tasks is now available on ArXiv! KILT is a set of tools and data to accelerate research progress on open-domain and knowledge-intensive NLP, including open-domain QA, fact checking, relation extraction and entity linking. KILT will make your work easier, more comparable and reproducible, and allow researchers to share components more easily.
Check out the code, leaderboard, and HuggingFace integrations.
Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets is now on ArXiv! Do you use NaturalQuestions, TriviaQA, or WebQuestions? It turns out 60% of test set answers are also in the train set. More surprisingly, 30% of test questions have a close paraphrase in the train set. We look at what this means for models. Annotations and code are available here.
R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason was presented at ACL 2020. It previously won the “Best Linguistic Resource” award at the 26th annual meeting of the Japanese Association for Natural Language Processing.
“How Context Affects Language Models’ Factual Predictions” has been awarded Best Paper at AKBC 2020!
Pasquale’s paper Learning Reasoning Strategies in End-to-End Differentiable Proving will appear at ICML 2020! We propose a neuro-symbolic reasoning model that learns, via gradient-based optimisation, to dynamically select and generate rules conditioned on the goal during its reasoning process.
UCL NLP members authored two chapters in Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges, published by IOS Press!
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks is now up on ArXiv. State-of-the-art open domain Question Answering, and more factual and specific generation, by combining the power of pre-trained dense retrieval and seq2seq models.
Our paper on how to improve unsupervised factual predictions using retrieved context, “How Context Affects Language Models’ Factual Predictions”, has been accepted at AKBC 2020!
Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations will be presented at ACL 2020! In this collaboration with Oxford and DeepMind researchers, we show that you can invert NLP models and make them produce mutually inconsistent explanations.
MLQA: Evaluating Cross-lingual Extractive Question Answering has been accepted at ACL 2020! Let’s make sure question answering works in all languages. Get started here!
Towards machine-assisted meta-studies: the Hubble constant appeared in the Monthly Notices of the Royal Astronomical Society.
Our new work in collaboration with NYU and FAIR on Unsupervised Question Decomposition for multi-hop QA is now up on ArXiv. Break down complex questions into a list of simple questions without any labelled data. Strong, robust results on HotpotQA, without needing any supporting fact annotations.
Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets was presented at AAAI 2020.
Differentiable Reasoning on Large Knowledge Bases and Natural Language will be presented at AAAI 2020 as an oral (4.5% acceptance rate)! We show how neuro-symbolic reasoning can be scaled to very large knowledge bases and text corpora.
We’re releasing a new cross-lingual Question Answering dataset: MLQA: Evaluating Cross-lingual Extractive Question Answering. Check out the dataset here!
Language Models as Knowledge Bases? accepted at EMNLP 2019! Code for the paper’s experiments and the LAMA probe is available here.
NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language accepted at ACL 2019! Code and synthetic training data is available here
Unsupervised Question Answering by Cloze Translation accepted at ACL 2019! Code and synthetic training data will be available here
Interpretation of Natural Language Rules in Conversational Machine Reading accepted at EMNLP 2018 - the CodaLab challenge is now LIVE at sharc-data.github.io!
Wronging a Right: Generating Better Errors to Improve Grammatical Error Detection accepted at EMNLP 2018 - code available at this link!
Cape achieves a new SoTA on the TriviaQA Wiki dataset CodaLab leaderboard. More details can be found here. Cape is super easy to use, extend and integrate into all kinds of software!
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge accepted at CoNLL 2018 - code available at this link!
We are proud of our 2nd place in the EMNLP 2018 FEVER shared task (fever.ai) thanks to the amazing work of @takuma_ynd!
We will be giving a tutorial on Machine Reading at UAI 2018!
Jack the Reader – A Machine Reading Framework accepted at ACL, System Demonstrations track!
Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers accepted at ACL!
Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness accepted at HLT-NAACL!
Convolutional 2D Knowledge Graph Embeddings accepted at AAAI!
The 6th Workshop on Automated Knowledge Base Construction (AKBC 2017) returns to NIPS: submit your papers by October 21st!
End-to-end Differentiable Proving accepted at NIPS!
Adversarial Sets for Regularising Neural Link Predictors accepted at UAI!
Programming with a Differentiable Forth Interpreter accepted at ICML!
A Supervised Approach to Extractive Summarisation of Scientific Papers, a paper based on Ed Collins’ MEng thesis, accepted at CoNLL!
Frustratingly Short Attention Spans in Neural Language Modeling, a paper based on Michal Daniluk’s MSc Machine Learning project, accepted to ICLR! Michal also received the MSc Machine Learning Programme Director’s Award (2015-2016) for Outstanding Project Report (Second Place)
SemEval 2017 Science task description paper preprint now available online.
Tim Rocktäschel is awarded a Google Ph.D. Fellowship in Natural Language Processing
Multi-Task Learning of Keyphrase Boundary Classification accepted at ACL!
Neural Architectures for Fine-grained Entity Type Classification wins outstanding paper award at EACL!
Two papers by our group accepted at EACL!
We co-organised a Poetry AI workshop at UCL, at which humans acted as a neural network to generate poetry. Slides of the event are available here.
Wired article discussing our project on machine reading of scientific publications and how it could aid peer review.
SemEval 2017 Science results are announced. Congratulations to the winning teams, s2_end2end, MayoNLP and MIT!
We are co-organising the first workshop for women and underrepresented minorities in NLP (WiNLP) at ACL 2017. Consider participating!
We are co-organizing the SemEval 2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications (ScienceIE). Consider participating!
emoji2vec: Learning Emoji Representations from their Description, a paper based on Ben Eisner’s UCLMR internship, won the best paper award at SocialNLP 2016!
Defining Words with Words: Beyond the Distributional Hypothesis was awarded the best proposal award at RepEval!
We are co-organizing a workshop on Neural Abstract Machines & Program Induction (NAMPI) at NIPS 2016. Consider submitting a paper!