"Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations" will be presented at ACL 2020! In this collaboration with researchers from Oxford and DeepMind, we show that NLP models that generate natural language explanations can be inverted to make them produce mutually inconsistent explanations.