DMI Webinar "Controllable Natural Language Generation for Fairness and Creativity"
"Controllable Natural Language Generation for Fairness and Creativity"
08 September 2021, 18:00-19:00 CEST
ABSTRACT:
Recent advances in large pre-trained language models have demonstrated strong results in generating natural language and significantly improved performance for many natural language generation (NLG) applications.
However, when the generation tasks are open-ended and the content is under-specified, existing techniques struggle to generalize to novel scenarios and to generate long-form coherent and creative content. Moreover, the models exhibit societal biases learned from the pre-training corpora. This happens because generation models are trained to capture surface patterns (i.e., sequences of words) in left-to-right order, rather than the underlying semantics and discourse structures. In this talk, I will present our recent work on controllable text generation to enhance the fairness and creativity of generation models. We explore hierarchical generation and constrained decoding, with applications to creative story generation and debiasing dialog responses.
BIO:
Nanyun (Violet) Peng is an Assistant Professor of Computer Science at the University of California, Los Angeles. Prior to that, she spent three years at the University of Southern California's Information Sciences Institute as an Assistant Research Professor. She received her Ph.D. in Computer Science from the Center for Language and Speech Processing at Johns Hopkins University. Her research focuses on the fairness, robustness, and generalizability of NLP models, with applications to natural language generation and low-resource information extraction.
The talk will be held online. For more information, write to dmi@unibocconi.it.