Discourse-Aware Neural Rewards for Coherent Text Generation


In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model in generating long, coherent text. We learn neural rewards that model cross-sentence ordering as a means of approximating discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning using commonly used scores as rewards.
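The core idea of training a generator against a learned coherence reward can be illustrated with a toy policy-gradient loop. The sketch below is a minimal REINFORCE example, not the paper's actual model: the "policy" is just a softmax over a few candidate sentence orderings, and `coherence_reward` is a hypothetical stand-in for a learned reward network, scoring how many adjacent sentence pairs appear in their original order.

```python
import math
import random

def coherence_reward(order):
    """Stand-in for a learned discourse reward: fraction of adjacent
    sentence pairs that appear in their original (coherent) order."""
    pairs = list(zip(order, order[1:]))
    return sum(1.0 for a, b in pairs if b == a + 1) / max(len(pairs), 1)

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def train(orderings, steps=2000, lr=0.5, seed=0):
    """REINFORCE over a fixed candidate set: sample an ordering from a
    softmax policy, then push its logit toward higher-reward samples."""
    rng = random.Random(seed)
    theta = [0.0] * len(orderings)  # one logit per candidate ordering
    baseline = 0.0                  # moving-average baseline to reduce variance
    for _ in range(steps):
        probs = softmax(theta)
        i = rng.choices(range(len(orderings)), weights=probs)[0]
        r = coherence_reward(orderings[i])
        baseline = 0.9 * baseline + 0.1 * r
        adv = r - baseline
        # gradient of log softmax: +1 for the sampled index, -p_j for all j
        for j in range(len(theta)):
            theta[j] += lr * adv * ((1.0 if j == i else 0.0) - probs[j])
    return softmax(theta)

candidates = [(0, 1, 2, 3), (2, 0, 3, 1), (3, 2, 1, 0)]
probs = train(candidates)
```

After training, the policy concentrates probability on the coherent ordering `(0, 1, 2, 3)`, since it is the only candidate whose adjacent pairs all follow the original order. The full method replaces this toy setup with a neural reward over generated text and a sequence-level policy gradient.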

Proceedings of the 16th Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)