Contents
1.1.1 Generating Text from Meaning Representations
1.1.2 Generating Text from Data
1.1.3 Generating Text from Text
2.2 Meaning Representations-to-Text Generation
2.2.1 Grammar-Centric Approaches
2.2.2 Statistical MR-to-Text Generation
2.3.1 Sentence Simplification and Sentence Compression
3.1.1 Convolutional Neural Networks
3.1.2 Recurrent Neural Networks
3.2 The Encoder-Decoder Framework
3.2.1 Learning Input Representations with Bidirectional RNNs
3.2.2 Generating Text Using Recurrent Neural Networks
3.2.3 Training and Decoding with Sequential Generators
3.3 Differences with Pre-Neural Text-Production Approaches
5 Building Better Input Representations
5.1 Pitfalls of Modelling Input as a Sequence of Tokens
5.1.1 Modelling Long Text as a Sequence of Tokens
5.1.2 Modelling Graphs or Trees as a Sequence of Tokens
5.1.3 Limitations of Sequential Representation Learning
5.2.1 Modelling Documents with Hierarchical LSTMs
5.2.2 Modelling Documents with Ensemble Encoders
5.2.3 Modelling Documents with Convolutional Sentence Encoders