Title: Deep Learning Approaches to Text Production
Author: Shashi Narayan
Publisher: Ingram
Genre: Software
Series: Synthesis Lectures on Human Language Technologies
ISBN: 9781681738215
3.6 Sketches of LSTM and GRU cells
3.7 Two-dimensional representation of word embeddings
3.8 RNN-based encoder-decoder architecture
3.9 The German word “Zeit” with its two translations
3.10 Bidirectional RNNs applied to a sentence
3.11 RNN decoding steps (Continues.)
3.12 (Continued.) RNN decoding steps
3.13 RNN decoding: conditional generation at each step
4.1 Example input/output with missing, added, or repeated information
4.2 Focusing on the relevant source word
4.3 Sketch of the attention mechanism
4.4 Example delexicalisations from the E2E and WebNLG data sets
4.5 Interactions between slot values and sentence planning
4.6 Example of generated text containing repetitions
4.7 The impact of coverage on repetition
4.8 Evolution of the DA vector as generation progresses
5.1 Bidirectional RNN modelling document as a sequence of tokens
5.2 Linearising AMR for text production
5.3 Linearising RDF to prepare input-output pairs for text production
5.4 Linearising dialogue moves for response generation
5.5 Linearising Wikipedia descriptions for text generation
5.6 Hierarchical document representation for abstractive document summarisation
5.7 Multi-agent document representation for abstractive document summarisation
5.8 Communication among multiple agents, each encoding a paragraph
5.9 Extractive summarisation with a hierarchical encoder-decoder model
5.10 Graph-state LSTMs for text production from AMR graphs
5.11 Graph-triple encoder for text production from RDF triple sets
5.12 Graph convolutional networks for encoding a sentence
6.1 An example of abstractive sentence summarisation
6.2 Selective encoding for abstractive sentence summarisation
6.3 Heat map learned with the selective gate mechanism
6.4 Two-step process for content selection and summary generation
6.5 Graph-based attention to select salient sentences for abstractive summarisation
6.6 Generating biography from Wikipedia infobox
6.7 Example of word-property alignments for the Wikipedia abstract and facts
6.8 The exposure bias in cross-entropy trained models
6.9 Text production as a reinforcement learning problem
6.10 Curriculum learning in application
6.11 Deep reinforcement learning for sentence simplification
6.12 Extractive summarisation with reinforcement learning
6.13 Inconsistent responses generated by a sequence-to-sequence model
6.14 Single-speaker model for response generation
6.15 Examples of speaker consistency and inconsistency in the speaker model
6.16 Responses to “Do you love me?” from the speaker-addressee model
7.1 Infobox/text example from the WikiBio data set
7.2 Example data-document pair from the extended WikiBio data set