Title: Natural Language Processing for the Semantic Web
Author: Diana Maynard
Publisher: Ingram
Genre: Software
Series: Synthesis Lectures on the Semantic Web: Theory and Technology
ISBN: 9781627056328
9.2 Semantic-Based User Modeling
9.2.1 Constructing Social Semantic User Models from Semantic Annotations
9.2.2 Discussion
9.3 Filtering and Recommendations for Social Media Streams
9.4 Browsing and Visualization of Social Media Streams
9.5 Discussion and Future Work
10.1 Summary
10.2 Future Directions
10.2.1 Cross-media Aggregation and Multilinguality
10.2.2 Integration and Background Knowledge
10.2.3 Scalability and Robustness
10.2.4 Evaluation, Shared Datasets, and Crowdsourcing
Acknowledgments
This work was supported by funding from PHEME, DecarboNet, COMRADES, uComp, and the Engineering and Physical Sciences Research Council (grant EP/I004327/1). The authors also wish to thank colleagues from the GATE team, listed here in alphabetical order: Hamish Cunningham, Leon Derczynski, Genevieve Gorrell, Mark Greenwood, Wim Peters, Johann Petrak, Angus Roberts, Ian Roberts, Dominic Rout; and the many other colleagues who have contributed fruitful discussions.
Diana Maynard, Kalina Bontcheva, and Isabelle Augenstein
November 2016
CHAPTER 1
Introduction
Natural Language Processing (NLP) is the automatic processing of text written in natural (human) languages (English, French, Chinese, etc.), as opposed to artificial languages such as programming languages, in order to “understand” it. It is also known as Computational Linguistics (CL) or Natural Language Engineering (NLE). NLP encompasses a wide range of tasks, from low-level tasks, such as segmenting text into sentences and words, to high-level complex applications such as semantic annotation and opinion mining. The Semantic Web is about adding semantics, i.e., meaning, to data on the Web, so that web pages can be processed and manipulated by machines more easily. One central aspect of the idea is that resources are described using unique identifiers, called Uniform Resource Identifiers (URIs). Resources can be entities, such as “Barack Obama,” concepts, such as “Politician,” or relations describing how entities relate to one another, such as “spouse-of.” NLP techniques provide a way to enhance web data with semantics, for example by automatically adding information about entities and relations, and by understanding which real-world entities are referenced so that a URI can be assigned to each entity.
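To make this concrete, the sketch below builds a pair of such URI-based statements. The rdflib library and the DBpedia identifiers are illustrative assumptions on our part; the idea does not depend on any particular toolkit or vocabulary.

```python
# A minimal sketch of describing resources with URIs: the entity
# "Barack Obama" is typed as a Politician and linked to "Michelle
# Obama" via a spouse relation. rdflib and the DBpedia URIs are
# illustrative choices, not prescribed by the text.
from rdflib import Graph, Namespace, RDF

dbr = Namespace("http://dbpedia.org/resource/")
dbo = Namespace("http://dbpedia.org/ontology/")

g = Graph()
g.add((dbr["Barack_Obama"], RDF.type, dbo["Politician"]))
g.add((dbr["Barack_Obama"], dbo["spouse"], dbr["Michelle_Obama"]))

# Serialize as N-Triples to see the machine-readable statements.
print(g.serialize(format="nt"))
```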
The goal of this book is to introduce readers working with, or interested in, Semantic Web technologies to the topic of NLP and its role and importance in the field of the Semantic Web. Although the field of NLP existed long before the advent of the Semantic Web, it is only in recent years that its importance here has really come to the fore, in particular as Semantic Web technologies move toward more application-oriented realizations. The purpose of this book is therefore to explain the role of NLP and to give readers some background understanding of the NLP tasks that are most important for Semantic Web applications, along with some guidance on choosing the methods and tools that best fit a particular scenario. Ultimately, the reader should come away armed with the knowledge to understand the main principles and, if necessary, to choose suitable NLP technologies to enhance their Semantic Web applications.
The overall structure of the book is as follows. We first describe some of the core low-level components, in particular those which are commonly found in open source NLP toolkits and used widely in the community. We then show how these tools can be combined and used as input for the higher-level tasks such as Information Extraction, semantic annotation, social media analysis, and opinion mining, and finally how applications such as semantically enhanced information retrieval and visualization, and the modeling of online communities, can be built on top of these.
One point we should make clear is that when we talk about NLP in this book, we are referring principally to the subtask of Natural Language Understanding (NLU) and not to the related subtask of Natural Language Generation (NLG). While NLG is a useful task that is also relevant to the Semantic Web, for example in relaying the results of some application back to the user in a way that they can easily understand, and particularly in systems that require voice output of results, it falls outside the remit of this book, as it employs some very different techniques and tools. Similarly, there are a number of other tasks which typically fall under the category of NLP but are not discussed here, in particular those concerned with speech rather than written text. However, many applications for both speech processing and natural language generation make use of the low-level NLP tasks we describe. There are also some high-level NLP-based applications that we do not cover in this book, such as Summarization and Question Answering, although again these make use of the same low-level tools.
Most early NLP tools such as parsers (e.g., Schank’s conceptual dependency parser [1]) were rule-based, due partly to the predominance of certain linguistic theories (primarily those of Noam Chomsky [2]), but also due to the lack of computational power, which made machine learning methods infeasible. In the 1980s, machine learning systems started coming to the fore, but were still mainly used just to automatically create sets of rules similar to existing manually developed rule systems, using techniques such as decision trees. As statistical models became more popular, particularly in fields such as Machine Translation and Part-of-Speech tagging, where hard rule-based systems were often not sufficient to resolve ambiguities, Hidden Markov Models (HMMs) became popular, introducing the idea of weighted features and probabilistic decision-making. In the last few years, deep learning and neural networks have also become very popular, following their spectacular success in the field of image recognition and computer vision (for example in the technology behind self-driving cars), although their success for NLP tasks is currently nowhere near as dramatic. Deep learning is essentially a branch of Machine Learning that uses multiple hierarchical levels of features that are learned in an unsupervised fashion. This makes it very suitable for working with big data, because it is fast and efficient, and does not require the manual creation of training data, unlike supervised machine learning systems. However, as will be demonstrated throughout the course of this book, one of the problems of NLP is that tools almost always need adapting to specific domains and tasks, and for real-world applications this is often easier with rule-based systems. In most cases, combinations of different methods are used, depending on the task.
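To illustrate the kind of probabilistic decision-making that HMMs introduced, the toy part-of-speech tagger below uses the Viterbi algorithm to pick the most likely tag sequence. The tag set, vocabulary, and all probabilities are invented purely for illustration; a real tagger would estimate these parameters from an annotated corpus.

```python
# Toy illustration of HMM-style probabilistic decision-making for
# part-of-speech tagging. All probabilities are invented for
# illustration only; a real tagger would learn them from data.
tags = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {
    "DET":  {"DET": 0.05, "NOUN": 0.9,  "VERB": 0.05},
    "NOUN": {"DET": 0.1,  "NOUN": 0.3,  "VERB": 0.6},
    "VERB": {"DET": 0.5,  "NOUN": 0.4,  "VERB": 0.1},
}
emit = {
    "DET":  {"the": 0.9, "dog": 0.0, "barks": 0.0},
    "NOUN": {"the": 0.0, "dog": 0.8, "barks": 0.2},
    "VERB": {"the": 0.0, "dog": 0.1, "barks": 0.9},
}

def viterbi(words):
    """Return the most probable tag sequence for a list of words."""
    # V[t] holds the best probability of any tag path ending in t;
    # path[t] holds that path itself.
    V = {t: start[t] * emit[t][words[0]] for t in tags}
    path = {t: [t] for t in tags}
    for w in words[1:]:
        V2, path2 = {}, {}
        for t in tags:
            # Choose the predecessor tag that maximizes the joint score.
            best_prev = max(tags, key=lambda p: V[p] * trans[p][t])
            V2[t] = V[best_prev] * trans[best_prev][t] * emit[t][w]
            path2[t] = path[best_prev] + [t]
        V, path = V2, path2
    best = max(tags, key=lambda t: V[t])
    return path[best]

print(viterbi(["the", "dog", "barks"]))  # -> ['DET', 'NOUN', 'VERB']
```

Note how the decision for “barks” (which could emit from NOUN or VERB) is resolved by weighing emission probabilities against the transition probability from the preceding tag, rather than by a hard rule.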
1.1 INFORMATION EXTRACTION
Information extraction is the process of extracting information and turning it into structured data. This may include populating a structured knowledge source with information from an unstructured knowledge source [3].
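As a small concrete sketch, the code below turns an unstructured sentence into structured entity records of the kind that could populate a knowledge source. The use of spaCy and its en_core_web_sm model is our assumption for illustration; the task itself is independent of any particular tool.

```python
# A minimal sketch of information extraction: recognizing named
# entities in unstructured text and emitting structured records.
# spaCy and its small English model are illustrative choices,
# not prescribed by the text.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the model to be installed
doc = nlp("Barack Obama was born in Hawaii and served as President "
          "of the United States.")

# Each recognized entity becomes a (text, type) record.
records = [{"text": ent.text, "type": ent.label_} for ent in doc.ents]
for record in records:
    print(record)
# Expected output includes records such as
# {'text': 'Barack Obama', 'type': 'PERSON'} and
# {'text': 'Hawaii', 'type': 'GPE'}.
```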