Ontology Engineering. Elisa F. Kendall

… in artificial intelligence, including work in areas of semantic networks, question-answering, neural networks, formal linguistics and natural language processing, theorem proving, and expert systems.

      The term knowledge representation is often used to talk about the representation of information for consumption by machines, although “good” knowledge representations should also be readable by people. Every KR language has a number of features, most of which are common to software engineering, query, and other languages. They include: (1) a vocabulary, consisting of some set of logical symbols and reserved terms plus variables and constants; (2) a syntax that provides rules for combining the symbols into well-formed expressions; (3) a formal semantics, including a theory of reference that determines how the constants and variables are associated with things in the universe of discourse and a theory of truth that distinguishes true statements from false ones; and (4) rules of inference that determine how one pattern can be inferred from another. If the logic is sound, the rules of inference must preserve truth as determined by the semantics. It is this fourth element, the rules of inference and the ability to infer new information from what we already know, that distinguishes KR languages from others.
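      To make these four elements concrete, here is a minimal sketch in Python using the rdflib library; the namespace, class names, and individual names are invented for the example, not taken from any particular ontology. The vocabulary is the set of terms in the ex: namespace, the syntax is RDF’s subject-predicate-object triples, and a single RDFS-style rule of inference (class membership propagates up the subclass hierarchy) is applied by hand to derive a statement that was never explicitly asserted.

       from rdflib import Graph, Namespace
       from rdflib.namespace import RDF, RDFS

       EX = Namespace("http://example.org/plants#")   # vocabulary: constants and terms
       g = Graph()
       g.bind("ex", EX)

       # Syntax: well-formed expressions are subject-predicate-object triples.
       g.add((EX.FloweringPlant, RDFS.subClassOf, EX.Plant))
       g.add((EX.rose1, RDF.type, EX.FloweringPlant))

       # A rule of inference: if x is an instance of C, and C is a subclass of D,
       # then x is an instance of D.
       inferred = []
       for x, _, c in g.triples((None, RDF.type, None)):
           for _, _, d in g.triples((c, RDFS.subClassOf, None)):
               inferred.append((x, RDF.type, d))
       for triple in inferred:
           g.add(triple)

       print((EX.rose1, RDF.type, EX.Plant) in g)   # True: rose1 is inferred to be a Plant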

      Many logic languages and their dialects have been used for KR purposes. They vary from classical first order logic (FOL) in terms of: (1) their syntax; (2) the subsets of FOL they implement (for example, propositional logic, which excludes quantifiers; Horn-clause logic, which excludes disjunctions in conclusions, as in Prolog; and terminological or definitional logics, which contain additional restrictions); (3) their proof theory, such as monotonic or non-monotonic logic (the latter allows defaults), modal logic, temporal logic, and so forth; and (4) their model theory, which, as we mentioned above, determines how expressions in the language are evaluated with respect to some model of the world.
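      As a small illustration of the Horn-clause restriction, the following Python sketch applies a rule whose body is a conjunction of premises and whose head is a single positive conclusion, in the style of a Prolog rule; the facts and predicate names are invented for the example.

       # parent(X, Y) AND parent(Y, Z) -> grandparent(X, Z)
       facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

       def grandparent_rule(facts):
           """Horn clause: conjunctive body, one positive head, no disjunction."""
           derived = set()
           for pred1, x, y in facts:
               for pred2, y2, z in facts:
                   if pred1 == pred2 == "parent" and y == y2:
                       derived.add(("grandparent", x, z))
           return derived

       facts |= grandparent_rule(facts)
       print(("grandparent", "ann", "cal") in facts)   # True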

      Classical FOL is two-valued (Boolean); a three-valued logic introduces unknowns; four-valued logic introduces inconsistency. Fuzzy logic uses the same notation as FOL but with an infinite range of certainty factors (0.0–1.0). There are also differences in the built-in vocabularies of KR languages: basic Common Logic (ISO/IEC 24707:2018) is a tight, first-order language with little built-in terminology, whereas the Web Ontology Language (Bao et al., 2012) includes support for some aspects of set theory.10
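      The following owlready2 sketch shows one piece of OWL’s set-theoretic vocabulary, declaring a class equivalent to the union of two others (owl:unionOf behind the scenes); the ontology IRI and class names are hypothetical and deliberately simplified.

       from owlready2 import get_ontology, Thing

       onto = get_ontology("http://example.org/demo.owl")   # hypothetical IRI

       with onto:
           class Conifer(Thing): pass
           class FloweringPlant(Thing): pass
           class SeedPlant(Thing): pass

           # Set-theoretic construct: SeedPlant is declared equivalent to the
           # union of Conifer and FloweringPlant (a simplification for the example).
           SeedPlant.equivalent_to.append(Conifer | FloweringPlant)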

      Description logics (DLs) are a family of logic-based formalisms that represent a subset of first order logic. They were designed to provide a “sweet spot” in that they have a reasonable degree of expressiveness on the ontology spectrum, while not having so much expressive power that it is difficult to build efficient reasoning engines for them. They enable specification of ontologies in terms of concepts (classes), roles (relationships), and individuals (instances).
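      In owlready2, these three building blocks map onto Python classes, property classes, and instances. A minimal sketch, with an illustrative IRI and names, might look like the following.

       from owlready2 import get_ontology, Thing, ObjectProperty

       onto = get_ontology("http://example.org/plants.owl")

       with onto:
           class FloweringPlant(Thing): pass       # a concept (class)
           class Bloom(Thing): pass                # another concept
           class hasBloom(ObjectProperty):         # a role (relationship)
               domain = [FloweringPlant]
               range  = [Bloom]

           rose  = FloweringPlant("rose1")         # an individual (instance)
           bloom = Bloom("rose1_bloom")
           rose.hasBloom.append(bloom)             # a role assertion between individuals

       onto.save(file="plants.owl", format="rdfxml")   # serialize the ontology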

      Description logics are distinguished by (1) the fact that they have a formal semantics, representing decidable fragments of first order logic, and (2) their provisions for inference services, which include sound and complete decision procedures for key problems. By decidable, we mean that there are effective algorithms for determining the truth value of the expressions stated in the logic. Description logics are highly optimized to support specific kinds of reasoning for implementation in operational systems.11
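      The sketch below, again using owlready2 (which invokes the HermiT reasoner and therefore needs Java available), shows such a decision procedure at work: the reasoner concludes that an individual belongs to a defined class even though that membership was never asserted. All names are illustrative.

       from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

       onto = get_ontology("http://example.org/plants.owl")

       with onto:
           class Bloom(Thing): pass
           class hasBloom(ObjectProperty): pass

           # A defined class: anything with at least one Bloom is a FloweringPlant.
           class FloweringPlant(Thing):
               equivalent_to = [hasBloom.some(Bloom)]

           rose = Thing("rose")
           rose.hasBloom.append(Bloom("rose_bloom"))

           # Sound and complete classification: rose is inferred to be a
           # FloweringPlant, although that was never stated explicitly.
           sync_reasoner()

       print(rose.is_a)   # should now include FloweringPlant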

      Example types of applications of description logics include:

      • configuration systems—product configurators, consistency checking, constraint propagation, etc., whose first significant industrial application was called PROSE (McGuinness and Wright, 1998) and used the CLASSIC knowledge representation system, a description logic, developed by AT&T Bell Laboratories in the late 1980s (Borgida et al., 1989);

      • question answering and recommendation systems, for suggesting sets of responses or options depending on the nature of the queries; and

      • model engineering applications, including those that involve analysis of the ontologies or other kinds of models (systems engineering models, business process models, and so forth) to determine whether or not they meet certain methodological or other design criteria.

      An ontology is a conceptual model of some aspect of a particular universe of discourse (or of a domain of discourse). Typically, ontologies contain only “rarefied” or “special” individuals, representing elemental concepts critical to the domain. In other words, they are composed primarily of concepts, relationships, and axiomatic expressions.

      One of the questions that we are often asked is, “What is the difference between an ontology and a knowledge base?” Sometimes people refer to the knowledge base as excluding the ontology and only containing the information about individuals along with their metadata, for example, the triples in a triple store without a corresponding schema. In other words, the ontology is separately maintained. In other cases, a knowledge base is considered to include both the ontology and the individuals (i.e., the triples in the case of a Semantic Web-based store). The ontology provides the schema and rules for interpretation of the individuals, facts, and other rules comprising the domain knowledge.

      A knowledge graph typically contains both the ontology and related data. In practice, we have found that it is important to keep the ontology and data as separate resources, especially during development. Maintaining them separately and combining them only in knowledge graphs and/or applications makes both easier to manage. Once established, ontologies tend to evolve slowly, whereas the data on which applications depend may be highly volatile. Data for well-known code sets, which might change less frequently than some data sets, can be managed in the form of “OWL ontologies,” but, even in these cases, the individuals should be kept separate from the ontology defining them to aid in testing, debugging, and integration with other code sets. These data resources are not ontologies in their own right, although they might be identified with their own namespace, etc.
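      As a sketch of this separation in practice, the following Python/rdflib fragment loads the ontology and the instance data from separate (hypothetical) Turtle files and only merges them into a combined knowledge graph when the application needs both.

       from rdflib import Graph

       ontology = Graph().parse("plants-ontology.ttl", format="turtle")   # schema only
       data     = Graph().parse("plants-data.ttl", format="turtle")       # individuals only

       # Merge the two resources into a single knowledge graph for the application;
       # each file remains independently maintainable, testable, and versionable.
       knowledge_graph = Graph()
       for source in (ontology, data):
           for triple in source:
               knowledge_graph.add(triple)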

      Most inference engines, including commercially available reasoners, require in-memory deductive databases for efficient reasoning. The knowledge base may be implemented in a physical, external database, such as a triple store, graph database, or relational database, but reasoning is typically done on a subset (partition) of that knowledge base in memory.
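      For example, with rdflib one might pull just the relevant partition of a larger store into an in-memory graph before reasoning over it; the file name, namespace, and class used to select the partition are all hypothetical.

       from rdflib import Graph

       full_kb = Graph()
       full_kb.parse("knowledge_base.ttl", format="turtle")   # contents of the external store

       # Extract only the statements about flowering plants into a separate
       # in-memory graph, which is what the reasoner actually works on.
       partition = Graph()
       subset = full_kb.query("""
           PREFIX ex: <http://example.org/plants#>
           CONSTRUCT { ?s ?p ?o }
           WHERE { ?s a ex:FloweringPlant ; ?p ?o . }
       """)
       for triple in subset:
           partition.add(triple)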

      Reasoning is the mechanism by which the logical assertions made in an ontology and related knowledge base are evaluated by an inference engine. For the purposes of this discussion, a logical assertion is simply an explicit statement that declares that a certain premise is true. A collection of logical assertions, taken together, forms a logical theory. A consistent theory is one that does not contain any logical contradictions. This means that there is at least one interpretation of the theory in which all of the assertions are provably true. Reasoning is used to check for contradictions in a collection of assertions. It can also provide a way of finding information that is implicit in what has been stated. In classical logic, the validity of a particular conclusion is retained even if new information is asserted in the knowledge base. This may change if some of the prior knowledge, or preconditions, are actually hypothetical assumptions that are invalidated by the new information. The same idea applies to arbitrary actions—new information can make preconditions invalid.
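      A small owlready2 sketch of consistency checking: two classes are declared disjoint, an individual is asserted to belong to both, and the reasoner (HermiT, requiring Java) reports that no interpretation can make all of the assertions true. The names are illustrative.

       from owlready2 import (get_ontology, Thing, AllDisjoint, sync_reasoner,
                              OwlReadyInconsistentOntologyError)

       onto = get_ontology("http://example.org/demo.owl")

       with onto:
           class Plant(Thing): pass
           class Animal(Thing): pass
           AllDisjoint([Plant, Animal])        # Plant and Animal cannot overlap

           confused = Plant("confused")
           confused.is_a.append(Animal)        # contradicts the disjointness axiom

       try:
           sync_reasoner()
       except OwlReadyInconsistentOntologyError:
           print("Inconsistent theory: no interpretation satisfies every assertion.")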

      Reasoners work by using the rules of inference to look for the “deductive closure” of the information they are given. They take the explicit statements and the rules of inference and apply those rules to the explicit statements until there are no more inferences they can make. In other words, they find any information that is implicit among the explicit statements. For example, from the following statement about flowering plants, if it has been asserted that x is a flowering plant, then a reasoner can infer that x has a bloom y, and that y has a characteristic which includes a bloom color z:

       (forall ((x FloweringPlant))
          (exists ((y Bloom) (z BloomColor))
             (and (hasBloom x y) (hasCharacteristic y z))))
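      A rough Python/rdflib sketch of what a reasoner does with a rule like this one: starting from the explicit statement that some individual is a flowering plant, it applies the rule repeatedly until no new statements appear, which is the deductive closure. The namespace, term names, and the naming scheme used for the inferred bloom and color are illustrative only.

       from rdflib import Graph, Namespace
       from rdflib.namespace import RDF

       EX = Namespace("http://example.org/plants#")
       g = Graph()
       g.add((EX.plant1, RDF.type, EX.FloweringPlant))      # the explicit statement

       def flowering_plant_rule(graph):
           """For each FloweringPlant x, derive a bloom y and a bloom color z."""
           derived = []
           for x in graph.subjects(RDF.type, EX.FloweringPlant):
               name = str(x).split("#")[-1]
               y = EX[name + "_bloom"]
               z = EX[name + "_bloomColor"]
               derived += [(x, EX.hasBloom, y),
                           (y, RDF.type, EX.Bloom),
                           (y, EX.hasCharacteristic, z),
                           (z, RDF.type, EX.BloomColor)]
           return derived

       size = -1
       while len(g) != size:        # iterate to a fixed point: the deductive closure
           size = len(g)
           for triple in flowering_plant_rule(g):
               g.add(triple)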