Statistical Relational Artificial Intelligence. Luc De Raedt


Agents can be uncertain about what properties individuals have, what relations are true, what individuals exist, whether different terms denote the same individual, and the dynamics of the world.

      The basic building blocks of StarAI are relational probabilistic models—we use this term in the broad sense, meaning any models that combine relations and probabilities. They can be seen as combinations of probability and predicate calculus that allow for individuals and relations as well as probabilities. In building on top of relational models, StarAI goes far beyond reasoning, optimization, learning, and acting optimally in terms of a fixed number of features or variables, as is typically studied in machine learning, constraint satisfaction, probabilistic reasoning, and other areas of AI. In doing so, StarAI has the potential to make AI more robust and efficient.

      This book aims to provide an introduction that can help newcomers to the field get started, understand the state-of-the-art and the current challenges, and be ready for future advances. It reviews the foundations of StarAI, motivates the issues, justifies some choices that have been made, and provides some open problems. Laying bare the foundations will hopefully inspire others to join us in exploring the frontiers and the yet unexplored areas.

      The target audience for this book consists of advanced undergraduate and graduate students as well as researchers and practitioners who want to get an overview of the basics and the state-of-the-art in StarAI. To this end, Part I starts by providing the necessary background in probability and logic. We then discuss the representations of relational probability models and the underlying issues. Afterward, we focus first on inference, in Part II, and then on learning, in Part III. Finally, we touch upon relational tasks that go beyond the basic probabilistic inference and learning tasks as well as some open issues.

      Researchers who are already working on StarAI—we apologize to anyone whose work we are accidentally not citing—may enjoy reading about parts of StarAI with which they are less familiar.

      Since StarAI draws upon ideas developed within many different fields, it can be quite challenging for newcomers to get started.

      One of the challenges of building on top of multiple traditions is that they often use the same vocabulary to mean different things. Common terms such as “variable,” “domain,” “object,” “relation,” and “parameter” have come to have accepted meanings in mathematics, computing, statistics, logic, and probability, but their meanings in each of these areas are different enough to cause confusion. We will be clear about the meaning of these when using them. For instance, we follow the logic tradition and use the term “individuals” for things. They are also called “objects,” but that terminology is often confusing to people who have been brought up with object-oriented programming, where objects are data structures and associated methods. For example, a person individual is a real person and not a data structure that encapsulates information about a person. A computer is not uncertain about its own data structures, but can be uncertain about what exists and what is true in the world.

      Another confusion stems from the term “relational.” Most existing datasets are, or can be, stored in relational databases. Existing machine learning techniques typically learn from datasets stored in relational databases where the values are Boolean, discrete, ordinal, or continuous. However, in many datasets the values are the names of individuals, as in the following example.

      Figure 1.2: An example dataset that is not amenable to traditional classification.

      Example 1.1 Consider learning from the dataset in Fig. 1.2. The values of the Student and the Course attributes are the names of the students (s1, s2, s3 and s4) and the courses (c1, c2, c3 and c4). The value of the grade here is an ordinal (a is better than b, which is better than c). Assume that the task is to predict the grade of students on courses, for example predicting the grade of students s3 and s4 on course c4. There is no information about course c4, and students s3 and s4 have the same average (they both have one “b”); however, it is still possible to predict that one will do better than the other in course c4. This can be done by learning how difficult each course is, and how smart each student is, given the data about students, the courses they take, and the grades they obtain. For example, we may learn that s1 is intelligent, s2 is not as intelligent, course c2 is difficult and course c3 is not difficult, etc. This model then allows for the prediction that s3 will do better than s4 in course c4.
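      The reasoning in Example 1.1 can be sketched with a minimal additive model that learns one “ability” parameter per student and one “easiness” parameter per course. This is an illustrative sketch, not the book’s method; the grade rows below are a hypothetical reconstruction consistent with the text (s3’s “b” is on the hard course c2, s4’s “b” is on the easy course c3).

```python
from collections import defaultdict

# Hypothetical dataset consistent with Example 1.1.
data = [("s1", "c1", "a"), ("s2", "c1", "c"),
        ("s1", "c2", "b"), ("s2", "c3", "b"),
        ("s3", "c2", "b"), ("s4", "c3", "b")]
score = {"a": 2.0, "b": 1.0, "c": 0.0}  # ordinal grades as numbers

ability = defaultdict(float)   # one parameter per student
easiness = defaultdict(float)  # one parameter per course

# Stochastic gradient descent on (ability[s] + easiness[c] - score)^2:
# each observed grade updates the student and the course it involves.
for _ in range(2000):
    for s, c, g in data:
        err = ability[s] + easiness[c] - score[g]
        ability[s] -= 0.05 * err
        easiness[c] -= 0.05 * err

# For the unseen course c4 both students face the same unknown easiness,
# so the predicted ranking is decided by the learned abilities alone.
print(ability["s3"] > ability["s4"])  # prints True
```

Because information about a student is shared across all of that student’s rows, the model can rank s3 above s4 on a course neither has taken—exactly what a single-table classifier cannot do.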

      Standard textbook supervised learning algorithms that learn, e.g., a decision tree, a neural network, or a support vector machine (SVM) to predict grade are not appropriate; they can handle ordinals, but cannot handle the names of students and courses. It is the relationship among the individuals that provides the generalizations from which to learn. Traditional classifiers are unable to take into account such relations. This also holds for learning standard graphical models, such as Bayesian networks. These approaches make what can be seen as a single-table single-row assumption, which requires that each instance is described in a single row by a fixed set of features and all instances are independent of one another (given the model). This clearly does not hold in this dataset as the information about student s1 is spread over multiple rows, and that about course c1 as well. Furthermore, tests on student = s1 or course = c3 would be meaningless if we want to learn a model that generalizes to new students.

      StarAI approaches take into account the relationships among the individuals as well as deal with uncertainty.

      The benefits of combining logical abstraction and relations with probability and statistics are manifold.

      • When learning a model from data, relational and logical abstraction allows one to reuse experience: learning about one individual improves the predictions for other individuals. Such models can generalize to individuals that have never been observed before.

      • Logical variables, which are placeholders for individuals, allow one to make abstractions that apply to all individuals that have some common properties.

      • By using logical variables and unification, one can specify and reason about regularities across different situations using rules and templates rather than having to specify them for each single entity separately.

      • The employed and/or learned knowledge is often declarative and compact, which potentially makes it easier for people to understand and validate.

      • In many applications, background knowledge about the domain can be represented in terms of probability and/or logic. Background knowledge may improve the quality of learning: the logical aspects may focus the search on the relevant patterns, thus restricting the search space, while the probabilistic components may provide prior knowledge that can help avoid overfitting.
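      The role of logical variables and templates in the points above can be illustrated with a minimal grounding sketch. The rule, predicate names, and individuals here are invented for illustration; real StarAI systems perform this substitution via unification.

```python
from itertools import product

# A hypothetical rule template: passes(X, Y) :- smart(X), easy(Y).
# X and Y are logical variables, i.e., placeholders for individuals.
rule_head = ("passes", "X", "Y")
rule_body = [("smart", "X"), ("easy", "Y")]

students = ["s1", "s2", "s3"]
courses = ["c1", "c2"]

def substitute(atom, theta):
    """Apply a substitution (logical variable -> individual) to one atom."""
    return tuple(theta.get(t, t) for t in atom)

# Grounding: one instance of the rule per pair of individuals, instead of
# writing a separate rule for every student/course combination.
ground_rules = [
    (substitute(rule_head, theta), [substitute(a, theta) for a in rule_body])
    for s, c in product(students, courses)
    for theta in [{"X": s, "Y": c}]
]

print(len(ground_rules))  # prints 6: one grounding per (student, course) pair
```

A single two-line template thus stands for as many ground rules as there are pairs of individuals, which is what makes the representation declarative and compact.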

      Relational and logical abstraction have the potential to make statistical AI more robust and efficient. Incorporating uncertainty makes relational models