PANN: A New Artificial Intelligence Technology. Tutorial. Boris Zlotin

…was generally between 6 and 10.

      • The number of inputs equals the number of members of the digital sequence in question; for images in raster graphics, the number of pixels must be the same for all images under consideration. For example, at a resolution of 16 × 16 the number of inputs is I = 256; at a resolution of 32 × 32 it is I = 1024. Rectangular images of any aspect ratio can be used. Note that each type of image has its own optimal resolution for effective recognition, which is easy to determine by simple testing. Here an unexpected property of PANN manifests itself – the optimal number of pixels for recognition is usually small; for example, for recognizing portraits of various kinds, the best resolution can be 32 × 32 (a short preprocessing sketch follows below).
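      As an illustration, here is a minimal preprocessing sketch (using Pillow and NumPy) for turning an arbitrary image into a flat input vector of length I; the 32 × 32 target resolution and the grayscale conversion are illustrative assumptions, not requirements of PANN.

```python
from PIL import Image
import numpy as np

WIDTH, HEIGHT = 32, 32  # assumed working resolution; I = 32 * 32 = 1024 inputs


def image_to_input_vector(path: str) -> np.ndarray:
    """Resize an image to the chosen resolution and flatten it into a vector of pixel levels."""
    img = Image.open(path).convert("L")        # grayscale; RGB channels could be used instead
    img = img.resize((WIDTH, HEIGHT))
    return np.asarray(img, dtype=np.uint8).flatten()  # length I = WIDTH * HEIGHT


# vec = image_to_input_vector("portrait.png")  # vec.shape == (1024,)
```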

      Fig. 2. Single-neuron two-level PANN network

      Fig. 3. Single-neuron multi-level PANN network

      2.2. PROGRESS NEURON TRAINING

      Training a PANN network is much easier than training any classical network.

      The difficulty of training classical neural networks stems from the fact that when several different images are trained on the same network, they affect each other’s synaptic weights and introduce mutual distortions into the training. Therefore, one must select weights so that the resulting set corresponds to all images simultaneously. To do this, the gradient descent method is used, which requires many iterative calculations.

      A fundamentally different approach was developed for training the PANN network: «One neuron, one image,» in which each neuron is trained on its own image. As a result, there are no mutual influences between different neurons, and training becomes fast and accurate.

      Training a Progress neuron on a specific image boils down to the distributor determining the level of each input signal (in the simplest case, its amplitude or RGB value) and closing the switch corresponding to the weight range into which this value falls.
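      A minimal sketch of this training rule, assuming K weight levels of equal width over the 0–255 range of pixel values (both the value of K and the uniform split are illustrative assumptions): each input value selects one of K binary weights, and that weight is switched to 1.

```python
import numpy as np

K = 8  # assumed number of weight levels; in practice the optimal K is found by testing


def train_progress_neuron(image_vector: np.ndarray, k: int = K) -> np.ndarray:
    """Return a k x I binary weight matrix: exactly one weight per input is switched on
    ("one neuron, one image")."""
    n_inputs = image_vector.size
    weights = np.zeros((k, n_inputs), dtype=np.uint8)
    # The "distributor" (demultiplexer): map each 0..255 value to one of k level ranges.
    levels = image_vector.astype(np.int64) * k // 256
    weights[levels, np.arange(n_inputs)] = 1  # close the switch for the matching range
    return weights
```

      Applied to the vector from the preprocessing sketch above, this produces one trained neuron per image.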

      Fig. 4. Trained single-neuron multi-level PANN network

      The above training procedure of the Progress neuron gives rise to several remarkable properties of the PANN network:

      1. Training does not require computational operations and is very fast.

      2. One neuron’s set of synaptic weights is independent of other neurons. Therefore, the network’s neurons can be trained individually or in groups, and then the trained neurons or groups of neurons can be combined into a network.

      3. The network can be retrained at any time – i.e., neurons can be changed, added, or removed without affecting the neurons untouched by these changes.

      4. A neuron trained on an image can easily be visualized using a simple color code that links the switched-on weight levels to pixel brightness or color (see the sketch below).
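      A minimal sketch of such a visualization, assuming the k × I binary weight matrix from the training sketch above and a 32 × 32 grayscale layout (both illustrative assumptions): the index of the switched-on level in each column is mapped back to a brightness value.

```python
import numpy as np


def visualize_neuron(weights: np.ndarray, width: int = 32, height: int = 32) -> np.ndarray:
    """Recover an approximate image from a trained k x I binary weight matrix."""
    k, n_inputs = weights.shape
    levels = weights.argmax(axis=0)        # row index of the single 1 in each column
    brightness = levels * 255 // (k - 1)   # map the level index back to the 0..255 range
    return brightness.reshape(height, width).astype(np.uint8)
```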

      2.3. THE CURIOUS PARADOX OF PANN

      At first glance, the PANN network looks structurally more complex than classical Artificial Neural Networks. But in reality, PANN is simpler.

      The PANN network is simpler because:

      1. The Rosenblatt neuron has an activation function: its result is processed by a nonlinear logistic (sigmoid) function, an S-curve, etc. This procedure is indispensable, but it complicates the Rosenblatt neuron and makes it nonlinear, which leads to substantial training problems. In contrast, the Progress neuron is strictly linear and does not cause such issues.

      2. The Progress neuron has an additional element, the distributor, which is a simple logic device: a demultiplexer that switches the signal from one input to one of several outputs. In the Rosenblatt neuron, the weights are multi-bit memory cells that must store numbers over a wide range, whereas in PANN the simplest cells (flip-flops) can be used, which store only the numbers 0 and 1.

      3. Unlike classical networks, PANN does not require huge amounts of computer memory or processing power, so inexpensive computers can be used and much less electricity is consumed.

      4. PANN allows you to solve complex problems on a single-layer network.

      5. PANN requires tens or even hundreds of times fewer images in the training set.

      Thus, full-fledged products can be created based on PANN using computer equipment that is inexpensive and economical in terms of energy consumption.

      Fig. 5. Long and expensive training vs. fast and cheap

      2.4. THE MATHEMATICAL BASIS OF RECOGNITION ON THE PROGRESS NEURON

      The linearity of the Progress neuron means that a network built on these neurons is also linear. This ensures the network’s complete transparency and the simplicity of the theory and mathematics that describe it.

      In 1965, Lotfi Zadeh introduced the concept of «fuzzy sets» and the idea of «fuzzy logic.» To some extent, this served as a clue for our work in developing PANN’s mathematical basis and logic. Mathematical operations in PANN aim to compare inexactly matching images and estimate the degree of their divergence in the form of similarity coefficients.

      2.4.1. Definitions

      In 2009, an exciting discovery was made called the «Marilyn Monroe neuron» or, in other sources, «grandmother’s neuron.» In the human mind, knowledge on specific topics is «divided» into individual neurons and neuron groups, which are connected by associative connections so that excitation can be transmitted from one neuron to another. This knowledge and the accepted paradigm of «one neuron, one image» made building the PANN recognition system possible.

      Let’s introduce the «neuron-image» concept – a neuron trained for a specific image. In PANN, each neuron-image is a realized functional dependency (function) Y = f (X), wherein:

      X is a numerical array (vector), and the function f has the following properties:

      for X = A, f (X) = N

      for X ≠ A, f (X) < N

      A is the vector of the specific image on which the neuron was trained.

      N is the dimension of vector X, i.e., the number of digits in this vector.

      This format, called the Binary Comparison Format (BCF), is a rectangular binary digital matrix in which:

      • The number of columns is equal to the length N (the number of digits) of the array.

      • The number of rows equals the number of weight levels K selected for the network.

      • Each significant digit is denoted by a one (1) in the corresponding row, and the absence of a digit is denoted by a zero (0) (a short encoding sketch follows below).

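      As an illustration, here is a minimal sketch of BCF encoding and of comparing two vectors in this format. It assumes – consistently with the properties of the neuron-image given above, though not spelled out in this excerpt – that the neuron’s output is the number of positions where the input’s level coincides with the trained image’s level, so that f (A) = N and f (X) < N for X ≠ A.

```python
import numpy as np


def to_bcf(levels: np.ndarray, k: int) -> np.ndarray:
    """Encode a vector of levels 0..k-1 as a k x N binary matrix with one 1 per column."""
    n = levels.size
    bcf = np.zeros((k, n), dtype=np.uint8)
    bcf[levels, np.arange(n)] = 1
    return bcf


def neuron_output(trained_bcf: np.ndarray, input_bcf: np.ndarray) -> int:
    """Assumed similarity measure: count the columns where the 1s coincide."""
    return int(np.sum(trained_bcf * input_bcf))


K = 4
A = np.array([3, 1, 0, 2])  # the image the neuron was trained on (N = 4)
X = np.array([3, 1, 0, 1])  # a slightly different input

print(neuron_output(to_bcf(A, K), to_bcf(A, K)))  # 4 -> f(A) = N
print(neuron_output(to_bcf(A, K), to_bcf(X, K)))  # 3 -> f(X) < N
```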