Elements applications of artificial intelligence in transport and logistics. Vadim Shmal

Organizations such as The Institute for the Future have a wealth of information on AI and other emerging technologies, the design professions around them, and the talent required to work with those technologies.

      The definition of artificial intelligence has evolved since the concept first emerged; today it is not a black-and-white definition but a continuum. From the 1950s to the 1970s, AI research focused on automating mechanical functions. Researchers such as John McCarthy and Marvin Minsky explored the problems of general computing, general artificial intelligence, reasoning, and memory.

      In 1973, Christopher Chabris and Daniel Simons proposed a thought experiment called The Incompatibility of AI and Human Intelligence. The problem it described was that if an artificial system were so intelligent that it exceeded human capabilities, the system could make whatever decisions it wanted. This would violate the fundamental assumption that people should have the right to make their own choices.

      In the late 1970s and early 1980s, the field shifted from its classical orientation toward computation to the creation of artificial neural networks. Researchers began to look for ways to teach computers to learn rather than merely perform fixed tasks. The field developed rapidly during this period, becoming more scientifically oriented, and its scope expanded from computation to human perception and action.

      Many researchers in the 1970s and 1980s focused on defining the boundaries of human and computer intelligence, or the capabilities required for artificial intelligence. The boundary, they argued, had to be wide enough to cover the full range of human capabilities.

      While the human brain routinely processes vast amounts of data, leading researchers of the time found it difficult to imagine how an artificial brain could process data on that scale. Computers were then primitive devices, able to handle only a small fraction of what a human brain processes.

      During that era, artificial intelligence scientists also began work on algorithms that teach computers to learn from their own experience, a concept similar to how the human brain learns. In parallel, many computer scientists developed search methods that could solve complex problems by exploring a huge number of possible solutions.
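The search idea mentioned above can be illustrated with a minimal sketch. The following backtracking solver for the N-queens puzzle is a generic illustration of exploring a space of candidate solutions, not a reconstruction of any specific historical system:

```python
# Minimal backtracking search: place N queens on an N x N board so that
# no two queens attack each other. The solver explores candidate
# placements row by row and backs up when it reaches a dead end.

def solve_n_queens(n, cols=()):
    """Return one safe placement as a tuple of column indices, or None."""
    row = len(cols)
    if row == n:                      # all rows filled: a solution is found
        return cols
    for col in range(n):
        # check the candidate queen against every queen already placed
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_n_queens(n, cols + (col,))
            if result is not None:    # propagate the first solution found
                return result
    return None                       # dead end: backtrack

solution = solve_n_queens(6)
print(solution)                       # a tuple of 6 column positions
```

Even this toy example shows why early researchers needed clever pruning: the unconstrained space has n**n placements, and the safety check is what keeps the exploration tractable.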

      Artificial intelligence research today continues to focus on automating specific tasks. This emphasis on the automation of cognitive tasks is called «narrow AI». Many researchers working in this field are working on facial recognition, language translation, playing chess, composing music, driving cars, playing computer games, and analyzing medical images. Over the next decade, narrow AI is expected to develop more specialized and advanced applications, including a computer system that can detect early stages of Alzheimer’s disease and analyze cancers.

      The public uses and interacts with artificial intelligence every day, but the value of AI in education and business is often overlooked. AI has significant potential in almost all industries, such as pharmaceuticals, manufacturing, medicine, architecture, law and finance.

      Companies are already using artificial intelligence to improve services and product quality, lower costs, strengthen customer service, and save money on data centers. For example, with robotics software, Southwest Airlines and Amadeus can better answer customer questions and use customer-generated reports to improve their productivity. Overall, AI will affect nearly every industry in the coming decades; by some estimates, about 90% of U.S. jobs will be affected in some way by 2030, though the exact share varies by industry.

      Artificial intelligence can dramatically improve many aspects of our lives. There is a lot of potential for improving health and treating illness and injury, restoring the environment, personal safety, and more. This potential has generated a lot of discussion and debate about its impact on humanity. AI has been shown to match or exceed human performance in a variety of tasks, such as computer vision, speech recognition, language translation, natural language processing, pattern recognition, cryptography, and chess.

      Many of the fundamental technologies developed in the 1960s were largely abandoned by the late 1990s, leaving gaps in the field. Yet the fundamental ideas that define AI today, such as neural networks and the data structures that support them, date from that early work. Many modern artificial intelligence technologies build on these ideas and are far more powerful than their predecessors. While current advances have produced interesting and impressive results, there is often little to distinguish them from one another.

      Early research in artificial intelligence focused on learning machines that used a knowledge base to change their behavior. In 1970, Marvin Minsky published a concept paper on LISP machines. In 1973, Robin Milner introduced a related language called ML, which, unlike LISP, was statically typed.

      In the decades that followed, researchers were able to refine the concepts of natural language processing and knowledge representation. This advance has led to the development of the ubiquitous natural language processing and machine translation technologies in use today.

      In 1978, Andrew Ng and Andrew Hsey wrote an influential review article in the journal Nature surveying over 2,000 papers on AI and robotic systems. The paper covered many aspects of the area, such as modeling, reinforcement learning, decision trees, and social media.

      Since then, it has become increasingly difficult to attract researchers to natural language processing, as new advances in robotics and digital sensing surpassed the state of the art in the field.

      In the early 2000s, much attention turned to machine learning. Learning algorithms are mathematical procedures that improve their behavior by generalizing from observed examples rather than by following fixed, hand-written rules.
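A minimal sketch of "learning by observation" is the classic perceptron update rule: the program compares its prediction with each observed example and nudges its weights whenever it is wrong. The toy data below (the logical AND function) is purely illustrative:

```python
# A perceptron learning from observations: the classic "learn by example"
# loop. Toy task: learn the logical AND function from four examples.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted after each observed error
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # repeatedly observe the examples
    for x, target in examples:
        error = target - predict(x)  # compare prediction with observation
        w[0] += lr * error * x[0]    # nudge weights toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

Note that no rule for AND was ever written down; the correct behavior emerges entirely from the repeated observe-and-correct loop, which is the core contrast with the rule-based systems of earlier decades.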

      In the 1960s, Bendixon and Ruelle began to apply the concepts of learning machines to education and beyond. Their innovations inspired researchers to explore the area further, and many research papers followed in the 1990s.

      Sumit Chintal’s 2002 article, Learning with Fake Data, discusses a feedback system in which artificial intelligence learns by experimenting with the data it receives as input.

      In 2006, Judofsky, Stein, and Tucker published an article on deep learning that proposed a scalable deep neural network architecture.

      In 2007, Rohit described «hyperparameters». The term «hyperparameter» refers to a setting of a learning system that is chosen before training rather than learned from the data. While it is possible to design systems with tens, hundreds, or thousands of hyperparameters, their number must be carefully controlled, because overloading a system with too many hyperparameters can degrade performance.
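The distinction can be made concrete with a small sketch: below, the learning rate is a hyperparameter (chosen by the designer), while the weight `w` is an ordinary parameter (learned from the data). The one-observation fitting task is a hypothetical example, not drawn from any of the works cited above:

```python
# Hyperparameter vs. parameter: the learning rate is set before training;
# the weight w is what training itself adjusts. Toy task: fit w in
# y = w * x from the single observation (x = 1, y = 2) by gradient descent.

def train(learning_rate, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w * 1.0 - 2.0) * 1.0   # d/dw of (w*x - y)^2 at x=1, y=2
        w -= learning_rate * grad           # gradient descent step
    return w

good = train(0.1)   # converges close to the true value 2.0
bad = train(1.1)    # too large: every step overshoots and training diverges
print(round(good, 3), abs(bad) > 1000)
```

The second call shows the degradation the text describes: a single badly chosen hyperparameter is enough to make an otherwise correct learning procedure useless, which is why hyperparameter tuning became a research topic in its own right.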

      Google co-founders Larry Page and Sergey Brin published an article on the future of robotics in 2006. This document includes a section on developing intelligent systems using deep neural networks. Page also noted that this area would not be practical without a wide range of underlying technologies.

      In 2008, Max Jaderberg and Shai Halevi published «Deep Speech», presenting a technology that allowed a system to identify the phonemes of spoken language. Given four input sentences, the system was able to output sentences that were almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak, and it had a great impact on research in natural language processing.

      In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The work was widely cited because it introduced the field of human-centered AI research.

      Around