Title: Machine Learning For Dummies
Author: John Paul Mueller
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119724056
Consider, for example, the android Erica (https://www.youtube.com/watch?v=oRlwvLubFxg), who is set to appear in a science fiction film. Her story appears on HuffPost at https://www.huffpost.com/entry/erica-japanese-robot-science-fiction-film_n_5ef6523dc5b6acab284181c3. The point is that technology is only now reaching the stage at which people may eventually be able to create lifelike robots and androids; they don't exist today.
Understanding the history of AI and machine learning
There is a reason, other than anthropomorphization, that humans see the ultimate AI as one that is contained within some type of android. Ever since the ancient Greeks, humans have discussed the possibility of placing a mind inside a mechanical body. One such myth is that of a mechanical man called Talos (http://www.ancient-wisdom.com/greekautomata.htm). The fact that the ancient Greeks had complex mechanical devices, only one of which still exists (read about the Antikythera mechanism at http://www.ancient-wisdom.com/antikythera.htm), makes it quite likely that their dreams were built on more than just fantasy. Throughout the centuries, people have discussed mechanical persons capable of thought (such as Rabbi Judah Loew's Golem, https://www.nytimes.com/2009/05/11/world/europe/11golem.html).
AI is built on the hypothesis that mechanizing thought is possible. During the first millennium, Greek, Indian, and Chinese philosophers all worked on ways to perform this task. As early as the seventeenth century, Gottfried Leibniz, Thomas Hobbes, and René Descartes discussed the potential for rationalizing all thought as simply math symbols. Of course, the complexity of the problem eluded them (and still eludes us today, despite the advances you read about in Part 3 of this book). The point is that the vision for AI has been around for an incredibly long time, but the implementation of AI is relatively new.
The true birth of AI as we know it today began with Alan Turing's publication of "Computing Machinery and Intelligence" in 1950 (https://www.csee.umbc.edu/courses/471/papers/turing.pdf). In this paper, Turing explored the idea of how to determine whether machines can think. Of course, this paper led to the Imitation Game involving three players. Player A is a computer and Player B is a human. Each must convince Player C (a human who can't see either Player A or Player B) that they are human. If Player C can't determine who is human and who isn't on a consistent basis, the computer wins.
A continuing problem with AI is too much optimism. The problem that scientists are trying to solve with AI is incredibly complex. However, the early optimism of the 1950s and 1960s led scientists to believe that the world would produce intelligent machines in as little as 20 years. After all, machines were already doing all sorts of amazing things, such as playing complex games. That prediction never materialized; instead, AI currently has its greatest success in narrower areas such as logistics, data mining, and medical diagnosis.
Exploring what machine learning can do for AI
Machine learning relies on algorithms to analyze huge datasets. Currently, machine learning can’t provide the sort of AI that the movies present. Even the best algorithms can’t think, feel, present any form of self-awareness, or exercise free will. What machine learning can do is perform predictive analytics far faster than any human can. As a result, machine learning can help humans work more efficiently. The current state of AI, then, is one of performing analysis, but humans must still consider the implications of that analysis — making the required moral and ethical decisions. The “Considering the Relationship between AI and Machine Learning” section of this chapter delves more deeply into precisely how machine learning contributes to AI as a whole. The essence of the matter is that machine learning provides just the learning part of AI, and that part is nowhere near ready to create an AI of the sort you see in films.
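To make "predictive analytics" concrete, here is a minimal sketch of what a machine learning algorithm actually does: it fits parameters to labeled examples and then predicts labels for data it has never seen. The dataset and model choice (scikit-learn's bundled iris data and logistic regression) are illustrative assumptions, not an example taken from this book.

```python
# A minimal predictive-analytics sketch: the algorithm "learns" a mapping
# from inputs to labels, then predicts labels for unseen inputs.
# Dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)            # measurements and known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)    # hold out data the model never sees

model = LogisticRegression(max_iter=1000)    # no awareness, just optimization
model.fit(X_train, y_train)                  # "learning" = fitting parameters

print(model.predict(X_test[:5]))             # predictions for new examples
print(model.score(X_test, y_test))           # accuracy on unseen data
```

Note that nothing in this loop involves awareness or free will; the model simply minimizes error on the examples it is given.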
The main point of confusion between learning and intelligence is that people assume that simply because a machine gets better at its job (learning), it's also aware (intelligence). Nothing supports this view of machine learning. The same phenomenon occurs when people assume that a computer is purposely causing problems for them. The computer has no emotions; it acts only on the input provided and on the instructions contained in an application that processes that input. A true AI will eventually occur when computers can finally emulate the clever combination used by nature:
Genetics: Slow learning from one generation to the next
Teaching: Fast learning from organized sources
Exploration: Spontaneous learning through media and interactions with others
Considering the goals of machine learning
At present, AI is based on machine learning, and machine learning is essentially different from statistics. Yes, machine learning has a statistical basis, but it makes some different assumptions than statistics do because the goals are different. Table 1-1 lists some features to consider when comparing AI and machine learning to statistics.
TABLE 1-1: Comparing Machine Learning to Statistics
Technique | Machine Learning | Statistics
---|---|---
Data handling | Works with big data in the form of networks and graphs; raw data from sensors or web text is split into training and test data. | Models are used to create predictive power on small samples.
Data input | The data is sampled, randomized, and transformed to maximize accuracy scoring in the prediction of out-of-sample (or completely new) examples. | Parameters interpret real-world phenomena and provide a stress on magnitude.
Result | Probability is taken into account for comparing what could be the best guess or decision. | The output captures the variability and uncertainty of parameters.
Assumptions | The scientist learns from the data. | The scientist assumes a certain output and tries to prove it.
Distribution | The distribution is unknown or ignored before learning from data. | The scientist assumes a well-defined distribution.
Fitting | The scientist creates a best fit, but generalizable, model. | The result is fit to the present data distribution.
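The "Data input" and "Fitting" rows of Table 1-1 are easier to see in code. The following sketch contrasts the two mindsets on the same data; the library choices (statsmodels for the statistical fit, scikit-learn for the machine learning evaluation) and the simulated data are assumptions made purely for illustration.

```python
# Contrast sketch: statistical fitting vs. machine learning evaluation.
# Library and data choices are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

# Statistics: fit the present data and interpret parameter magnitude/uncertainty.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.params, ols.bse)          # coefficient estimates and standard errors

# Machine learning: hold out data and score accuracy on out-of-sample examples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(model.score(X_te, y_te))      # R^2 on data the model never saw
```

The statistical workflow asks what the fitted parameters say about the data at hand, while the machine learning workflow asks how well the fitted model predicts data it has never seen.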