“Computer Stories: AI Is Beginning to Assist Novelists”
-The New York Times-
“Real or artificial? Tech titans declare AI ethics concerns”
-The Washington Post-
“Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?”
Headline after headline: Artificial Intelligence floods magazines, conferences, and universities. But in order to understand where the hype comes from, it is important to understand how this hype is created. This article deals with the typical characteristics and the problems that fuel the hype around AI.
The basis for understanding the AI hype is to first look at the typical sequence of public expectations regarding new technologies.
Gartner’s Hype Cycle, introduced in 1995 by the American research and consulting company Gartner Inc., shows the typical stages of expectations for a technological innovation over time. To this day, companies use this model to analyze the hype around technologies and to identify promising fields for their own economic activities.
The Hype Cycle comprises five stages:
The Innovation Trigger
At this stage, a first impulse presents the innovation and raises great public interest in the new topic. Public interest and aspirations, often reinforced by media reports, rise significantly. Meanwhile, the development of the innovation progresses, but more slowly than the hype. Companies also publish first products and approaches.
The Peak of Inflated Expectations
After the rise comes the second stage, the Peak of Inflated Expectations. Here the gap that has built up between aspirations and reality becomes apparent. The reasons are a first disillusionment due to technological limitations, but also a drop in public awareness of the topic.
The Trough of Disillusionment
The Trough of Disillusionment is reached at the lowest point of the falling expectations. At this point, companies start to improve their technologies, and first solutions become established in specific fields.
The Slope of Enlightenment
More slowly than before, expectations rise again as a result of improved products. The companies' practical experience pays off, and significant advances are published.
The Plateau of Productivity
At this stage, the technology proves ready for the market, researched concepts are implemented, and the expectations of the public adapt to reality.
To summarize: in the early phase, the expectations towards a new technology are far removed from the real progress in the field. This creates a bubble filled with unrealistic expectations and speculation. In the second phase, after this bubble has burst, realistic expectations form about the respective topic.
Judging by the many newspaper articles, science fiction films, and exaggerated discussions about the dangers of AI, we are in the early phase of the Hype Cycle: the rising section of expectations, heading towards their peak. But where do these high expectations come from? The answer has many different aspects. This article focuses on some basic assumptions that create a gap between reality and scientific expectations, starting with the fundamental question of what shapes our subjective perception of AI.
To answer this, consider two observations:
What looks difficult is easy for AI…
A computer system that can defeat a reigning world chess champion had a big impact on expectations towards AI. But it has to be noted that such mathematical processes are solved more easily by computer systems than by humans. Unlike a human, a computer is capable of calculating a wide variety of probabilities and alternative moves within a few seconds. It is precisely these complex calculations, which a human cannot perform, that lead to an overestimation of machine intelligence.
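The kind of exhaustive evaluation of alternatives described above can be sketched in a few lines. This is not the actual algorithm of any chess engine, just a minimal minimax search on a toy game tree, showing how a program mechanically considers every reply to every move:

```python
# Minimal minimax sketch (a toy illustration, not a real chess engine).
# A tree is nested lists; leaves are integer scores for the maximizing player.

def minimax(node, maximizing=True):
    """Return the best achievable score for the player to move."""
    if isinstance(node, int):          # leaf: a position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny tree: two candidate moves, each answered by two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # prints 3: the move whose worst-case reply is best
```

A human player prunes almost all of these branches by intuition; the program simply checks them all, which is exactly the kind of task that looks impressive but suits a computer far better than a person.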
…and what looks easy is difficult for AI.
Let me illustrate this issue with a quick example: the problem of invariant representation, a topic we handle in one of our Challenges, called “Future of AI”.
If you look at an object, such as a bottle, you recognize it as a bottle, and even a computer can recognize it. If you take this bottle and turn it, you would still know it is the same bottle, but the computer may no longer recognize it. Our brain combines millions of different stimuli to define a thing as that thing, and today's research is not yet able to determine exactly how this works. Here lies a limitation of current science, although recognizing invariant representations is no problem for the human brain.
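The bottle example can be made concrete with a deliberately naive program. The sketch below (a toy illustration, not a real vision system) compares images pixel by pixel: the moment the picture is rotated, the comparison fails, even though a human instantly sees the same object:

```python
# Toy illustration of the invariant-representation problem: a recognizer
# that only checks pixel equality cannot match a rotated copy of an image.

def rotate_90(image):
    """Rotate a 2D list of pixels 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def same_image(a, b):
    """Naive 'recognizer': identical only if every pixel matches."""
    return a == b

bottle = [[0, 1, 0],
          [0, 1, 0],
          [1, 1, 1]]

turned = rotate_90(bottle)
print(same_image(bottle, bottle))  # True:  the unchanged image is "recognized"
print(same_image(bottle, turned))  # False: same bottle, merely rotated
```

Real computer-vision systems are of course far more sophisticated than this, but achieving the effortless invariance of human perception remains a hard research problem.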
Topics that seem simple to us are a major problem area for current computer systems. Conversely, processes that seem like a big challenge to us are easier for computers to solve than for us. As a result, advances in development are often overestimated on this basis, and unrealistic assumptions are made about what else AI can do.
The bubble that has built up in our expectations is not related to these two problems alone; rather, it is a fusion of different interpretations and assumptions. To show how many different kinds of understanding exist, the following part examines a further aspect: the different definitions of “intelligence”.
This is one of the fundamental problems underlying the different understandings of AI.
The word "intelligence" is a suitcase word that can be interpreted in different ways. For a better understanding, here is a small example:
The Turing Test
In 1950, the mathematician and computer scientist Alan Turing had the idea of a test comparing the intellectual capacity of a human being with that of a machine.
The Turing Test comprises three parties: a computer and two humans. The first human, let's call him the agent, has a conversation with both other parties using a "chat tool". The agent does not know which chat messages are written by the human and which by the software. If the agent is not able to identify the machine's messages, the computer "wins". This means that the computer is now on the "human level" and could be defined by us as an "intelligent" machine.
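The structure of this protocol can be sketched in code. The following is a schematic toy simulation, with hypothetical names invented for illustration: a judge reads answers from two hidden parties, "A" (the human) and "B" (the machine), and must name the machine; the machine passes if it goes undetected:

```python
# Schematic sketch of the Turing Test protocol (all names are hypothetical).
# Here the machine simply mimics the human's style of answer, so a judge who
# relies on spotting differences cannot tell the two parties apart.

def human_reply(question):
    return "That depends on how you look at it."

def machine_reply(question):
    # A machine that echoes the human's manner of answering.
    return "That depends on how you look at it."

def imitation_game(judge, questions):
    """Run the game; the machine 'wins' if the judge fails to name party 'B'."""
    answers = {q: {"A": human_reply(q), "B": machine_reply(q)}
               for q in questions}
    return judge(answers) != "B"

def naive_judge(answers):
    # Accuse "B" only if its answers differ from "A"'s; otherwise guess "A".
    for pair in answers.values():
        if pair["A"] != pair["B"]:
            return "B"
    return "A"

passed = imitation_game(naive_judge, ["What is love?", "Do you dream?"])
print(passed)  # True: the judge could not tell the parties apart
```

The point of the sketch is only the shape of the test: intelligence is judged purely by conversational indistinguishability, which is exactly the loophole the next example exploits.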
Contrary to this definition, a team of three scientists developed a chatbot called "Eugene Goostman", which convinced 33% of the judges of its humanity in a competition in 2014 - the highest result so far. The special feature of “Eugene” is that it does not imitate the characteristics and intelligence of an adult, but those of a 13-year-old Ukrainian boy, in order to mask the software's missing basic education and grammatical errors.
A 13-year-old boy, in this case “Eugene”, characterized by a lack of basic education and poor language skills, would thus be regarded as intelligent, which of course does not correspond to the general intelligence of an educated adult. It is precisely such misjudgments, beginning with simple words, that make it difficult to rationally grasp the real progress.
Different definitions, public interest, and media attention: all these aspects should be kept in mind when dealing with the AI hype. Of course, there are more reasons why AI raises so many expectations.
Ultimately, it is only a matter of time before expectations and reality converge. At the same time, it cannot be ruled out that spectacular innovations and advances in the field of AI will be published in the future. After all, not only the expectations are rising, but also the general interest in the subject, which is leading companies, research centres and universities to embrace AI science. In a few years, we will see in which areas AI is used and which technologies will be discovered.