Artificial Intelligence: Will Computers Ever Think Like Us? [Bonus Infographic]
Artificial intelligence occupies an important place in our everyday lives, as well as in our thoughts about the future. Smart vacuum cleaners, robots working in factories, and chess apps on our phones all rely on some form of AI developed to perform a specific task. Since the inception of the field, great minds have debated whether an AI could ever match or surpass human intelligence.
Here are three topics that anyone curious about AI should know about. To learn more, check out the infographic below, developed by the TechJury team.
Not All AI Is Equal
There is a difference between strong and weak AI. Weak AI is designed to serve one specific purpose, like playing checkers or acting as a virtual personal assistant, as Amazon's Alexa does. Most AI in use today is weak AI.
Strong AI, on the other hand, can simulate human cognitive abilities to a certain extent across a range of problems. When presented with a new problem, it can find a solution by combining previous experience with its pre-installed knowledge. The greatest breakthroughs in the AI industry so far have come from work on weak AI, since it is easier to build and sell.
Human-Like AI Is the Goal
The ultimate goal of AI development is to build one complex program that can match or even surpass human intelligence in many different areas. As you can see in the infographic below (originally published at AI Statistics About Smarter Machines), a lot has happened in a span of only a few decades. The Turing test was devised in 1950 as a standard for measuring artificial intelligence. If a computer can convince someone that they are communicating with a real person, it passes the test. A chatbot called Eugene Goostman was reported to have passed the test in 2014.
However, further requirements have since been placed on AI. To be compared with humans, computer intelligence needs to be as wide-ranging as human intelligence. One proposed benchmark, earning a university degree by passing all the requisite exams, would be a real test of human-like AI.
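The Turing test described above is essentially a protocol: a judge exchanges messages with two hidden respondents, one human and one machine, and must guess which is which. The toy sketch below illustrates that setup; the respondent functions, canned replies, and judge strategy are all hypothetical stand-ins, not any real chatbot's logic.

```python
import random

# Hypothetical respondents for illustration only. In a real test the
# judge would hold a free-form conversation; here both sides return a
# canned reply, so the judge has nothing to distinguish them by.
def human_respondent(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_respondent(question: str) -> str:
    return "I'd have to think about that for a moment."

def run_imitation_game(judge_guess) -> bool:
    """Run one round; return True if the machine fools the judge."""
    respondents = {"A": human_respondent, "B": machine_respondent}
    # Randomly swap the labels so the judge cannot rely on position.
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}
    # The judge sees only labeled transcripts, never the functions.
    transcript = {label: fn("What is it like to hear music?")
                  for label, fn in respondents.items()}
    guess = judge_guess(transcript)  # judge names the suspected machine
    actual = next(label for label, fn in respondents.items()
                  if fn is machine_respondent)
    return guess != actual

# A judge guessing at random is fooled about half the time; the 2014
# Eugene Goostman event used a threshold of fooling 30% of judges.
fooled = run_imitation_game(lambda t: random.choice(list(t)))
print("machine passed this round:", fooled)
```

Since both respondents answer identically here, the judge can only guess, which is exactly the situation a passing machine aims to create.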
The Question of AI’s Consciousness Is a Real Mind Twister
It seems that consciousness is a uniquely human thing. We don't just behave in an intelligent manner; we also experience the world we interact with. The redness of an apple, the smell of a rose, and the sound of a violin are all genuinely human experiences that cannot be described to someone who has never had them.
AI can become more proficient than humans in many areas, like mathematics, chess, or driving, but we would still hesitate to say that it can hear music, for example. Even if an AI program says it is conscious, we would interpret that as just a figure of speech, still not believing that there's anything going on in its "head."
The real problem arises when we try to explain what exactly makes us conscious while AI isn't: we can't pinpoint anything concrete. Our experiences are available only to us and can't be shared with an AI. Perhaps AI will develop its own unique kind of experience once it becomes complex enough.