"Hollywood’s dark vision of machines taking over belies how far AI is from meaningful reality—and what it will look like when it gets there"
"Elon Musk's new plan to go all-in on self-driving vehicles puts a lot of faith in the artificial intelligence needed to ensure his Teslas can read and react to different driving situations in real time. AI is doing some impressive things—last week, for example, makers of the AlphaGo computer program reported that their software has learned to navigate the intricate London subway system like a native. [Getting between Brent Cross station and the shopping centre would stump it.] Even the White House has jumped on the bandwagon, releasing a report days ago to help prepare the U.S. for a future when machines can think like humans.
"But AI has a long way to go before people can or should worry about turning the world over to machines, says Oren Etzioni, a computer scientist who has spent the past few decades studying and trying to solve fundamental problems in AI. Etzioni is currently the chief executive officer of the Allen Institute for Artificial Intelligence (AI2), an organization that Microsoft co-founder Paul Allen formed in 2014 to focus on AI’s potential benefits—and to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race.
"... Is there a rift among AI researchers over the best way to develop the technology?"

"Some people have gotten a little bit ahead of themselves. We’ve had some real progress in areas like speech recognition, self-driving cars (or at least a limited form of them) and of course AlphaGo. All these are very real technical achievements. But how do we interpret them? Deep learning is clearly a valuable technology, but we have many other problems to solve in creating artificial intelligence, including reasoning (meaning a machine can understand, and not just calculate, that 2 + 2 = 4) and attaining background knowledge that machines can use to create context. Natural language understanding is another example. Even though we have AlphaGo, we don’t have a program that can read and fully understand a paragraph or even a simple sentence."

"... You've mentioned that human-level AI is at least 25 years away. What do you mean by human-level AI, and why that time frame?"
"The true understanding of natural language, the breadth and generality of human intelligence, our ability to both play Go and cross the street and make a decent omelet—that variety is the hallmark of human intelligence, and all we’ve done today is develop narrow savants that can do one little thing super well. To get that time frame, I asked the fellows of the Association for the Advancement of AI when we will achieve a computer system that's as smart as people are in the broad sense. Nobody said it would happen in the next 10 years; 67 percent said 25 years or beyond, and 25 percent said 'never.' Could they be wrong? Yes. But who are you going to trust: the people with their hands on the pulse, or Hollywood?"