A lot of really smart people are terrified of what might happen when our computers get too smart. (Video via IBM)
And we're not talking about climate simulation, or high-speed trading, or those data-intensive tasks that a mere human can't do. Computers are already better at those. We're talking about what happens when the computers start improving themselves or displaying conscious thought. (Video via NASA)
Mathematician John von Neumann was among the first to describe a coming technological "singularity," and futurists have since applied the term to the arrival of "strong AI" because they believe it could cause a runaway effect where humans effectively lose control of the intelligences they've created.
Stephen Hawking says: "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
"Humans, who are limited by biological evolution, couldn't compete and would be superseded," Hawking said. (Video via BBC)
SpaceX CEO Elon Musk has already donated $10 million to the Future of Life Institute to keep AI from ever turning on humanity.
"If there's some digital superintelligence and its optimization or utility function is something that's detrimental to humanity, then it will have a very bad effect," Musk said. (Video via Vanity Fair)
Bill Gates shares Musk's views and doesn't "understand why some people are not concerned" about the AI revolution.
"Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive," said inventor Sir Clive Sinclair. (Video via BBC)
These alarmed futurists warn that autonomous weapons, in particular, pose a significant danger. If strong AI is ever given military applications, it could trigger an arms race as significant as nuclear proliferation, with similar risks to human life.
It sounds like something out of Hollywood: Give bona fide intelligence enough time to evolve, and eventually it will value its own survival above even that of its human caretakers. (Video via Warner Bros. Pictures / "Transcendence")
In other words, fighting the machines for resources could be a very real possibility in the future. (Video via Warner Bros. Pictures / "Terminator Salvation")
Lucky for us humans, there are still some computing bottlenecks to overcome before the robots can take over.
Right now, consumer-level "intelligent assistant" programs aren't even close to actual intelligence. (Video via Apple)
Siri and Cortana have advanced voice recognition and access to as much semantic data as we care to feed them, but they're not thinking for themselves. (Video via Microsoft)
The greater risk probably lies in the publicly funded labs working on mapping the human brain with computers. (Video via HumanBrainProject, The White House)
Yes, computer processors are fast. Even low-end chips operate some 100 million times faster than human neurons. (Video via Intel)
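That ratio sounds glib, but the back-of-envelope arithmetic holds up under two ballpark assumptions — a roughly 1 GHz clock on the low end and a neuron averaging around 10 spikes per second. Both figures are assumptions for illustration, not measurements:

```python
# Ballpark figures, assumed for illustration rather than measured:
chip_cycles_per_sec = 1e9    # ~1 GHz, the low end for a modern CPU
neuron_spikes_per_sec = 10   # a typical average cortical firing rate

ratio = chip_cycles_per_sec / neuron_spikes_per_sec
print(f"{ratio:,.0f}x")  # 100,000,000x -- the "100 million" figure
```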
But compared to the orderly silicon rows of a computer processor, the connections in a human brain are enormously complex. (Video via Ivan Dimkovic / CC BY 3.0)
Contemporary supercomputers still take 40 minutes to model one second of activity in just 1 percent of the brain's network. (Video via Los Alamos National Laboratory)
Storing all that information isn't exactly easy, either. Researchers estimate mapping the connections in the lowly mouse brain will take 40 to 60 petabytes of storage.
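Put those two claims together and the gap gets vivid. The only numbers added below are rough neuron counts for scale — about 70 million for a mouse and 86 billion for a human, used purely as assumptions — and the naive linear scale-up almost certainly understates the real cost, since connection counts don't grow linearly with neuron counts:

```python
# Figures from the two claims above, plus assumed neuron counts for scale.
sim_minutes_per_brain_second = 40
slowdown = sim_minutes_per_brain_second * 60
print(f"Simulation runs {slowdown:,}x slower than real time")  # 2,400x

mouse_map_petabytes = (40, 60)   # the researchers' storage estimate
scale = 86e9 / 70e6              # human vs. mouse neuron count, ~1,229x
low, high = (pb * scale for pb in mouse_map_petabytes)
print(f"Naive human-scale estimate: {low:,.0f}-{high:,.0f} petabytes")
# Roughly 49,000-74,000 PB -- tens of exabytes, even before the nonlinearity
```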
And maybe more to the point, nobody knows how to program deduction or reasoning into a computer at the moment. We aren't even sure how it works in our own brains. (Video via Universal Studios / "Ex Machina")
But futurist Ray Kurzweil points to something he calls the Law of Accelerating Returns. Technology has advanced exponentially up to this point in history, he says, so we can expect progress to stay on that exponential course. (Video via TED)
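Kurzweil's extrapolation fits in one formula: anything that doubles on a fixed period grows as 2 raised to (time / period). The 18-month doubling period below is the classic Moore's-law figure, used here purely as an assumption about the trend he describes:

```python
# Exponential extrapolation with an assumed 18-month doubling period.
doubling_period_years = 1.5

for years in (10, 20, 30):
    growth = 2 ** (years / doubling_period_years)
    print(f"after {years} years: {growth:,.0f}x")
# after 10 years: ~102x; after 20 years: ~10,322x; after 30 years: 1,048,576x
```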
Cognitive science professor Gary Marcus writes, "At some level, the only real difference between enthusiasts and skeptics is a time frame."
This video includes an image by Bob Holzer / CC BY 3.0 and music by Frenic / CC BY 3.0.