Bill Gates made some comments in a Reddit AMA session about where he thinks technology is headed in the future. In the session, the Microsoft co-founder revealed his thoughts on the possibility of artificial intelligence becoming “super intelligent,” and the potential dangers that could present to humanity.
Gates confirmed that he shares the opinion of SpaceX and Tesla CEO Elon Musk, who recently made a bold (and rather worrying) prediction that we could be facing a very real problem with AI in the near future.
Gates, who said that he doesn’t understand “why some people are not concerned,” admitted he was worried about super intelligence, saying: “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”
This echoes the thoughts of Musk, who said: “The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.
“I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”
Another expert who threw his hat into the debate was renowned theoretical physicist Stephen Hawking, who previously said: “The development of full artificial intelligence could spell the end of the human race.”
He continued: “[Artificial intelligence] would take off on its own, and re-design itself at an ever increasing rate.
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
However, Microsoft’s own Eric Horvitz, who has been awarded the prestigious AAAI Feigenbaum Prize for his contributions to AI research, stated that he does not believe this is something we should be worried about. “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Horvitz previously said, adding: “[but] I fundamentally don’t think that’s going to happen.”
“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”