After Elon Musk joined a group of notable scientists in warning of the potential future dangers of increasingly capable AI, Bill Gates added his own call for caution in a Reddit interview when asked about the existential risks of the technology. His comments presented a diametrically opposite opinion from that of one of Microsoft's chief researchers. Eric Horvitz, when asked a similar question, said he was confident that controls would be developed as the technology advanced, and that he was fundamentally unconvinced a significant future threat was looming. He may have been one of the people Gates was referring to when he said he was surprised that some people did not see the danger as clearly as he did.
Gates did say that his concerns were not for the short term. He feels, however, that as AI programs continue to grow and learn at an ever-faster pace, the situation could evolve within decades to the point where these superintelligences can no longer be controlled by human beings. Musk, along with others who argue that current safeguards and oversight of AI research and development are insufficient, called for increased monitoring and controls to be established now, while they might still be effective enough to prevent future problems. This group believes that things are already moving quickly and that there is a genuine need to step in at this stage. Gates said that he was in agreement with Musk and the other scientists.
For those who use today's limited AI technology, like Siri on their iPhones or similar electronic assistants, the limitations make it difficult to imagine these systems ever taking over the world as in the doomsday scenarios of films like The Matrix and Terminator. It may not be as large a leap as it appears, however. AI is developing at a rapid pace, and the world is already seeing technology that is light-years ahead of what people knew only a decade ago.
There are programs now performing research in the medical field. In one case, AI is even helping to provide diagnoses, or at least assisting doctors with that task. Where in the past people only had to worry about being unable to best a computer at chess or at answering Jeopardy questions, AI has now learned to win against all comers at games like poker. That might not help it take over the world, but the development represents a major jump forward in capability. It is the extrapolation of that jump that has Gates and Musk concerned. They may well be anticipating a worst-case scenario, but the possibility remains, and it has become something many are beginning to take seriously.
At this point, neither Gates nor Musk is suggesting that it is appropriate to become consumed with anxiety. They are, though, facing a market with ever-increasing demands, and AI offers the potential to meet many of those demands in a short amount of time. In that climate, the warnings and proposed limitations are seen by some as alarmist. In the opinion of yet another visionary, Stephen Hawking, however, there may be cause for long-term concern. He said that the short term would determine what controls were necessary, but that the long term would determine whether control was even possible.
While Gates and Musk's calls for caution are prompting many people to consider the question, there is little pressure from the business sector to curtail development. The market drives even scientific pursuits to an extent, so it is questionable whether those concerns will be taken seriously and acted upon. Amateur scientists everywhere, however, are starting to wonder whether it might be wise to listen when men respected for their insight and vision are trying to tell people something.
By Jim Malone