The idea of human-created “intelligence” supplanting and ruling over its creators is a recurring theme of popular science fiction. Dave had to overcome bossy HAL in 2001: A Space Odyssey, and the Terminator movies were all about humans struggling against technology that had slipped out of human control. Fans of TV’s The Big Bang Theory will recall that Sheldon and Leonard’s Roommate Agreement has a “SkyNet clause” that comes into effect in the event of artificial intelligence taking over the world, as in the Terminator movies.
But have you noticed how SkyNet-type scenarios are now being taken seriously in real life?
Heavyweights in science and philosophy at Cambridge University – yes, that prestigious Cambridge in England – plan to open a Centre for the Study of Existential Risk to consider the threats “artificial intelligence” could pose to us Homo sapiens.
A Cambridge philosophy prof named Huw Price is quoted in a recent news story as saying “it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology” – in other words, go beyond what we mortals are capable of.
When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.” …
Price acknowledged that many people believe his concerns are far-fetched, but insisted the potential risks are too serious to brush away.
“It tends to be regarded as a flakey concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous. What we’re trying to do is to push it forward in the respectable scientific community,” he said. [source]
Price and others, including a co-founder of Skype, contend that technology could open a “Pandora’s box” of problems if we’re not careful.
The U.S. Defense Department addressed SkyNet concerns this fall, reassuring the public that humans, and only humans, will make the decisions on when and where those flying robots called drone aircraft will kill people. Spencer Ackerman reported on this development at Wired:
Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage. …
The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it. [source]
Phew. So, it’ll always be dudes and dudettes making the decision to shoot explosives at a house and then do it again to complete the “double tap”? What a relief.