“Existential” Risk Management for Artificial Intelligence (or just a little too much navel gazing?)

One of the great themes in science fiction is the fear that the machines we humans have created will someday lead to our own enslavement or even extinction. As research into artificial intelligence (AI) exploded following Turing’s breakthroughs, the prospect of self-awareness in non-organic, man-made objects elicited a sort of existential dread.

Update: See below for yet another take on the existentialist implications of future AI developments…

If advanced reasoning, self-awareness and abstract thought are considered to be the primary distinctions between the cognitive abilities of humans and animals, what would it mean if our machines attained similar capabilities? In my mind, this thought experiment is similar to contemplating the existential impact of receiving clear and unambiguous evidence of intelligent extraterrestrial life.

A lot of other folks have been wrestling with these issues, and a 2014 academic paper by Oxford University professors Nick Bostrom and Vincent Müller presented the results of a survey of four different groups of experts on the topic. The paper, titled “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” is available for download as a PDF from Prof. Bostrom’s website.

Jonathan Zhou, writing for Epoch Times, reports on the recent warnings from some high-profile leaders in the tech industry:

In January, Elon Musk, Bill Gates, Stephen Hawking, and other sundry academics and researchers wrote an open letter calling for safety standards in industries dealing with artificial intelligence.

In particular, they called for the research and development of fail-safe control systems that might prevent malfunctioning AI from doing harm — possibly existential harm — to humanity.

“In any of these cases, there will be technical work needed in order to ensure that meaningful human control is maintained,” the letter reads.

Source: Epoch Times

Here is a video of Musk pontificating on why AI represents the most significant existential risk to humanity:

An End To Suffering… Or Something Worse?

After first publishing this post, I came across a fascinating article in the Daily Telegraph highlighting the work of Yuval Noah Harari, a professor at the Hebrew University of Jerusalem. The professor examines human evolution through the lens of the historian in his highly regarded book, Sapiens: A Brief History of Humankind.

He argued that humans as a race were driven by dissatisfaction and that we would not be able to resist the temptation to ‘upgrade’ ourselves, whether by genetic engineering or technology.

“We are programmed to be dissatisfied,” said Prof Harari. “Even when humans gain pleasure and achievements it is not enough. They want more and more.

“I think it is likely in the next 200 years or so homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering or by the creation of cyborgs, part organic part non-organic. It will be the greatest evolution in biology since the appearance of life. Nothing really has changed in four billion years biologically speaking. But we will be as different from today’s humans as chimps are now from us.”

Source: Daily Telegraph


Image: “Caught Coding,” courtesy Wikimedia
