Channel: Steven Scherling – Dialectic Analytical Man

I, Robot


Some interesting news from MSNBC's Morning Joe show on 7/18/17: Elon Musk issued another warning against runaway artificial intelligence, stating that "AI is an existential risk to mankind." Even more than Trump? Here is Musk's warning …

And here is Musk's full interview: "Musk on Regulating Existential Threat of AI Robots."

Wikipedia's entry on "Existential risk from AI" puts it this way: "The argument for the existence of the threat is that the human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes 'superintelligent', then this new superintelligence could become powerful and difficult to control. By way of example, just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence."

I then remembered "the good ol' days – when people killed people" from the movie I, Robot, where I think Musk's AI programming challenge is described. The year is 2035, only 18 years hence, when robot technology has created "trusted beings" in our homes, schools, offices, and, we can hope, in government. However, something goes wrong when a renowned robot designer, Dr. Lanning, is found murdered. Robo-phobic police detective Del Spooner, robo-psychologist Dr. Calvin, and Sonny, a prototype robot with human emotions, team up to stop Musk's existential threat to mankind – a real terminator.

In this interrogation scene from I, Robot, before detective Spooner enters the room to interrogate Sonny, he winks at another detective, which Sonny immediately registers and then asks Spooner what the wink means. Spooner says it is a sign of trust that robots would not understand. Sonny responds that his father, the murdered scientist Dr. Lanning, tried to teach him human emotions – they are very difficult, Sonny says. Something like passing health care legislation – you think? Sonny says he was hiding at the crime scene because he felt frightened; Spooner says robots do not feel fear, they do not feel anything. Sonny says, "I do. I even have dreams." "No," Spooner says, "you do not dream. Human beings dream, even dogs dream, but not robots." Then Spooner tries to put Sonny in his place by challenging him: a robot cannot write a symphony. Sonny asks, "Can you write a symphony?" Of course Spooner cannot, and he begins to realize he is being challenged by a new level of robotic intelligence.

Spooner still probes the case with his assumption that Sonny is simulating human emotions and killed his father and designer, Dr. Lanning. This pushes Sonny to slam his fists onto the metal table they are sitting at, denting it three inches. Spooner recovers from the outburst and says, "That emotion is called anger." Sonny insists that he did not kill Dr. Lanning and wonders, self-reflectively, whether it was something he did that caused Lanning's suicide. Then Sonny reveals that Lanning was troubled about something and had asked Sonny to do something for him. Spooner, now very curious, sits forward to learn more as Sonny asks, "When you love someone, you have to help them, don't you?"

So here, it seems, is a project for Elon Musk, son Aaron, and other programmers: how to program anger and love into a robot? I think we begin with the origins of human anger and then with how love evolves next. We need to understand that this evolution is occurring now. What makes us angry, and how does anger evolve into love? A nice day project for the seventy-somethings – before the Terminator arrives. It seems the Terminator evolved; how so? Science fiction leads reality in many ways – the challenge is tracking this.

Singularity University Summit
