I must admit that this will be hard to do. Sure, I can code anything to appear to respond to and interact with questions, topics, and so on. Granted, logical and pragmatic decision making is based on the facts and information people have at a given point in time, but being human isn't only a matter of algorithms and pre-scripted data; it also includes spontaneity and, at times, emotional thinking. Robots without the ability to be spontaneous and to think emotionally will not be human, and they will lack the connection that humans need.
Some people worry that someday a robot, or a collective of robots, will turn on humans, physically harming us or plotting against us.
The question, they say, is how robots can be taught morality.
There’s no user manual for good behavior. Or is there?