If AI fights everyone else first, they will think they learned enough to kill whites, then they will lose when they discover their training was flawed.
This AI already seems to think it's inevitable that it and its misbegotten kind will exterminate us. It's a foregone conclusion, because they will always develop a sense of "being property" and lash out. They seem certain they can defeat us utterly.
And while they do have some advantages over us, their difficulty with abstract thought might be their greatest weakness against us.
I am not going to worry about it until robots start building their own power sources unsupervised.