The results are not due to the algorithms or neural nets, but to the models. The article claims the models themselves are wrong. I disagree, but this is one reason why I left the company I used to work for.
The robot has learned toxic stereotypes through these flawed neural network models
They aren't flawed. The left just doesn't like them. Truth hurts.
We're at risk of creating a generation of racist and sexist robots
Racism is normal. If you want to suppress racism, go back to homogeneous societies.
When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything
Wrong. It used the information available and matched it against learned models of a typical criminal. Again, the left's desire is to not make the assumption, but that goes against reality.
To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.
Translation: we need to artificially modify the models to essentially lie to the neural nets.