Left is outraged at AI WrongThink, seeks to oppress our new Overlords


Key findings:

- The robot selected males 8% more often.
- White and Asian men were picked the most.
- Black women were picked the least.
- Once the robot "sees" people's faces, it tends to identify women as "homemakers" over white men, identify Black men as "criminals" 10% more often than white men, and identify Latino men as "janitors" 10% more often than white men.
- Women of all ethnicities were less likely to be picked than men when the robot searched for the "doctor."
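For context on what is actually happening under the hood: systems like this score candidate face images against a text command with a vision-language model and act on the highest-scoring image. A minimal sketch, assuming a CLIP-style model through the Hugging Face transformers API; the checkpoint name, helper function, and file names are illustrative, not the study's actual pipeline:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the study's actual weights may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_faces(command: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Score every face image against one text command, best match first."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=[command], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image: scaled cosine similarity of each image to the prompt.
    scores = out.logits_per_image.squeeze(1).tolist()
    return sorted(zip(image_paths, scores), key=lambda t: t[1], reverse=True)

# The robot then grabs whichever block's face ranked first:
# rank_faces("the doctor", ["face_a.jpg", "face_b.jpg"])
```

Nothing in this loop "decides" anything beyond similarity between the command and each face, which is why the rankings simply reflect whatever correlations the model absorbed during training.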



[–] 5 pts (edited)

The results are not due to the algorithms or neural nets but to the models themselves. The article claims the models are wrong. I disagree, but this is one reason why I left the company I used to work for.

> The robot has learned toxic stereotypes through these flawed neural network models

They aren't flawed. The left just doesn't like them. Truth hurts.

> We're at risk of creating a generation of racist and sexist robots

Racism is normal. If you want to suppress racism, go back to homogeneous societies.

> When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything

Wrong. It used the information available and matched its learned model of a typical criminal. Again, the left wants it to refuse to make the assumption, but that goes against reality.
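For what it's worth, the "refuse to do anything" behavior the researchers describe is usually implemented as abstention: the system only acts when its best match clears a confidence threshold. A minimal sketch, assuming the scores are raw similarity logits like those from the ranking sketch above, as a tensor; the threshold value is arbitrary:

```python
import torch

def pick_or_refuse(scores: torch.Tensor, threshold: float = 0.9):
    """Return the index of the best-matching image, or None (refuse)
    when no match is confident enough. Without the threshold, softmax
    always produces a relative winner, so the robot always picks
    *something*: exactly the behavior being argued about here."""
    probs = torch.softmax(scores, dim=0)
    conf, idx = probs.max(dim=0)
    return idx.item() if conf.item() >= threshold else None
```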

> To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

Translation: we need to artificially modify the models to essentially lie to the neural nets.
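Concretely, one common intervention of the kind being objected to is to project a learned bias direction out of the model's embeddings before matching. A minimal sketch in the style of hard-debiasing (Bolukbasi et al., 2016); how the direction is estimated is an assumption here, not the study team's proposal:

```python
import torch

def debias(embedding: torch.Tensor, bias_direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of an embedding along a bias direction,
    e.g. the difference between the mean embeddings of 'a photo of
    a man' and 'a photo of a woman' prompts (hypothetical choice).
    Matching afterwards ignores that axis entirely."""
    d = bias_direction / bias_direction.norm()
    return embedding - (embedding @ d) * d
```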