
Left is outraged at AI WrongThink, seeks to oppress our new Overlords


Key findings:

- The robot selected males 8% more often.
- White and Asian men were picked the most.
- Black women were picked the least.
- Once the robot "sees" people's faces, it tends to: identify women as a "homemaker" more often than white men; identify Black men as "criminals" 10% more often than white men; identify Latino men as "janitors" 10% more often than white men.
- Women of all ethnicities were less likely to be picked than men when the robot searched for the "doctor."

(post is archived)

[–] 1 pt

"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said. "Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently."

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

"While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," said coauthor William Agnew of University of Washington.

They're going to nerf the bots. That sucks.