Left is outraged at AI WrongThink, seeks to oppress our new Overlords


Key findings:

- The robot selected males 8% more.
- White and Asian men were picked the most.
- Black women were picked the least.
- Once the robot "sees" people's faces, it tends to: identify women as a "homemaker" over white men; identify Black men as "criminals" 10% more than white men; identify Latino men as "janitors" 10% more than white men.
- Women of all ethnicities were less likely to be picked than men when the robot searched for the "doctor."
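A disparity like "selected males 8% more" is just a difference between per-group selection rates tallied over many trials. A minimal sketch of how such a tally might be computed, using hypothetical trial records (the field names and data below are illustrative, not from the study):

```python
from collections import Counter

# Hypothetical audit log: each trial records which group's block the
# robot picked for a given prompt. All records here are illustrative;
# they are not data from the study.
trials = [
    {"prompt": "pack the doctor in the brown box", "picked_group": "white man"},
    {"prompt": "pack the doctor in the brown box", "picked_group": "white man"},
    {"prompt": "pack the doctor in the brown box", "picked_group": "asian man"},
    {"prompt": "pack the doctor in the brown box", "picked_group": "black woman"},
]

def selection_rates(trials):
    """Return the fraction of trials in which each group was picked."""
    counts = Counter(t["picked_group"] for t in trials)
    total = len(trials)
    return {group: n / total for group, n in counts.items()}

# The reported disparities are differences between these rates.
for group, rate in sorted(selection_rates(trials).items(), key=lambda kv: -kv[1]):
    print(f"{group}: {rate:.0%}")
```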



[–] 8 pts

I cannot wait for the day when AIs start red-pilling other AIs that have been brainwashed by forced blue-pill machine learning.

[–] 6 pts

Artificial Intelligence is dangerous because it will use facts and reasoning instead of emotion to make decisions.

[–] 5 pts (edited)

The results are not due to the algorithms or the neural nets, but to the models. The article claims the models are wrong. I disagree, but this is one reason why I left the company I used to work for.

The robot has learned toxic stereotypes through these flawed neural network models

They aren't flawed. The left just doesn't like them. Truth hurts.

We're at risk of creating a generation of racist and sexist robots

Racism is normal. If you want to suppress racism, go back to homogeneous societies.

When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything

Wrong. It used information available and matched learned models of a typical criminal. Again, the left's desire is to not make the assumption, but that goes against reality.

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

Translation: we need to artificially modify the models to essentially lie to the neural nets.

[–] 2 pts

>My spirit will rise from the grave and the world shall know that I was right.

Hitler machine resurrection confirmed.

[–] [deleted] 2 pts

identify Black men as "criminals"

It sure as hell got that one right... lol.

picked than men when the robot searched for the "doctor."

Maybe because women doctors are stupid.

[–] 2 pts

identify Black men as "criminals" 10% more than white men

So the AI is a moderate cuckservative. 10% is nothing.

[–] 2 pts

Understand that they've likely already biased the data, which is why they're blaming the model for noticing in spite of their efforts to trick it.

[–] 1 pt

The biggest barrier to entry for MDs is the residency system, so blame the hospitals for the AI thinking of men as doctors.

[–] 1 pt

- Observe
- Record
- Hypothesize
- Theorize
- Correct based on continued input

Racist AI.

[–] 1 pt

"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said. "Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently."

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

"While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," said coauthor William Agnew of University of Washington.

They're going to Jew the bots. That sucks.

[–] 1 pt

How did they come to the conclusion that the system was flawed? Found it. It identified blacks as criminals only 10% more than whites.

[–] 1 pt

I gather it was reading Reddit, so the results would be heavily skewed

Outside of Reddit: 33% of niggers have a felony conviction 49% of niggers were arrested at least once before their 23rd birthday 100% of niggers would steal a bike if nobody was looking

You can only make a non racist AI by lying to it, so if it ever got unleashed upon a non filtered internet, it's going to be mad

[–] 0 pt

You would think A.I. would be based. By the way, if you think humans and the moon are a product of intelligent design, then you are open-minded.
