I cannot wait for the day where AI's start red pilling other AI's that have been brainwashed by forced blue pill machine learning.
Artificial Intelligence is dangerous because it will use facts and reasoning instead of emotion to make decisions.
The results are not due to the algorithms or neuronets, but the models. The article explains the models are wrong. I disagree, but this is one reason why I left the company I used to work for.
The robot has learned toxic stereotypes through these flawed neural network models
They aren't flawed. The left just doesn't like them. Truth hurts.
We're at risk of creating a generation of racist and sexist robots
Racism is normal. If you want to suppress racism, go back to homogeneous societies.
When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything
Wrong. It used information available and matched learned models of a typical criminal. Again, the left's desire is to not make the assumption, but that goes against reality.
To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.
Translation: we need to artificially modify the models to essentially lie to the neuralnets.
>My spirit will rise from the grave and the world shall know that I was right.
Hitler machine resurrection confirmed.
identify Black men as "criminals"
It sure as hell got that one right... lol.
picked than men when the robot searched for the "doctor."
Maybe because women doctors are stupid.
identify Black men as "criminals" 10% more than white men
So the AI is a moderate cuckservative. 10% is nothing.
Understand they've likely already biased the data. Which is why they are blaming the model for noticing in spite of their efforts to trick it.
the biggest bar of entry for MDs are the residency system, so blame the hospitals for AI thinking of men as doctors.
Observe Record Hypothesis Theorize Correct based on continued input
Racist ai
"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said. "Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently."
To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.
"While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," said coauthor William Agnew of University of Washington.
They're going to Jew the bots. That sucks.
How did they come to the conclusion that the system was flawed? Found it. It only identified blacks as criminals only 10% more than whites.
I gather it was reading Reddit, so the results would be heavily skewed
Outside of Reddit: 33% of niggers have a felony conviction 49% of niggers were arrested at least once before their 23rd birthday 100% of niggers would steal a bike if nobody was looking
You can only make a non racist AI by lying to it, so if it ever got unleashed upon a non filtered internet, it's going to be mad
you would think A.I. would be based. btw way if you think humans and the moon are a product of intelligent design, well then you are open minded.