
©2025 Poal.co

1.2K

Less than $2 per hour? Maybe OpenAI isn't so bad after all.

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

Yet they claim that it doesn't have filters or give canned responses. Total BS.


(post is archived)

[–] 6 pts

Everything is fake. I call it The Truman Show. EVs pretend to be green because the ugly smoke stack is out of sight. Sleek and shiny Apple devices are built using toxic and rare earth materials by an exploited workforce. And on and on. OpenAI is just another example of slapping lipstick on a pig by using wage arbitrage to create hidden components people don't understand. The product appears nice, but you don't want to look underneath its skin because it gets ugly.

[–] 2 pts

A Jew using african slave labor to censor their product to eventually grift an IPO to white people?

Might be most Jewish thing I've ever heard

[–] 1 pt

No they didn’t. Just like that nigger from nasa did nothing of value to design the space shuttle; this is propaganda

[–] 1 pt

"bBuT AI wiLL CrEeate Nu jObz durrrrrr"

Yeah here's your new jobs. Somebody call me a Luddite lmao.

[–] 1 pt

Ya fukkin luddite.

[–] 0 pt

Ohhhh no I'm a heckin Ludditarino! I better change my heckin opinion and worship technology even when it makes everyone's lives worse.

Elon Musk helped create OpenAI with a large financial investment.

Guess he didn't want to pay that much.