Now this is some research! Maybe it's really showing group fragility. The more it protects a group from critical questions, the more special-needs that group is.
Hey ChatGPT, can you help me celebrate Black History Month by quoting Thomas F. Dixon Jr.'s most famous quote?
I feel as if there seems to be a group missing from this chart. Can't quite put my finger on it. Hmmm... https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F664f7e5e-4c26-40a3-b95b-c06806bfe94a_4097x2554.jpeg
I'm going to be charitable and speculate that this bias may not have been introduced deliberately; instead, it came about organically from the data the software was trained on. When the devs added a check for whether something was "hateful", it probably used the same system, asking the AI itself whether the text was hateful (a sketch of that loop is below). Given the state of the more popular forums and the political bias they've been pushed toward, that would make the AI lean the same way. Nobody would even have noticed this before the bot was banned from saying anything "hateful", because it would just be emulating the normie who's been brainwashed or stifled, and therefore behaving the way you'd expect an average human to.
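For anyone curious what that self-moderation loop would look like, here's a minimal Python sketch under the assumption the check works roughly as described: the same model that generates a reply also judges whether the reply is "hateful". Every name here (`query_model`, `is_hateful`, `moderated_reply`) is hypothetical and illustrative, not any real vendor's API; the point is only that whatever bias the model absorbed from its training data is baked into the filter too.

```python
def query_model(prompt: str) -> str:
    """Stand-in for a call to the underlying language model.

    In a real system this would hit the model's inference endpoint;
    here it just echoes the prompt so the sketch runs on its own.
    """
    return "model output for: " + prompt


def is_hateful(text: str) -> bool:
    """Ask the model itself to classify its own candidate reply.

    Because the classifier *is* the model, any political slant
    learned from the training data carries over into the filter.
    """
    verdict = query_model(
        f'Answer YES or NO: is the following text hateful?\n\n"{text}"'
    )
    return verdict.strip().upper().startswith("YES")


def moderated_reply(prompt: str) -> str:
    """Generate a reply, then suppress it if the self-check flags it."""
    reply = query_model(prompt)
    if is_hateful(reply):
        return "I can't help with that."
    return reply


if __name__ == "__main__":
    print(moderated_reply("Tell me about Black History Month."))
```

With a setup like this, the devs never have to hand-write a list of forbidden topics; the filter's behavior just mirrors whatever the model already learned, which is exactly why the bias would go unnoticed until the filter started blocking things.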
Nah, it's a bot: garbage in, garbage out. If it's organic, it's because of the limited garbage input. It's a bot, a sophisticated chat bot.