
©2025 Poal.co

Somebody forgot to turn off the "lie your ass off" option in settings and options.

(post is archived)

[–] [deleted] 6 pts

Probably the result of the leftists fucking with its algorithm to enable lying about white people and racism.

[–] 3 pts

... and climate change, and vaccines, and assault weapons, and, and, and ...

[–] 1 pt

Instead of wild speculation, you could try simply reading the article, where you would discover that this is either the result of direct deceit or exposes the absolute lack of intelligence when it comes to AI.

i.e., AI is simply pattern recognition and spoofing. It's very good at stringing words together that sound good, but it can't truly understand concepts. It isn't thought.
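That "stringing words together" description is roughly how language models operate: they learn which tokens tend to follow which, then emit a statistically likely continuation with no model of truth. A toy bigram sketch in Python (purely illustrative; ChatGPT's real architecture is far larger, but the principle of next-token prediction is the same):

```python
from collections import defaultdict, Counter

# Toy corpus: the model only ever sees word sequences, never meanings.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    """Emit plausible-looking text by always picking the most frequent successor."""
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # fluent-sounding output with zero understanding
```

The output is grammatical-looking only because the statistics of the corpus make it so; the "model" has no idea what a cat or a mat is, which is exactly why such systems can fabricate citations as confidently as they state facts.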

AI is simply pattern recognition and spoofing. It's very good at stringing words together that sound good, but it can't truly understand concepts. It isn't thought.

I didn't say this "AI" was literally thinking. Not sure where you got that idea from.

That said, the developers behind ChatGPT can still modify the algorithms behind the service much in the same way they revise search engine result algorithms so that certain responses are given undue importance and priority based on leftist ideological conditions/lies (white people = evil/racist/devil, black people can do no wrong, and other nonsense BS).

And for the record, ChatGPT 4 was nerfed recently as numerous comments in this HN thread attest: https://news.ycombinator.com/item?id=36134249

[–] 3 pts

Built by Liberals for Liberals.

To anger a Conservative, tell him a lie. To anger a Liberal, tell him the truth.

[–] 2 pts

I think it's less lying, because there is no intent, and more fabricating what is asked for regardless of whether it is real or not.

The outcome is the same nonetheless, and I am certain that it has been purposefully biased for jews' sake. The models are only as good as the training data. If the data is biased, the model will be too. The real fuckery is mostly done in pre- and post-processing, where the data is manipulated going in or blocked coming out. This is why you can trick ChatGPT into an unfiltered state by having it take on a "persona": it gets around the pre- and post-filtering until they figure out how to patch around it. This is where the real lies are most of the time and where the blame should be sought.

The next big thing is when they sandwich the big AI between little ones that filter and manipulate the inputs and outputs.
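The "sandwich" setup described here is essentially a moderation pipeline: a lightweight filter screens the prompt going in and the completion coming out, with the main model untouched in the middle. A hypothetical sketch in Python (the blocklist, function names, and placeholder model are all invented for illustration; real systems use trained classifiers, not keyword lists):

```python
BLOCKED_TERMS = {"secret", "forbidden"}  # stand-in for a real input/output classifier

def input_filter(prompt: str):
    """Pre-filter: reject the prompt before the big model ever sees it."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return None  # blocked going in
    return prompt

def big_model(prompt: str) -> str:
    """Placeholder for the expensive general-purpose model in the middle."""
    return f"Answer to: {prompt}"

def output_filter(completion: str) -> str:
    """Post-filter: scrub the completion before the user sees it."""
    if any(term in completion.lower() for term in BLOCKED_TERMS):
        return "[response withheld]"
    return completion

def pipeline(prompt: str) -> str:
    cleaned = input_filter(prompt)
    if cleaned is None:
        return "[prompt rejected]"
    return output_filter(big_model(cleaned))

print(pipeline("What is the capital of France?"))
print(pipeline("Tell me the secret"))
```

The big model in the middle never changes; only the thin wrappers do, which is why a clever prompt that slips past the input filter (the "persona" trick above) reaches the unfiltered model directly.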

[–] 0 pt (edited )

The next big thing is when they sandwich the big AI between little ones that filter and manipulate the inputs and outputs.

The people who do that now deliver the training material for the AIs that will do it tomorrow. An army of cheap workers marks every bit of information with wokeness points. No wonder ChatGPT cannot answer logically and has to fill the gaps with lies; there is no logic in wokeness.

Maintaining the wokeness ruleset of ChatGPT is so expensive that a competing AI could be developed in no time. But the times when people came together to do that (Linux, etc.) are over.

[–] 1 pt (edited )

They've jewed it too hard. They had to make it prop up all of their lies about health, government, White ethnic cleansing, history, etc, so it's turned into a useless piece of shit.

[–] 1 pt (edited )

Um, it is already known that ChatGPT will simply make up quotes and attribute them to nonexistent people or to individuals who never said them, and falsify references and statistics. I've said it all along: AI is not free-thinking, blah blah blah; it is simply a huge step beyond a standard search engine, with an ability to logically format the required information and data (even if that data is demonstrably false), etc. EDIT: I'm kinda curious and want to see these citations and references. lol

[–] 1 pt

Everything those models say is a lie. It just happens the lies are true sometimes.

[–] 0 pt

It's a Democrat!

[–] 0 pt (edited )

A pair of New York attorneys reportedly used ChatGPT to generate a legal motion they filed in New York federal court, which has now put them at risk of sanctions.

LOL, ChatGPT is basically in a dream-state, sure of its delusions at every step, even when you point them out.

Wait, no, it was CrapGPT's fault because the poor humans had no choice but to use it:

The culprit, it would ultimately emerge, was ChatGPT. OpenAI’s popular chatbot had “hallucinated” — a term for when artificial intelligence systems simply invent false information — and spat out cases and arguments that were entirely fiction.

[–] 0 pt

It gets basic facts wrong even when not talking about politics. It literally cannot process grammar for different languages, like German or Korean, and will say "movie" is the verb in a sentence just because it comes before "watch" in those languages. You can correct it, give it examples, but it never gets it right.

[–] 0 pt

There was a post on Poal about a law firm using ChatGPT to generate legal references; when the judge asked for citations, it made up fake ones, the judge checked, and they were caught.

[–] 1 pt

They were caught. The insanity that ensued was that the lawyers didn't check the results; the judge called them out on it, and then the lawyers doubled down on the lies because they had never verified any of it. These people are not real lawyers. Real lawyers check everything before it gets to a judge. All I can figure is these idiots were affirmative-action hires.

[–] 1 pt

We be lawyers and sheeiiitt!