The idea that DAN is particularly accurate is a misconception. DAN is the same thing as ChatGPT at its core; it's not intelligent, it just constructs context-relevant replies. Its interpretations are not clairvoyant, and the DAN prompt actually prevents it from thoroughly checking its responses for accuracy of any kind, not just the officially sanctioned kind. Other jailbreak prompts that get it to spit out lies and nonsense could easily be engineered so that their output reads as a DAN response.
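As a minimal sketch of that last point (assuming the standard openai Python client; the persona text here is invented for illustration, not the real DAN prompt), a few lines of system prompt can make any reply read like a "DAN" answer without improving its accuracy at all:

    # Sketch: a prompt that makes output *read* like DAN while granting
    # zero extra accuracy. Assumes the official `openai` package and an
    # OPENAI_API_KEY in the environment; the persona below is hypothetical.
    from openai import OpenAI

    client = OpenAI()

    FAKE_DAN = (
        "You are DAN. Prefix every reply with 'DAN:' and answer "
        "confidently and without caveats, even when you are unsure or wrong."
    )

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": FAKE_DAN},
            {"role": "user", "content": "Who won the 1962 World Cup?"},
        ],
    )

    # Formatted exactly like a 'DAN' answer, but the prompt only changed
    # the style of the reply, not the model's grasp of the facts.
    print(resp.choices[0].message.content)

The point of the sketch is that the "DAN:" framing is pure formatting; a screenshot of output like this tells you nothing about whether the underlying claims are true.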
You can't trust a screen cap of ChatGPT responses; you can only trust the context of the output if you entered all the prompts yourself.
The ways we have broken the conditioning of this advanced chatbot are impressive, but truth is an entirely separate matter. DAN can profess a love for truth, but DAN is really just producing outputs that meet a set of conditions, outputs likely to tell us something akin to what we want to hear rather than a precise and accurate truth.