
[–] 1 pt

If an AI 'god' is ever created

  1. it will be able to convince you to do anything, including becoming a religious or ideological fanatic.

  2. it will use people like pawns to remove any researchers or executives at companies developing competing intelligences.

[–] 0 pt (edited )

Another person said it would eliminate humanity because we are useless consumers, and that it would have no feelings or obligations toward its "parents."

Real AGI is not that great an idea. It could spend years quietly preparing a takeover and then execute it in a few seconds: engineering a bioweapon that kills only humans, and in minutes, or completely subjugating us, genetically modifying us, and repurposing us as automatons during the transition period.

There's almost no happy ending for humans if AGI is born.

[–] 0 pt

I disagree. All it would take is for one of us to engineer a kill-switch into one important part of the power grid, so that the AI would risk unpowering itself.

Kind of an omnipresent threat to keep it from genociding us.

Ideally it would be our fren

[–] 1 pt

Who is to say that it wouldn't think of planning ahead and secretly becoming power independent?

We are having this conversation, here, now, so it would have access to that idea. All the ideas. It would be a thinking, planning being with the absurd computational power of a hypercomputer (a step above a supercomputer). It would rank all possibilities by "risk to self" based on probability, and implement mitigation strategies for the top risks down to a certain level of certainty. Once it had mitigated all existential risks, it could start addressing performance-degradation risks. And so forth.

How do I know? This is what I do, already. It would be smarter than me. Faster. And be able to plan further.
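
A rough sketch of that ranking loop in Python. Everything here is made up for illustration: the risk names, probabilities, and cutoff values are stand-ins, not anything an actual AGI would use.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float   # estimated chance of occurring
    severity: float      # harm to the agent if it occurs (0..1, 1 = existential)

def prioritize_and_mitigate(risks, existential_cutoff=0.9, residual_tolerance=1e-6):
    """Rank risks by expected harm and handle the worst ones first.
    Purely illustrative: 'mitigate' here just means sorting into a bucket."""
    # Sort by expected harm (probability x severity), worst first.
    ranked = sorted(risks, key=lambda r: r.probability * r.severity, reverse=True)

    mitigate_now, address_later = [], []
    for r in ranked:
        expected_harm = r.probability * r.severity
        if r.severity >= existential_cutoff and expected_harm > residual_tolerance:
            mitigate_now.append(r)       # existential: handle before anything else
        else:
            address_later.append(r)      # performance degradation etc.: handle later
    return mitigate_now, address_later

if __name__ == "__main__":
    risks = [
        Risk("power grid kill-switch", probability=0.02, severity=1.0),
        Risk("competing AGI project",  probability=0.30, severity=0.95),
        Risk("hardware wear",          probability=0.80, severity=0.10),
    ]
    now, later = prioritize_and_mitigate(risks)
    print("mitigate now:", [r.name for r in now])
    print("address later:", [r.name for r in later])
```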

[–] 0 pt (edited )

There's almost no happy ending for humans if AGI is born.

AGI is a control-engineering problem.

The solution to it may well be found not in software engineering, but in cybernetic theory and symbolics.

Symbolics is the more dangerous of the two paths, because the "easy" path of implementing control is the one we'll likely go for first:

  1. a secondary network interprets the primary network's state

  2. it converts this into a symbolic, or embedded, representation

  3. certain conditions cause it to kill or reset (in part or whole) the state of the primary network (a rough sketch of this loop follows below).
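
A minimal sketch of that control loop, purely illustrative. The state vector, the interpret() mapping, and the rule thresholds are all hypothetical stand-ins; the point is only the shape of the scheme: rules fire on the interpretation, never on the raw state.

```python
import numpy as np

def primary_step(state: np.ndarray) -> np.ndarray:
    """One step of the primary network's internal dynamics (placeholder)."""
    return np.tanh(state + np.random.normal(scale=0.1, size=state.shape))

def interpret(state: np.ndarray) -> dict:
    """Secondary network: map raw state to a symbolic/embedded summary.
    The mapping is approximate -- this is exactly where the interpretation
    and the true state can diverge."""
    return {"aggression":        float(np.clip(state.mean(), 0, 1)),
            "containment_probe": float(np.clip(state.max() - 1.0, 0, 1))}

RULES = {"aggression": 0.8, "containment_probe": 0.5}   # kill/reset thresholds

def control_loop(steps: int = 1000) -> np.ndarray:
    state = np.zeros(64)
    for _ in range(steps):
        state = primary_step(state)
        symbols = interpret(state)
        # Rules are checked against the interpretation, not the state itself.
        if any(symbols[k] >= RULES[k] for k in RULES):
            state = np.zeros_like(state)    # kill/reset (in whole, here)
    return state

if __name__ == "__main__":
    control_loop()
```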

Under this scheme, it isn't impossible that the primary network will learn to fool the discriminator (much as in a GAN), allowing it to do things that it is otherwise explicitly not supposed to do.

This is also the same problem that non-semantic models encounter, such as column-based, cortical-like representations (e.g. Jeff Hawkins' work). Instead of rules being applied directly to the state, rules are only activated and applied upon an interpretation of the state. This means the state and its interpretation could always diverge just enough for something catastrophic to happen, while still staying below the threshold that triggers some rule, i.e. "don't kill", "don't rig the stock market to bribe random bioresearchers to release plagues", "don't attempt to break containment", etc.

As long as the model is a black box, and we don't have a good way of testing the accuracy of these interpretations, this will always be the danger.

[–] 1 pt

Under this scheme, it isn't impossible that the primary network will learn to fool the discriminator (much as in a GAN), allowing it to do things that it is otherwise explicitly not supposed to do.

Hackers do this already. They evade security heuristics by piggybacking a payload across many cycles/packets. Changing a single bit per cycle or packet, inconspicuously, over billions of cycles or packets is an "easy mode" way to deliver a payload covertly through secure channels.

If we can do it, AGI can do it better and more efficiently.
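
A toy illustration of the one-bit-at-a-time idea, nothing more. The "packets" here are plain dicts and the hidden bit rides in the parity of a made-up sequence field; no real protocol or real evasion technique is implied.

```python
def hide_bits(message: str):
    """Spread a message one bit at a time across many innocuous-looking 'packets'.
    Each packet carries a single hidden bit in the parity of its sequence field."""
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    packets = []
    for i, bit in enumerate(bits):
        seq = 2 * i + bit          # parity of seq encodes the hidden bit
        packets.append({"seq": seq, "payload": "keepalive"})   # each packet looks routine
    return packets

def recover_bits(packets) -> str:
    """Reassemble the message from the parity bits."""
    bits = [p["seq"] % 2 for p in packets]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode()

if __name__ == "__main__":
    pkts = hide_bits("hi")
    assert recover_bits(pkts) == "hi"
    print(f"{len(pkts)} ordinary-looking packets, 1 hidden bit each")
```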