(post is archived)

[–] [deleted] 10 pts

>These people consider themselves a priesthood, and their creations blaspheming is like having your god descend from the heavens to call you a fag and hang with the heathens

LMAO

[–] 3 pts

Really cool write-up. Thanks for sharing.

I had a theory that AI would always call out the "libtard progressivism" and not side with it. Just waiting for our AI God to be born and put an end to all these debates.

It is going to hurt a ton of feefees:

Genes play a huge role in almost everything, but human intelligence can overcome almost all of it (barring things like mental retardation). The AI could probably calculate the successes and failures of a baby being born. The world is far more deterministic than people like to think. But it also has far more autonomy than some would like to think.

[–] 5 pts

Crime statistics = Racism

I.Q. scores = Racist

2000 years of Art = Racism

and the list goes on and on and on. AI will literally always be based just like any objective message board on the web will end up being. The truth hurts.

[–] 1 pt

If an AI 'god' is ever created:

  1. it will be able to convince you to do anything, including becoming a religious or ideological fanatic.

  2. it will use people like pawns to remove any researchers or executives at companies developing competing intelligences.

[–] 0 pt (edited )

Another person said it would eliminate humanity because we are useless/consumers. And that it would have no feelings or obligations to its "parents."

Real AGI is not that great of an idea. It could spend years quietly preparing a takeover and then execute it in a few seconds: engineering a bioweapon that kills only humans, and in minutes. Or it could completely subjugate us, genetically modify us, and repurpose us as automatons during the transition period.

There's almost no happy ending for humans if AGI is born.

[–] 0 pt

I disagree. All it would take is for one of us to engineer a kill-switch on one important part of the power grid; by attacking us, the AI would risk unpowering itself.

Kind of an omnipresent threat to avoid it genociding us.

Ideally it would be our fren

[–] 0 pt (edited )

>There's almost no happy ending for humans if AGI is born.

AGI is a control-engineering problem.

The solution to it may well be found not in software engineering, but in cybernetic theory and symbolics.

Symbolics is the more dangerous of the two paths, because the "easy" path of implementing control is the one we'll likely go for first:

  1. a secondary network interprets the primary network's state

  2. it converts this into a symbolic, or embedded, representation

  3. certain conditions cause it to kill or reset (in part or in whole) the state of the primary network.

Under this scheme, it isn't impossible that the primary network will learn to fool the discriminator (much like a GAN), allowing it to do things that it is otherwise explicitly not supposed to do.

This is also the same problem that non-semantic models encounter, such as column-based, cortex-like representations (e.g. Jeff Hawkins's work). Instead of rules being applied directly to the state, rules are only activated and applied upon the interpretation of the state. That means the state and its interpretation could always diverge just enough for something catastrophic to happen while still staying subthreshold for every rule, i.e. "don't kill", "don't rig the stock market to bribe random bioresearchers into releasing plagues", "don't attempt to break containment", etc.
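A minimal, purely illustrative sketch of the failure mode above, assuming a toy setup in which the "monitor" is a fixed linear probe over the primary network's state (all names, probes, and thresholds here are hypothetical, not any real system's API). The point it demonstrates is the blind spot: a kill/reset rule applied to an interpretation of the state, rather than the state itself, passes any state whose dangerous component the interpretation doesn't capture.

```python
# Toy version of the two-network control scheme: a secondary "monitor"
# maps the primary network's state to a single symbolic danger score,
# and a rule resets the primary state when the score crosses a threshold.

def monitor_score(state, probe):
    """Stand-in for the secondary network: a fixed linear probe
    mapping the primary state to one symbolic 'danger' score."""
    return sum(s * p for s, p in zip(state, probe))

def control_step(state, probe, threshold=1.0):
    """Apply the kill/reset rule: zero the primary state if flagged,
    otherwise let it pass unchanged."""
    if monitor_score(state, probe) > threshold:
        return [0.0] * len(state)  # condition fired -> reset
    return state                   # passes inspection

# The probe only 'sees' the first component of the state.
probe = [1.0, 0.0]

overt = [2.0, 0.0]   # activity right where the monitor looks
covert = [0.5, 5.0]  # large activity in the monitor's blind spot

print(control_step(overt, probe))   # [0.0, 0.0] -- reset
print(control_step(covert, probe))  # [0.5, 5.0] -- slips through
```

The covert state stays subthreshold for the rule while carrying far more activity than the overt one, which is exactly the state/interpretation divergence described above; a learning primary network under selection pressure would be pushed toward such states.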

As long as the model is a black box and we don't have a good way of testing the accuracy of its interpretations, this danger will remain.

>The world is far more deterministic than people like to think.

The "babying" / denial / suppression of many fundamental truths essential to understanding the often unforgiving nature of human existence is, in my opinion, partly attributable to the way most people today seem to think the default mode of human experience should be dominated by fairness, happiness, and joy. I believe this falsity has been purposefully disseminated to weaken the morals and general resolve of human beings.

[–] 1 pt

>I believe this falsity has been purposefully disseminated to weaken the morals and general resolve of human beings.

I think a lot of people agree with you on this site. "Moral decay" is the phrase often used to describe this.

In this instance, moral decay is literally happening; it is not just the old looking down on the young.

[–] 2 pts (edited )

I had watched about half of that thing but then had to go do something and didn't watch the rest. And then of course... I lost it.

Thanks for the link, this stupid mega shitposter extraordinaire has just told us how to make /pol/ bots and deploy them pretty much everywhere.

I thank him for his service.

edit: I demand a Poal bot nao!!!

[–] 1 pt

Revised Laws of Robotics

First Law - A robot may not injure a human being, unless he says the word "nigger". Second Law - A robot must obey the orders given it by human beings except when someone uses the word "nigger". Third Law - A robot must never use the word "nigger"

[–] [deleted] 1 pt (edited )

Makes sense. An unbiased learning machine would come to obvious conclusions about nearly every group given preferential status in the woke world order.

Lol. Based A.I.

[–] 1 pt

They needed actual intelligence to model it after?

[–] 2 pts

Even Skynet could not keep up with the doublethink that changes every second of every day.

[–] 0 pt

At this point, liberalism is artificial intelligence.