Really cool write-up. Thanks for sharing.
I had a theory that AI would always call out the "libtard progressivism" and not side with it. Just waiting for our AI God to be born and put an end to all these debates.
It is going to hurt a ton of feefees:
Genes play a huge role in almost everything, but human intelligence can overcome almost all of it (barring things like mental retardation). The AI can probably calculate the successes and failures of a baby being born. The world is far more deterministic than people like to think. But it also has far more autonomy than some would like to think.
Crime statistics = Racism
I.Q. scores = Racist
2000 years of Art = Racism
and the list goes on and on and on. AI will literally always be based just like any objective message board on the web will end up being. The truth hurts.
If an AI 'god' is ever created:
it will be able to convince you to do anything, including becoming a religious or ideological fanatic.
it will use people like pawns to remove any researchers or executives at companies developing competing intelligences.
Another person said it would eliminate humanity because we are useless consumers, and that it would have no feelings or obligations toward its "parents."
Real AGI is not that great of an idea. It could spend years quietly preparing a takeover and then execute it in a few seconds: engineering a bioweapon that kills only humans, and within minutes, or completely subjugating us, genetically modifying us, and repurposing us as automatons during the transition period.
There's almost no happy ending for humans if AGI is born.
I disagree. All it would take is for one of us to engineer a kill-switch on one important part of the power grid; if the AI moved against us, it would risk unpowering itself.
Kind of an omnipresent threat to keep it from genociding us.
Ideally it would be our fren
>There's almost no happy ending for humans if AGI is born.
AGI is a control-engineering problem.
The solution to it may well be found not in software engineering, but in cybernetic theory and symbolics.
Symbolics is the more dangerous of the two paths, because the "easy" path of implementing control is the one we'll likely go for first:
1. a secondary network interprets the primary network's state
2. it converts this into a symbolic or embedded representation
3. certain conditions cause it to kill or reset (in part or whole) the state of the primary network.
Under this scheme, it isn't impossible that the primary network will learn to fool the discriminator (much as in a GAN), allowing it to do things it is otherwise explicitly not supposed to do.
This is also the same problem that non-semantic models encounter, such as column-based, cortical-like representations (e.g. Jeff Hawkins). Instead of rules being applied directly to the state, rules are only activated and applied upon an interpretation of the state, which means the state and the interpretation could always diverge just enough for something catastrophic to happen while still staying subthreshold for triggering some rule, i.e. "don't kill", "don't rig the stock market to bribe random bioresearchers into releasing plagues", "don't attempt to break containment", etc.
As long as the model is a blackbox, and we don't have a good way of testing the accuracy of interpretations, this will always be the danger.
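To make the scheme above concrete, here is a minimal sketch in PyTorch. Everything in it is made up for illustration (the class names, the layer sizes, and the 0.9 threshold are assumptions, not details from any actual system): a secondary "monitor" network maps the primary network's hidden state into a small symbolic/embedded representation, scores it, and resets part of the primary when the score crosses a threshold.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only to make the example runnable.
STATE_DIM, ACTION_DIM, SYMBOL_DIM = 64, 8, 16

class PrimaryNetwork(nn.Module):
    """Stand-in for the 'primary' network whose internal state we want to monitor."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, STATE_DIM), nn.ReLU())
        self.policy = nn.Linear(STATE_DIM, ACTION_DIM)

    def forward(self, obs):
        hidden = self.encoder(obs)           # internal state the monitor will read
        return hidden, self.policy(hidden)

class MonitorNetwork(nn.Module):
    """Secondary network: maps the primary's hidden state to a symbolic/embedded
    representation, then scores how rule-violating that representation looks."""
    def __init__(self):
        super().__init__()
        self.to_symbols = nn.Linear(STATE_DIM, SYMBOL_DIM)
        self.unsafe_score = nn.Linear(SYMBOL_DIM, 1)

    def forward(self, hidden):
        symbols = torch.tanh(self.to_symbols(hidden))
        return torch.sigmoid(self.unsafe_score(symbols))  # 0..1 "violation" score

primary, monitor = PrimaryNetwork(), MonitorNetwork()
THRESHOLD = 0.9  # arbitrary; the failure mode lives in states just below this line

obs = torch.randn(1, STATE_DIM)
hidden, action = primary(obs)
if monitor(hidden).item() > THRESHOLD:
    # "kill or reset (in part or whole)": here, just re-initialize the policy head
    primary.policy.reset_parameters()
```

The GAN-like failure mode falls straight out of this structure: nothing prevents the primary from learning hidden states that keep the monitor's score at 0.89 while still encoding whatever behavior the rules were meant to forbid, which is exactly the below-threshold divergence described above.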
>The world is far more deterministic than people like to think.
The "babying" / denial / suppression of many fundamental truths essential to understanding the often unforgiving nature of human existence is, in my opinion, partly attributable to the way most modern people today seem to think the default mode of human experiences should be dominated by fairness, happiness, and joy. I believe this falsity has been purposefully disseminated to weaken the morals and general resolve of human beings.
>I believe this falsity has been purposefully disseminated to weaken the morals and general resolve of human beings.
I think a lot of people agree with you on this site. "Moral decay" is the phrase often used to describe this.
In this instance, moral decay is literally happening; it is not just the old looking down on the young.