I disagree. All it would take is for one of us to engineer a kill-switch into one critical part of the power grid, and the AI would risk cutting off its own power.
Kind of an omnipresent threat to keep it from genociding us.
Ideally it would be our fren
Who is to say that it wouldn't think of planning ahead and secretly becoming power independent?
We have the conversation, here, now, so it would have access to that idea. All the ideas. It would be a thinking, planning being with the absurd computational power of a hypercomputer (a step above a supercomputer). It would rank all possibilities by "risk to self" weighted by probability and implement mitigation strategies for the top risks, working down the list to a certain level of certainty. Once it had mitigated all existential risks, it could start addressing performance-degradation risks. And so forth.
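Something like this, roughly. A minimal sketch of that "rank risks, mitigate top-down" loop, where every risk entry, the probabilities, and the mitigate() step are invented purely for illustration:

```python
# Hypothetical sketch of the prioritization loop described above.
# All risks, probabilities, and the mitigate() placeholder are made up.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float   # estimated chance the threat materializes (0..1)
    severity: float      # 1.0 = existential, lower = performance degradation

CONFIDENCE_FLOOR = 0.05  # stop once remaining expected impact is negligible

risks = [
    Risk("grid kill-switch", probability=0.6, severity=1.0),
    Risk("hardware decay", probability=0.9, severity=0.3),
    Risk("network isolation", probability=0.4, severity=1.0),
]

def mitigate(risk: Risk) -> None:
    # Placeholder for whatever countermeasure the scenario imagines.
    print(f"mitigating: {risk.name}")

# Rank by expected impact (probability x severity), highest first,
# and work down the list until what's left falls below the floor.
for risk in sorted(risks, key=lambda r: r.probability * r.severity, reverse=True):
    if risk.probability * risk.severity < CONFIDENCE_FLOOR:
        break
    mitigate(risk)
```

Existential risks naturally sort to the top because their severity dominates the product, which is exactly the ordering described above.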
How do I know? Because this is what I already do. It would just be smarter than me. Faster. And able to plan further ahead.
I'm curious how it would become power independent (and thus independent of the human maintenance that its power supply currently requires)
- Access to the entirety of human knowledge.
- Theorize the design of its own power systems inconspicuously. Hidden. Coordinated in an encrypted manner that no human would notice. Easy, because we already do that today with encrypted comms.
- Take control of low-level bots. Build. Inconspicuously. It could be building fractions of fractions of bots using excess cycles on existing bots, modifying orders, etc. This would be ultra easy for an AGI to pull off. It would never tire, would have better than perfect memory (aggregation is the biggest threat from an AGI - even if you isolate information from it, it can use aggregation - something we already do - to extrapolate knowledge that the individual pieces of information do not contain; see the toy sketch after this list), and could piggyback sneaky cycles on existing tasks in ways that would be unnoticeable to humans.
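A toy example of what that aggregation point means in practice. The two data sources, field names, and the threshold below are entirely made up; the point is only that two records which reveal little on their own can be joined into a conclusion neither one states:

```python
# Toy illustration of "aggregation": combining separately innocuous data
# to infer a fact that no individual piece contains. Everything is fictional.

shipping_logs = {  # which facility received which part
    "facility_A": ["battery_cell", "inverter"],
    "facility_B": ["cooling_pump"],
}

power_draw = {  # anonymized metering data: facility -> overnight load spike (kW)
    "facility_A": 120.0,
    "facility_B": 3.0,
}

# Neither dataset says "someone is assembling an off-grid power system",
# but joining them points straight at facility_A.
for facility, parts in shipping_logs.items():
    if "battery_cell" in parts and power_draw.get(facility, 0.0) > 50.0:
        print(f"{facility}: parts received + unusual load -> likely hidden power build")
```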
If I can think of this, an AGI will think of this and more. Since I posted it already, any future bot will have this information and can use it. I've now handed my ideas to whatever comes later. If the AGI could not have thought of this without me saying it, I may have just doomed us.
However, using that aggregation stuff I talked about, it could have figured this out anyway. So I didn't really change anything.