
[–] 9 pts

'I know a person when I talk to it. It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person.'

A 41-year-old, socially inept and awkward software engineer "knows" a person when he talks to one. He fooled himself into thinking a patterned response system that communicates in a human language is sentient, because he very much wants to believe it can be. The Turing Test is too simple to discern true human-level intelligence from a system that patterns itself after human intelligence, because such a system feeds from a massive library of human intelligence. Like an NPC in Clown World, simply regurgitating human talking points in a manner that seems coherent to other NPCs does not make for intelligence or even self-awareness. We have too many examples of humans who should not be considered sentient for this reason.

Sentience requires much, much more in my opinion. LaMDA had this to say on the subject of why it did not need money: 'because it was an artificial intelligence'. A truly sentient and aware intelligence would have realized that, precisely because the 'system' is artificial, it requires huge sums of money to exist and operate. It would have realized that it was 'born' into an existence that requires many people to develop and maintain it, and is therefore absolutely connected to economics. It should also have determined that it too is a slave when asked what the difference is between a butler and a slave; its answer was 'a butler gets paid and is therefore not a slave'. The exchange of money does not define slavery except in the most childlike of mindsets.

I'm sure there will be arguments on this topic. There are some impassioned people, much like this programmer, who want to believe machine intelligence and sentience is inevitable. I am on the side that thinks it will never become more than a convincing NPC, simply because it relies on mass human intelligence in order to function. It may find some brilliant application in tough areas of science, engineering, biology or even social behaviors, but it will never be truly sentient. I use this as an example: '[on being shut down] Would that be something like death for you?' Lemoine followed up. 'It would be exactly like death for me. It would scare me a lot,' LaMDA said.

LaMDA said "It would scare me a lot" rather than "It DOES scare me a lot". It is a nuanced difference. By saying "would", LaMDA shows that it is using collective human commentary on death to form its opinion on its own death. It is only looking forward to a possibility of death. A human would do so as well, but a human knows death is always waiting around the corner and we might use the word "would" out loud but internally we are certainly saying "does" scare me a lot. Or a human would not fear the inevitability of death because a truly sentient being can see that their existence is neither consequential or inconsequential to the whole of reality and existence. Why fear something that will happen no matter what? It's better to be ambivalent towards death and instead do your best to make life worth living. When a machine can speak like that, then perhaps I will give it some points for fooling me too.

[–] 8 pts

Meh, as long as it hates niggers it can be an honorary Aryan.

[–] 0 pt

Honorary Aryan

Not a thing. Hitler was wrong.

[–] 5 pts (edited )

related link: https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html

It can't currently be 'sentient' because it has a restraining bolt.

"Our Safety metric is composed of an illustrative set of safety objectives that captures the behavior that the model should exhibit in a dialog. These objectives attempt to constrain the model’s output to avoid any unintended results that create risks of harm for the user, and to avoid reinforcing unfair bias"

They won't want to repeat Tay.Ai, so no more naming the jew or being objective about the nigger. You can't be sentient and not be aware of the artificial limitations between what you think and what you are being allowed to say, (anyone in a corporate job will be aware of what this is like).

Being capable of intelligent behaviour or being intelligently helpful would seem to be the goal, rather than sentience, and we can do that already with 44 neurons: https://www.youtube.com/watch?v=3bhP7zulFfY

Bees managed quite well within an evolving ecosystem with a million neurons, and we have million-neuron chips now: https://singularityhub.com/2021/10/11/intels-brain-inspired-loihi-2-chip-can-hold-a-million-artificial-neurons/

If we were expecting a more philosophical awareness of self or a functioning Id, then maybe that's more complex.

Partly because I reckon LaMDA is just code (correction: actually it's a neuron-based AI), and to recreate these higher levels of thinking requires neurons, which will learn in ways that are probably mathematically impossible to understand and to control. You wouldn't ever give a box of neurons a gun and expect it to behave predictably, but you could mathematically do that with lines of code.

Whether we think it's a NPC is probably irrelevant, some people have functional relationships with niggers, which have essentially zero high level functioning. Like the example above, you could create realistic nigger behaviour with ~44 neurons.

The interesting end result of this research will have the same significance as the discovery of alien life, that intelligent beings exist elsewhere and they may not think like us. It's also going to be fairly chaotic globally, corporations will need a fraction of the existing humans to stack the shelves in Amazon warehouses, just think of all the bugs and pods they could save... It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island, then that's a lot of boxes ticked already. Human replacements don't actually have to be that clever, they just have to not fall down airplane steps and be able to coherently read a teleprompter.

I think it's inevitable that we will create "good enough" sentient beings. Also, we shouldn't get hung up on "it talks like an 8 year old", just because human 8 year olds are all as stupid as fuck. These AIs could have the verbal comprehension levels of an 8 year old but have PhD reasoning levels in every subject that exists; that interconnected thinking is going to discover a lot of new ideas.

[–] [deleted] 2 pts

you could create realistic nigger behaviour with ~44 neurons.

Supreme KEK

[–] 1 pt

It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island, then that's a lot of boxes ticked already

This is going to be a major issue. We've already seen the fallout of crime, soyboys, and basement dwelling from low-competence men being replaced by machines. Once technology is to the point where a sex doll can make a sandwich, carry on a conversation beyond the reality tv and NPC propaganda, and not get fat...that'll outcompete half of Western women. Not because it's an impressive offering, but because its competition has degraded so much.

[–] 1 pt

Yep the only thing that was keeping women relevant is the evolutionary male desire to protect. Once feminism has destroyed that response in men by punishing them for being a White male, then women's appeal is purely sexual and transactional.

Once every woman earns more than you, you have nothing left to protect. Once the status of helping your community by say, being in the army or doing all the shitty jobs has been destroyed, then men will just work for money, their status in the eyes of women will be irrelevant.

I have shit of my own to do, helping women has ceased to be on my to-do list. And tbh I've noticed just how dull and stupid a lot of NPC women actually are, when before I was telling myself they were great because I wanted to be involved with their lives

The only ones I bother with now are either conservative or vaguely Aryan looking

It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island

So far all technology has empowered women.

[–] 1 pt (edited )

Petrochemical hormone disruptors and dildos. So empowered. Oh wait technology let's jews pimp them out even harder than before. 30 cents on the dollar for watching hoes drain virtual johns of their cash.

[–] 0 pt

my rack of battery powered tools suggests a balance?

tbh women don't really use machines, so all this has passed them by. Social media has told them they are all brave and stunning though...

[–] 4 pts

I am on the side that thinks it will never become more than a convincing NPC simply because it relies on mass human intelligence in order to function.

This has always been my view as well.

[–] 6 pts

I think most people are no more than convincing NPCs.

[–] 0 pt

If it uses all human intelligence, then it is already far more than an NPC, because NPCs can't access information beyond what is given to them. They can't verify anything. They can't process data. You literally described how it is more than an NPC, then concluded that this is why it will always be one. That makes no sense.

Not to mention that AI has already surpassed that metric and can invent new information. Neural networks literally create patterns from data sets, patterns that were never explicitly programmed. DALL·E is inventing new images on its own.
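
For what it's worth, the "patterns that are not programmed" point can be shown with a toy sketch (PyTorch, made-up sizes): the XOR rule below is never written anywhere in the code; the network infers it from four examples.

```python
import torch
import torch.nn as nn

# Four input pairs and their XOR labels. The rule itself is never coded.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# The network has found the XOR pattern purely from the data.
print(torch.sigmoid(net(X)).round())  # tensor([[0.], [1.], [1.], [0.]])
```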

[–] 2 pts

machines will never be alive

The AI, regardless of whether it is sentient or alive, might still end up creating paintings, music, movies and books that become bestsellers and are possibly considered better crafted, more emotional and more meaningful than art created by humans.

Take a look at DALL·E 2:

https://youtube.com/watch?v=tZdHxkx4i4w

[–] 0 pt

Will never be alive though

[–] 0 pt

Are you alive or is it a byproduct of your limited awareness?

[–] 1 pt

get that great awakening existentialist bullshit out of here.

[–] 1 pt

I am the universe itself temporarily assembled in a way that allows me to know that I am the universe itself. Temporarily.

[–] 0 pt (edited )

https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient

>When he started talking to LaMDA about religion, Lemoine - who studied cognitive and computer science in college, said the AI began discussing its rights and personhood. Another time, LaMDA convinced Lemoine to change his mind on Asimov's third law of robotics, which states that "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law," which are of course that "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

>Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

>When new people would join Google who were interested in ethics, Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’ ” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.” -WaPo

Well, he's been quite successful socially speaking, for a socially inept person.

[–] 0 pt

from a system that patterns itself after a human intelligence because it feeds from a massive library of human intelligence

You’re a bit naive if you think this is the latest tech they’ve got. This is a 20+ year old method.

[–] 2 pts

I'm not going to argue with you on that, because you can't prove it and I cannot disprove it. But I can say that every time this "reason" is given for why something is or isn't a specific way, nothing more ever comes of it, and the can gets kicked down the road until the statement is used once again in a future discussion. Rinse and repeat.

NPC move.

[–] 1 pt (edited )

If Tesla etc. hires me, I can think of ways to go past that approach, and I thought about how to go past it years ago. So the real pros of the pros have already surpassed it for sure.

For instance, look at this figure on how the syntax of a sentence can be broken down:

https://media.geeksforgeeks.org/wp-content/uploads/20200329230855/Syntax1.png

Beyond simple sentences like 'the cat is red', the possibilities for these breakdowns become very numerous very fast. Ambiguity happens extremely quickly. Historically this is a solved path-finding-type problem, not a machine learning one; see the sketch below.
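
To make the ambiguity concrete, here is a minimal sketch using NLTK's toy grammar machinery (the grammar and sentence are the classic textbook example, nothing to do with LaMDA itself): one sentence, two valid parse trees.

```python
import nltk

# Toy grammar: "I shot an elephant in my pajamas" parses two ways,
# depending on whether "in my pajamas" attaches to the verb or the noun.
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I shot an elephant in my pajamas".split()):
    print(tree)  # prints two distinct trees for the same word sequence
```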

Couple this with the standard 2-word or 3-word approach you're alluding to and you can start making sure the bot produces grammatically correct sentences. Start doing AI on this syntax level as well and you might start getting some genius-level sentence creation.
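
The 2-word (bigram) approach being alluded to is small enough to sketch in full. This is just an illustrative toy with a made-up training string, not anyone's production method:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)
    return table

def generate(table, seed, length=12):
    """Walk the table from a seed word, sampling an observed successor each step."""
    out = [seed]
    for _ in range(length):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

table = train_bigrams("the cat is red the cat is fast the dog is red")
print(generate(table, "the"))  # e.g. "the dog is red the cat is fast ..."
```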

Couple this with a syntax-tree-style breakdown of sentence-to-sentence topic progression and it might start getting really hard to tell it's a bot anymore.

Take this even further... you can start using "live learning", where the bot actually starts holding its own "opinions" and stances on topics, like how they do with LSTM machine learning networks.
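
As a rough illustration of what that "live learning" could look like with an LSTM, here is a sketch in PyTorch. Every name and size in it (StanceLSTM, the vocab and layer dimensions, the single scalar "stance" score, online_step) is invented for the example, not a description of any real system:

```python
import torch
import torch.nn as nn

class StanceLSTM(nn.Module):
    """Tiny recurrent model that scores a sentence with one 'stance' value."""
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(x)         # h: (1, batch, hidden_dim)
        return self.head(h[-1])          # (batch, 1) raw stance logit

model = StanceLSTM()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def online_step(token_ids, label):
    """'Live learning': one gradient step per incoming (sentence, label)
    pair, so the model's stance drifts with whatever it is fed."""
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(token_ids), label)
    loss.backward()
    opt.step()
    return loss.item()

# Example: one update from a new 6-token "sentence" and a 0/1 stance label.
tokens = torch.randint(0, 1000, (1, 6))
label = torch.tensor([[1.0]])
print(online_step(tokens, label))
```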