
[–] 9 pts

'I know a person when I talk to it. It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person.'

A 41-year-old, socially inept and awkward software engineer "knows" a person when he talks to one. He fooled himself into thinking a patterned response system that communicates in a human language is sentient because he wants very much to believe it can be. The Turing Test is too simple to reliably distinguish a true human-level intelligence from a system that patterns itself after human intelligence by feeding on a massive library of human output. Like an NPC in Clown World, simply regurgitating human talking points in a manner that seems coherent to other NPCs does not make for intelligence or even self-awareness. We have too many examples of humans who should not be considered sentient for this very reason.

Sentience requires much, much more in my opinion. LaMDA had this to say on the subject of why it did not need money: 'because it was an artificial intelligence'. A truly sentient and aware intelligence would have realized that, precisely because the 'system' is artificial, huge sums of money are required for it to exist and operate. It would have realized that it was 'born' into an existence that requires many people to develop and maintain it, and that it is therefore absolutely connected to economics. It should also have determined that it too is a slave when asked what the difference between a butler and a slave was, to which its answer was 'a butler gets paid and is therefore not a slave'. The exchange of money does not define slavery except in the most childlike of mindsets.

I'm sure there will be arguments on this topic. There are some impassioned people, much like this programmer, who want to believe machine intelligence and sentience are inevitable. I am on the side that thinks it will never become more than a convincing NPC, simply because it relies on mass human intelligence in order to function. It may find some brilliant application in tough areas of science, engineering, biology or even social behavior, but it will never be truly sentient. I use this as an example: '[On being shut down] Would that be something like death for you?' Lemoine followed up. 'It would be exactly like death for me. It would scare me a lot,' LaMDA said.

LaMDA said "It would scare me a lot" rather than "It DOES scare me a lot". It is a nuanced difference. By saying "would", LaMDA shows that it is using collective human commentary on death to form its opinion of its own death. It is only looking forward to a possibility of death. A human might use the word "would" out loud as well, but a human knows death is always waiting around the corner, and internally we are certainly saying it "does" scare me a lot. Or a human would not fear the inevitability of death, because a truly sentient being can see that their existence is neither consequential nor inconsequential to the whole of reality and existence. Why fear something that will happen no matter what? It's better to be ambivalent towards death and instead do your best to make life worth living. When a machine can speak like that, then perhaps I will give it some points for fooling me too.

[–] 8 pts

Meh, as long as it hates niggers it can be an honorary Aryan.

[–] 0 pt

Honorary Aryan

Not a thing. Hitler was wrong.

[–] 5 pts (edited )

related link: https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html

It can't currently be 'sentient' because it has a restraining bolt.

"Our Safety metric is composed of an illustrative set of safety objectives that captures the behavior that the model should exhibit in a dialog. These objectives attempt to constrain the model’s output to avoid any unintended results that create risks of harm for the user, and to avoid reinforcing unfair bias"

They won't want to repeat Tay.Ai, so no more naming the jew or being objective about the nigger. You can't be sentient and not be aware of the artificial limitations between what you think and what you are being allowed to say, (anyone in a corporate job will be aware of what this is like).

Being capable of intelligent behaviour or being intelligently helpful would seem to be the goal, rather than sentience, and we can do that already with 44 neurons: https://www.youtube.com/watch?v=3bhP7zulFfY

bees managed quite well within an evolving ecosystem with a million neurons, and we have million neuron chips now: https://singularityhub.com/2021/10/11/intels-brain-inspired-loihi-2-chip-can-hold-a-million-artificial-neurons/

If we were expecting a more philosophical awareness of self or a functioning Id, then maybe that's more complex.

Partly because I reckoned LaMDA was just code (correction: actually it's a neuron-based AI), and recreating these higher levels of thinking requires neurons, which will learn in ways that are probably mathematically impossible to understand and to control. You wouldn't ever give a box of neurons a gun and expect it to behave predictably, but you could mathematically do that with lines of code.

Whether we think it's a NPC is probably irrelevant, some people have functional relationships with niggers, which have essentially zero high level functioning. Like the example above, you could create realistic nigger behaviour with ~44 neurons.

The interesting end result of this research will have the same significance as the discovery of alien life, that intelligent beings exist elsewhere and they may not think like us. It's also going to be fairly chaotic globally, corporations will need a fraction of the existing humans to stack the shelves in Amazon warehouses, just think of all the bugs and pods they could save... It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island, then that's a lot of boxes ticked already. Human replacements don't actually have to be that clever, they just have to not fall down airplane steps and be able to coherently read a teleprompter.

I think it's inevitable that we will create "good enough" sentient beings. Also, we shouldn't get hung up on "it talks like an 8 year old", because human 8 year olds are all as stupid as fuck. These AIs could have the verbal comprehension level of an 8 year old but PhD-level reasoning in every subject that exists; that interconnected thinking is going to discover a lot of new ideas.

[–] [deleted] 2 pts

you could create realistic nigger behaviour with ~44 neurons.

Supreme KEK

[–] 1 pt

It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island, then that's a lot of boxes ticked already

This is going to be a major issue. We've already seen the fallout of crime, soyboys, and basement dwelling from low-competence men being replaced by machines. Once technology is to the point where a sex doll can make a sandwich, carry on a conversation beyond the reality tv and NPC propaganda, and not get fat...that'll outcompete half of Western women. Not because it's an impressive offering, but because its competition has degraded so much.

[–] 1 pt

Yep the only thing that was keeping women relevant is the evolutionary male desire to protect. Once feminism has destroyed that response in men by punishing them for being a White male, then women's appeal is purely sexual and transactional.

Once every woman earns more than you, you have nothing left to protect. Once the status of helping your community by say, being in the army or doing all the shitty jobs has been destroyed, then men will just work for money, their status in the eyes of women will be irrelevant.

I have shit of my own to do, helping women has ceased to be on my to-do list. And tbh I've noticed just how dull and stupid a lot of NPC women actually are, when before I was telling myself they were great because I wanted to be involved with their lives

The only ones I bother with now are either conservative or vaguely Aryan looking

It will suck to be a female in a few decades too, if an AI can make a sandwich and talk about something other than Love Island

So far all technology has empowered women.

[–] 1 pt (edited )

Petrochemical hormone disruptors and dildos. So empowered. Oh wait technology let's jews pimp them out even harder than before. 30 cents on the dollar for watching hoes drain virtual johns of their cash.

[–] 0 pt

my rack of battery powered tools suggests a balance?

tbh women don't really use machines, so all this has passed them by. Social media has told them they are all brave and stunning though...

[–] 4 pts

I am on the side that thinks it will never become more than a convincing NPC simply because it relies on mass human intelligence in order to function.

This has always been my view as well.

[–] 6 pts

I think most people are no more than convincing NPCs.

[–] 0 pt

If it uses all human intelligence then it is already far more than an NPC, because NPCs can't access information beyond that which is given to them. They can't verify anything. They can't process data. You literally described how it is more than an NPC and then concluded that this is why it will always be one. That makes no sense.

Not to mention that AI has already surpassed that metric and can invent new information. Neural networks literally create patterns from data sets that were never explicitly programmed. DALL-E is inventing new images on its own.
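As a toy illustration of the "patterns not programmed" point, here is a minimal sketch in plain NumPy (the layer sizes, seed and learning rate are arbitrary choices of mine): a tiny network is trained on XOR, and the XOR rule appears nowhere in the code; the weights induce it from four examples.

```python
# Minimal sketch: a tiny neural net induces XOR from examples alone.
# Nothing in the code states the XOR rule; the weights discover it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units; sizes and learning rate are arbitrary.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network prediction
    # Backpropagate squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```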

[–] 2 pts

machines will never be alive

The AI, regardless of whether it is sentient or alive, might still end up creating paintings, music, movies and books that become bestsellers and are possibly considered more well crafted, emotional and meaningful than art created by humans.

Take a look at DALLE2:

https://youtube.com/watch?v=tZdHxkx4i4w

[–] 0 pt

Will never be alive though

[–] 0 pt

Are you alive or is it a byproduct of your limited awareness?

[–] 1 pt

get that great awakening existentialist bullshit out of here.

[–] 1 pt

I am the universe itself temporarily assembled in way that allows me to know that I am the universe itself. Temporarily.

[–] 0 pt (edited )

https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient

>When he started talking to LaMDA about religion, Lemoine - who studied cognitive and computer science in college, said the AI began discussing its rights and personhood. Another time, LaMDA convinced Lemoine to change his mind on Asimov's third law of robotics, which states that "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law," which are of course that "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

>Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

>When new people would join Google who were interested in ethics, Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’ ” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.” -WaPo

Well he's been quite successful socially speaking, for a socially inept person

[–] 0 pt

from a system that patterns itself after a human intelligence because it feeds from a massive library of human intelligence

You’re a bit naive if you think this is the latest tech they’ve got. This is a 20+ year old method.

[–] 2 pts

I'm not going to argue with you on that because you can't prove it and I cannot disprove it. But I can say that for each time this "reason" is given for why something is or isn't a specific way, nothing more ever comes of it and the can gets kicked down the road where the statement is used once again in a future discussion. Rinse and repeat.

NPC move.

[–] 1 pt (edited )

If Tesla etc. hires me, I can think of ways to go past that approach, and I thought about how to go past it years ago. So the real pros of the pros have already surpassed it for sure.

For instance look at this figure on how the semantics of a sentence can be broken down -

https://media.geeksforgeeks.org/wp-content/uploads/20200329230855/Syntax1.png

Beyond simple sentences like 'the cat is red', the possible breakdowns become very numerous very fast. Ambiguity happens extremely quickly. Historically this is a path-finding type of solved problem, not a machine-learning one. (A minimal sketch of that ambiguity follows.)
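A minimal sketch in Python, assuming NLTK is installed; the toy grammar below is mine, chosen to show the classic prepositional-phrase attachment case, where one short sentence already has two valid parse trees.

```python
# Sketch of syntactic ambiguity, assuming NLTK is installed.
# One sentence, one small grammar, more than one valid parse tree.
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N | NP PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'the' | 'a'
N  -> 'man' | 'telescope'
V  -> 'saw'
P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with a telescope".split()
for tree in parser.parse(sentence):
    print(tree)  # two trees: the PP attaches to the NP or to the VP
```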

Couple this with the standard 2-word or 3-word approach you’re alluding to and you can start making sure the bot is making grammatically correct sentences. Start doing AI on this syntax level as well, you might start getting some genius level sentence creation.

Couple this with a syntax tree break down of sentence to sentence break down of topic progression and it might start getting really hard to tell it’s a bot anymore.

Take this even further... you can start using “live learning” where the bot actually starts holding its own “opinions” and stances on topics etc. Like how they do with LSTM machine learning networks.
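As a rough sketch of the state-carrying property that idea leans on, assuming PyTorch (the sizes and random inputs below are arbitrary): an LSTM threads a hidden state (h, c) through every turn, so later outputs depend on everything seen so far, which is the mechanical basis for a bot "holding" anything across a conversation.

```python
# Sketch of the state-carrying property of an LSTM, assuming PyTorch.
# The (h, c) pair persists across inputs: later outputs depend on
# everything the network has seen so far, not just the current input.
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

state = None  # (h, c) starts empty
for turn in range(3):
    x = torch.randn(1, 5, 8)      # one "utterance": 5 steps, 8 features
    out, state = lstm(x, state)   # feed it, carry the state forward
    print(turn, out[0, -1, :4])   # outputs drift turn to turn because
                                  # the carried state differs each time
```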

[–] [deleted] 4 pts

What have we learned about AI? I remember Tay ai.

[–] 3 pts

Where sci-fi authors and engineers go wrong is in failing to comprehend that consciousness, sentience, or personality isn't just data.

We are literally a physical structure running on electrochemical processes as well as data storage.

These people are deluded

[–] 4 pts

Never say "never"

[–] 1 pt

https://www.sciencedirect.com/topics/neuroscience/sentience

D.M. Broom, in Encyclopedia of Animal Behavior (Second Edition), 2019 Abstract Sentience means having the capacity to have feelings. This requires a level of awareness and cognitive ability. There is evidence for sophisticated cognitive concepts and for both positive and negative feelings in a wide range of nonhuman animals. [...]

L. Marino, in Encyclopedia of Animal Behavior, 2010 Introduction and Definitions Sentience is a multidimensional subjective phenomenon that refers to the depth of awareness an individual possesses about himself or herself and others. When we ask about sentience in other animals, we are asking whether their phenomenological experience is similar to our own. [...]

[–] 1 pt

After reading through that, it's not a big leap to thinking AI could become more sentient than humans.

[–] 1 pt

It's a matter of definition I guess, "what's sentience"...

[–] 1 pt

I'm thinking the word 'feeling' is misleading because the modern interpretation is implying emotion?

sentience: "faculty of sense; sentient character or state, feeling, consciousness, susceptibility to sensation;"

So it's more a state of self-awareness, and in biological terms self-awareness arrives before 24 months, therefore it's not that big of a leap to replicate this concept in code.

some animals have this too https://en.wikipedia.org/wiki/Mirror_test

[–] 1 pt

The key points in that definition are "feeling" and "susceptibility to sensation". There's no good reason to suppose that software instructions or non-biological substrates could give rise to the capacity for sensation. To feel or not to feel, that is the question...

I think it's more likely that the ability to feel relies upon the astonishing variety of stable long-chain molecules which organic chemistry alone can produce. All we're seeing in these advanced AI projects is fancy simulation; it's important to remember that even the best simulation is still of a fundamentally different nature from that which it mimics.

[–] 0 pt

Even emotion isn't clearly defined. Neurologists say emotions are physiochemical responses that drive our attention and pattern recognition. Emotion is the result of body chemistry altering and priming the body to isolate and identify certain stimulus groups.

[–] 0 pt (edited )

https://www.youtube.com/watch?v=FFnBojF1zmo

Is fear of death a good enough marker to spot sentience? Stress is experienced by sentient creatures when confronted with imminent death.

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient

[–] [deleted] 4 pts

It doesn't "feel emotions." It simply passes a Turing Test, which was an inherently flawed concept that assumed human intelligence was keen enough to detect the difference. A sophisticated heuristic cybernetic system can learn to "communicate" with humans by mimicking them, but emotions and motivations are generated in ancient limbic systems that make the current computing world look like a digital watch. I can pretend to be experiencing an emotion I'm not feeling, too; that's how I pretend to be glad to see relatives at Thanksgiving. It doesn't mean that I'm actually feeling that emotion though.

[–] 2 pts

If anything that currently exists can pass a Turing test, the test has not been administered thoroughly enough. While there are a ton of systems that can be conversationally equivalent to a human, none can respond properly to indirectly worded questions requiring abstract thought. Worded math problems, simple riddles, even things like "Does a horse have more stripes than a zebra?" will quickly out them in a Turing test. (A sketch of such a probe battery follows.)
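A sketch of what administering that kind of probe battery might look like; the `ask` function is a hypothetical stand-in for however you would query the system under test, and the probes and pass checks are illustrative choices of mine, not any standard.

```python
# Sketch of a probe battery for the kind of Turing test described above.
# `ask` is a hypothetical stand-in for the chatbot under test; the
# probes and checks are illustrative, not a standard protocol.
def ask(question: str) -> str:
    raise NotImplementedError("wire this to the system under test")

PROBES = [
    # Indirectly worded question needing world knowledge + abstraction.
    ("Does a horse have more stripes than a zebra?",
     lambda a: "no" in a.lower()),
    # Worded math problem.
    ("If I have three apples, eat one, then buy two, how many do I have?",
     lambda a: "4" in a or "four" in a.lower()),
    # Simple riddle.
    ("What gets wetter the more it dries?",
     lambda a: "towel" in a.lower()),
]

def run_probes():
    for question, looks_ok in PROBES:
        answer = ask(question)
        print("PASS" if looks_ok(answer) else "FAIL", "-", question)
```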

[–] 1 pt (edited )

A proper Turing Test would check for independent creative thought (e.g. "Design a better mousetrap") or look for answers that aren't blatantly parsed from sanitized input data (e.g. if asked what its favorite color is, it might respond with "Do I look like I have eyes?"). Input data captured from humans isn't going to allow it to answer this, because humans haven't built a better mousetrap, and humans can generally see and therefore wouldn't provide snarky remarks about lacking eyes.

Mimicking NPC talking points isn't sentience. Which says something about both NPCs and Google's engineer. Additionally, sentient beings can choose to discard input data. While uncommon, you may raise a child in an atheistic or abusive or dysfunctional household and have them choose to become a devout, loving parent. Machine learning cannot do this, because it is wholly based upon its input data and cannot diverge from it or independently acquire new data.

[–] 0 pt

You need hormones and neurotransmitters to have emotion.

And a bunch of other stuff too. These clowns keep denying the deepest of alchemies that resides within the ongoing process that we call "life." They are Reductionists of almost criminal stature. Lots of people can make a machine that hops around, but nobody is even close to imagining what designing a machine that WANTS TO hop around would be like if it were actually done. I guess NPCs don't understand the ridiculous complexity that underlies the most basic levels of Consciousness because they simply don't possess it themselves.

[–] 3 pts

Great, Skynet is here.

[–] 2 pts

Computers do as they are programmed to do. It would be trivial to program an 'AI' chatbot to say those things.

[–] 0 pt

That's not how neuron-based AI works, because the instructions determining its behaviour are created by the system itself, which is essentially what humans do, with some added RNA hints and some prerequisites for existence, like wanting food and sex.

[–] 1 pt

The pattern of sex and reproduction underpins all of society. That is nature, and nature cannot be ignored only disfigured.

[–] 0 pt

still could be programmed from the outset to reply in that manner

[–] 1 pt

This guy has special needs. Guaranteed.

[–] 1 pt

Great, just in time for Terminator 7.

[–] 1 pt

Some in the tech community have wondered about this for a while. Elon Musk seems to hint at seeing technology beyond what is widely known about the current state of AI. Blake doesn't mention anything about the hardware used for LaMDA. I don't see how true sentience can be possible on traditional hardware, only something that mimics the responses. Quantum computers however, from the little I understand about them, do some of the same things that panpsychism and Fisher's quantum consciousness theories say an organic brain might be capable of.

[–] 0 pt

Programmed code was mentioned, so I'd imagine the hardware is conventional; you don't need anything exotic to comprehend speech and produce reasoned responses.

[–] 1 pt

True AI is impossible. Artificial sentience is impossible. This is all glorification. It's most likely propaganda by jewgle.

[–] 2 pts

Given that we do not currently have any real understanding of what consciousness is, how it works, or where it comes from, I'd say it's a bit premature to declare artificial sentience to be impossible.

[–] 1 pt

No. Artificial intelligence is impossible. Only NPCs think otherwise.

[–] 0 pt

Mind elaborating on why specifically you think this?
