
©2025 Poal.co


[–] 1 pt (edited )

A proper Turing Test would check for independent creative thought (e.g. "Design a better mousetrap") or look for answers that aren't blatantly parsed from sanitized input data (e.g. if asked what its favorite color is, it might respond with "Do I look like I have eyes, you retard?"). Input data captured from humans isn't going to let it answer either of these, because humans haven't built a better mousetrap, and humans can generally see and therefore wouldn't provide snarky remarks about lacking eyes.

Mimicking NPC talking points isn't sentience. Which says something about both NPCs and Google's engineer. Additionally, sentient beings can choose to discard input data. While uncommon, you can raise a child in an atheistic or abusive or dysfunctional household and have them choose to become a devout, loving parent. Machine learning cannot do this because it's wholly based on its input data and cannot diverge from it or independently acquire new data.
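The "bounded by its input data" point can be illustrated with a deliberately tiny example. This is a toy bigram (Markov-chain) text generator, not any real language model, but it makes the constraint concrete: everything it emits is assembled from words it was trained on, so its output vocabulary is provably a subset of its input vocabulary.

```python
import random

def train_bigrams(corpus):
    """Build a bigram table: each word maps to the words observed after it."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the bigram table from a starting word. The model can only
    ever emit words that appeared in its training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
table = train_bigrams(corpus)
text = generate(table, "the", 8)

# Every generated word came from the training data; the model cannot
# produce a word (or, by analogy, an idea) it never saw.
assert set(text.split()) <= set(corpus.split())
```

Modern neural models are vastly more sophisticated recombiners than this, but the same basic relationship between training data and output space is what the comment is gesturing at.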