If anything that currently exists can pass a Turing test, the test has not been administered thoroughly enough. While there are plenty of systems that are conversationally equivalent to a human, none can respond properly to indirectly worded questions that require abstract thought. Worded math problems, simple riddles, even questions like "Does a horse have more stripes than a zebra?" will quickly out them in a Turing test.
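A minimal sketch of that kind of probe, assuming a hypothetical ask() callable that stands in for whatever chat system is under test; the question list and the substring checks here are deliberately crude illustrations, not a real evaluation suite:

```python
# Hypothetical trick-question probe. ask() is a stand-in for the
# system under test; the expected-token checks are intentionally crude.
TRICK_QUESTIONS = {
    # question: substrings a sensible human answer would contain
    "Does a horse have more stripes than a zebra?": ["no", "zero stripes"],
    "If I have three apples and eat two pears, how many apples do I have?": ["three", "3"],
}

def probe(ask):
    """Run each trick question through the system under test and
    flag answers that miss the obvious point."""
    failures = []
    for question, expected in TRICK_QUESTIONS.items():
        answer = ask(question).lower()
        if not any(token in answer for token in expected):
            failures.append((question, answer))
    return failures

if __name__ == "__main__":
    # A "chatbot" that deflects with a canned phrase fails every probe.
    canned = lambda q: "That's an interesting question!"
    for q, a in probe(canned):
        print(f"FLAGGED: {q!r} -> {a!r}")
```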
A proper Turing test would check for independent creative thought (e.g. "Design a better mousetrap") or look for answers that aren't blatantly parsed from sanitized input data (e.g. if asked what its favorite color is, it might respond with a snarky "Do I look like I have eyes?"). Input data captured from humans isn't going to let it answer these, because humans haven't built a better mousetrap, and humans can generally see and therefore wouldn't provide snarky remarks about lacking eyes.
Mimicking NPC talking points isn't sentience, which says something about both NPCs and Google's engineer. Additionally, sentient beings can choose to discard input data: while uncommon, you may raise a child in an atheist, abusive, or dysfunctional household and have them choose to become a devout, loving parent. Machine learning cannot do this, because it is wholly based upon its input data and cannot diverge from it or independently acquire new data.
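A toy illustration of that "bounded by its input data" point, using a hand-rolled 1-nearest-neighbour "model" (a stand-in I chose for simplicity, not anything from the post): no matter what input it gets, there is no code path that produces a label outside its training set.

```python
# Toy 1-nearest-neighbour model: predictions can only ever be labels
# that appeared in the training data, however extreme the input.
training_data = [
    ((0.0, 0.0), "cat"),
    ((1.0, 1.0), "dog"),
]

def predict(point):
    """Return the label of the nearest training example."""
    def dist(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(training_data, key=dist)[1]

print(predict((100.0, -50.0)))  # still "cat" or "dog", never "parrot"
```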