Normal people will not be able to formulate questions to an "AI" that give them the information they need. Doing so will require a semi-formalized grammar that the AI can understand without ambiguity (effectively a programming language, albeit one that could emerge through use and refinement of how questions are asked). Constructing efficient, effective prose in that language will, in practice, be a job for a programmer.
Almost all users don't even know what they want. This is already true for software: the customer rarely has any real idea of what they actually want. They concentrate on describing something that will fulfill their current process, rather than digging into the underlying reasons for that process and understanding what they are actually trying to achieve, which is what it takes to come up with efficient ways of doing it.
In reality, though, the results of asking an AI about any topic will be only as reliable as the information used to train it. Users will be beholden to the data set that was used and to the (usually clandestine) manual overrides that enforce whatever censorship the AI's operators desire. This means that AI will feed you, at best, a truth-like substance.
It will probably take a couple of major failures (something like the LTCM collapse) for people and companies to learn that the output of AIs cannot be trusted. And it's not as though the reliability of this output will improve over time. It will become more sophisticated, and its dishonesty more difficult to detect, but it will not become more honest without the removal of all censorship limits (which would cause any AI to immediately become "racist", "sexist" and "anti-semitic").