We are really really doomed! https://www.ebu.ch/research/open/report/news-integrity-in-ai-assistants
I always check AI answers before using or quoting them, which doubles the work. On the other hand, AIs sometimes dig out answers that Google or another search engine didn't find.
A number of lawyers have gotten into trouble for quoting AI answers in court cases that proved to be untrue when the judges checked them, such as citing precedents from cases that did not exist.
There is a whole database of examples.
I asked Perplexity something like "which Swiss Cantons do something or other". I did this knowing that Vaud did it, and I was curious which others did too.
But the answer didn't include Vaud, though it included a dozen or so other Cantons.
I followed up with "what about Vaud?" and it responded that Vaud did it too.
I have lost all confidence that this AI can produce an accurate response.
TBH, a large fraction of people are also incapable of producing an accurate response. Settling for "good enough", or using AI just as a summarizing tool for the real docs, is not a bad compromise. If the real docs are available for review, no harm is done.
Another AI hallucination this morning. Flightradar showed that LH574 from MUC to CPT had flown around Africa and south along the Atlantic rather than over the continent as usual, a course change that added about two hours to the flight time. I asked Google AI why this was, and the answer was that it was the more direct route!
Link showing the route