A lawyer representing a man in a personal injury lawsuit in Manhattan has thrown himself on the mercy of the court. What did the lawyer do wrong? He submitted a federal court filing that cited at least six cases that don’t exist. Sadly, the lawyer had used the AI chatbot ChatGPT, which invented the cases out of thin air.
The lawyer in the case, Steven A. Schwartz, is representing a man who’s suing Avianca Airlines after a serving cart allegedly hit his knee in 2019. Schwartz said he’d never used ChatGPT before and had no idea it would just invent cases.
In fact, Schwartz said he even asked ChatGPT if the cases were real. The chatbot insisted they were. But it was only after the airline’s lawyers pointed out in a new filing that the cases didn’t exist that Schwartz discovered his error. (Or, the computer’s error, depending on how you look at it.)
The judge in the case, P. Kevin Castel, is holding a hearing on June 8 about what to do in this tangled mess, according to the New York Times. But, needless to say, the judge is not happy.
ChatGPT was launched in late 2022 and instantly became a hit. The chatbot is part of a family of new technologies called generative AI that can hold conversations with users for hours on end. The conversations feel so organic and normal that ChatGPT can sometimes seem to have a mind of its own. But the technology is notoriously inaccurate and will often invent facts outright, complete with citations to sources that don’t exist. Google’s competitor product Bard has similar problems.
But none of those problems have stopped people from using this experimental technology like it’s a reliable source of information. There are countless reports of kids getting ChatGPT to write papers for them, and just as many reports of teachers thinking they can simply ask ChatGPT whether it wrote a given paper. OpenAI, the company behind ChatGPT, does offer a service that tries to detect when the AI chatbot has been used, but that detector reportedly has just a 20% accuracy rate. And if you feed ChatGPT random paragraphs, the AI can’t tell you whether it wrote them. I tried this myself earlier this month, and ChatGPT kept taking credit for the work of others.
Chatbots like ChatGPT are controversial for a number of reasons, including the fact that some tech experts worry AI could get out of hand. Some people even believe AI could start to have a will of its own, setting off a kind of Terminator-like scenario where humanity is completely destroyed. Billionaire Elon Musk hinted at such a possibility when he recently called for a six-month pause on the development of AI—a pause that likely had more to do with the fact that he was racing to build his own ChatGPT competitor. Musk was, after all, one of the original co-founders of OpenAI back in 2015 before he was pushed out after trying to take control of the company in 2018.
However, we’re not close to the machines staging a revolt against all organic life quite yet. Chatbots like ChatGPT are essentially just more advanced forms of predictive text. They work by guessing, with amazing speed, what word should come next, and that guessing often causes them to spit out a lot of inaccurate garbage. Rather than simply say “I don’t know,” the tech will invent a long list of sources that don’t actually exist. And if you ask ChatGPT whether the sources are real, it will assure you they are.
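To make the “advanced predictive text” idea concrete, here’s a minimal sketch in Python. It uses a toy bigram model, a vastly simplified stand-in for what real chatbots do at enormous scale, and the corpus and words are invented for illustration. The point it demonstrates: the model only tracks which word tends to follow which, and has no concept of whether its output is true.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; a real chatbot trains on vastly more text.
corpus = "the court cited the case and the court denied the motion".split()

# Count which word tends to follow which (a bigram model).
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def predict(word):
    """Guess the statistically likeliest next word.

    Note there is no fact-checking here: the model just picks
    whatever followed `word` most often in its training text.
    """
    counts = next_words.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "court", the most frequent follower of "the"
```

Scaled up to billions of parameters and trillions of words, this guess-the-next-word machinery produces fluent prose, but the core limitation is the same: plausibility, not truth, is what gets optimized.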
Humans have been spoiled by Google, a search engine that, while imperfect, tries to surface the most accurate information. Wikipedia, which was met with skepticism when it first launched, is a generally reliable source of information because it’s constantly being policed by armies of experts who care about getting things right. ChatGPT doesn’t care whether the information it’s spitting out is correct. It’s a magic trick, and people who have otherwise found mainstream services like Google and Wikipedia to be accurate are in for a rude awakening. This new generation of tech tools doesn’t care about the truth. They were designed to sound impressive, not to be accurate. And internet users are going to keep learning that hard lesson again and again as AI gets embedded in the technologies we use every day.
The danger of AI may not be in a technology that develops a will of its own. The real danger, it would seem, is that humans will simply believe anything the machines say, no matter how wrong. ChatGPT doesn’t know it’s telling you inaccurate information. So it’s on us to check facts and care about getting things right.