Some Kiwi lawyers have been tricked into believing fictitious case notes after being given answers by AI chatbots, a technology that researchers say will inevitably find its way into the legal profession and other industries.
The NZ Law Society revealed in a member newsletter last month that its librarians had received requests from legal practitioners for information on fictitious cases — presumably from people who had been using tools like ChatGPT.
"The cases appear real as ChatGPT has learnt what a case name and citation should look like. However, investigations by library staff have found that the cases requested were fictitious," the organisation said.
"It will fabricate facts and sources where it does not have access to sufficient data.
"So next time you use ChatGPT for a legal query, just remember the cases cited in its response could be fake; made up by a well-intentioned AI tool."
University of Auckland commercial law professor Alex Sims said people in the legal profession should get up to speed on how the new tools could help their work.
But legal academics that 1News spoke to were cautious about people using the new software tools without fully knowing the risks of getting false responses.
"It's really concerning that some lawyers are indiscriminately using it. It's almost lazy," Sims said. "They need to know how to use them and actually have the [legal] expertise to know whether what is coming back is accurate or not."
While chatbots like ChatGPT are powerful, they can't tell what information is true.
The bots' highly complex statistical models are trained to process language patterns and then effectively guess which words should form an answer, like a super-advanced auto-complete.
Currently, this means the chatbot's goal is to give an answer that hopefully sounds plausible. It can't verify any information, and it doesn't know if what it's saying is true.
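For readers who want a more concrete picture of that auto-complete idea, the small Python sketch below builds a toy word-frequency model. Real chatbots use vastly larger neural networks, but the underlying principle is the same: the next word is chosen because it is statistically plausible, not because it is true. The sample text and all names in the sketch are illustrative only.

```python
import random
from collections import defaultdict, Counter

# Toy illustration only: a tiny bigram "language model" that, like a chatbot,
# picks each next word based purely on which words tend to follow which,
# with no notion of whether the resulting sentence is true.

training_text = (
    "the court held the claim failed . "
    "the court held the appeal succeeded . "
    "the claim failed on the facts ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Guess a plausible continuation, one word at a time."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it was seen;
        # plausibility, not truth, drives the choice.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the court held the claim failed on the facts"
```

The output reads like a sentence about a court case, but the model has no way of knowing whether any such case exists, which is the same reason a chatbot can produce convincing but fictitious citations.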
Sims said chatbots were useful as a tool but that people needed to have a certain degree of distrust when parsing the "very well-written and confident" answers from AI.
"It's human nature to take a shortcut, so you really do need to be very disciplined," the professor said. "The danger with these technologies is that if it looks really good — people just are assuming that it's correct when it's not."
In an example given by the Law Society, a legal prompt to ChatGPT returned a fictional case while still referencing existing courts and laws in its answer.
Sims added that it was also "very dangerous" to put confidential information into today's chatbots as most continued to store what their users wrote.
Last month, OpenAI, the company behind ChatGPT, said it would pull back somewhat on its practice of using user data to train new AI models.
"You would be foolish to be putting in company information into something like ChatGPT because then that becomes part of the database and your confidential information — like clients details, personal information — all that stuff is in there."
Sims said a ban or prohibition on AI-assisted work was not an option, as lawyers, among other white-collar workers, were already using the technology for simple tasks like writing emails. "The genie is really out of the bottle," she said.
More bots, fewer lawyers in future?
The rapid uptake of generative AI and chatbots has prompted many to leap to doomsday fears about job losses in white-collar professions.
But Otago University senior law lecturer Simon Connell is optimistic about the future that chatbots like ChatGPT or Microsoft Bing could bring to the legal profession. He believes that using the new tools well could become a learned skill for legal workers.
"The use of prompts could become a skill in the same way that working out what to put into a legal search database, or even just getting the most out of Google," he told 1News.

Connell said there were several things chatbots were already capable of doing, though not always accurately.
These included acting as a "very valuable research tool", proofreading, and producing paperwork that lawyers already used templates for.
"You can ask it to write a document, contract, a commercial lease, and all sorts of other legal documents," he said. "[But] people who approach it on a mistaken premise of what it's doing can be led astray."
Speaking to 1News, Sims was more bullish about the potential of AI. The law academic and tech researcher believes humans will always be part of the process, but that fewer will be needed in the future.
"Workers need to be using these tools because they will have the edge over someone else. So they need to get upskilled.
She suggested that complacent workers would be "left behind" if they didn't know how to use the tools, but that some were "getting carried away" by the new tech.
Connell said he could also see a "consumer" side to the shift towards more bots, with people able to get more personalised legal information, though not legal advice.
He pointed to examples like "DoNotPay", an AI legal service that has already been put to work overseas contesting tens of thousands of parking tickets, among other tasks it can now do.