A California judge criticized a pair of law firms for the undisclosed use of AI after he received a supplemental brief containing "numerous false, inaccurate, and misleading legal citations and quotations." In a ruling filed last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid.
"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist," Judge Wilner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."
As laid out in the filing, a plaintiff's legal representative in a civil lawsuit against State Farm used AI to generate an outline for a supplemental brief. That outline contained "bogus AI-generated research" when it was sent to a separate law firm, K&L Gates, which added the information to the brief. "No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research," Judge Wilner writes.
When Judge Wilner reviewed the brief, he found that "at least two of the authorities cited do not exist at all." After asking K&L Gates for clarification, the firm resubmitted the brief, which Judge Wilner said contained "considerably more made-up citations and quotations beyond the two initial errors." He then issued an Order to Show Cause, which resulted in the lawyers giving sworn statements confirming their use of AI. The lawyer who created the outline admitted to using Google Gemini, as well as AI legal research tools in Westlaw.
This isn't the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for "a super-charged search engine" rather than an AI chatbot. A judge also found that lawyers suing a Colombian airline had included a slew of phony cases generated by ChatGPT in their brief.
"The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong," Judge Wilner writes. "And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm's way."


