As the incoming President of Scribes – The American Society of Legal Writers, I hold a deep appreciation for the art of brilliant legal writing that reveals clear thinking, displays an elegant style, and persuades with reason and integrity. I value it even more when it transcends mere analysis, provokes deep thought, and reminds us that the law, at its best, speaks to our shared human values.
All these boxes, and more, were checked by a recent decision issued by U.S. Magistrate Judge Jason A. Robertson of the U.S. District Court for the Eastern District of Oklahoma in Mattox v. Product Innovations Research, LLC, 2025 WL 3012828, which reprimanded and sanctioned four attorneys who submitted 11 separate pleadings drafted with the assistance of generative artificial intelligence. Those pleadings contained 28 false or misleading citations: 14 were fabricated cases that do not exist, and 14 were erroneous or misquoted authorities.
After detailing the multiple lapses in human verification of the cited authorities, as well as the attorneys’ lack of candor and accountability, the court imposed monetary sanctions of $6,000 and ordered the same attorneys to pay the opposing party’s fees and costs of nearly $25,000. In addition, all were publicly reprimanded in the court’s order.
Almost daily, we hear of instances in which attorneys have misused artificial intelligence in researching or drafting legal documents, producing documents that rely upon hallucinated citations to non-existent authority. Significantly, by signing such documents, attorneys certify pursuant to the Federal Rules of Civil Procedure and state supreme court rules that the documents are based upon reasonable inquiry, grounded in fact, and supported by existing law or a non-frivolous argument for its extension. Courts across the country have made it clear that such behavior will not be tolerated and that serious consequences will follow.
As legal writing evolves in lockstep with advances in technology, entering this brave new world requires a fundamental caution: the use of technology must adhere to professional responsibility, legal ethics, and client confidentiality.
Cue Judge Robertson’s decision, which masterfully discusses ethics, practicality, and the serious issues inherent in carelessly relying on AI in legal research and drafting. Rather than simply criticizing and chastising counsel, the decision explains why this conduct is so deeply disturbing, starting with its opening paragraph (with emphasis added):
This ruling is not about technology. It is about trust. Justice is built on language, and language draws its power from the hearts and minds that create it. Words alone are empty until filled with human conviction. The same is true of every pleading filed before this Court. Generative technology can produce words, but it cannot give them belief. It cannot attach courage, sincerity, truth, or responsibility to what it writes. That remains the sacred duty of the lawyer who signs the page.
After discussing the specific facts of the case and applying the relevant legal and ethical principles, the court closed by eloquently summarizing attorneys’ professional obligations and highlighting important considerations for using AI in preparing legal documents (emphasis added):
The practice of law has never been about convenience; it has always demanded courage. The quiet, disciplined courage to stand for what is right when compromise would be easier. Marcus Aurelius wrote, “If it is not right, do not do it; if it is not true, do not say it.” Meditations bk. 12, § 17 (Gregory Hays trans., Modern Library ed. 2002). That simple maxim captures the heart of advocacy: the moral courage to write, to argue, and to sign only what truth can defend.
It takes courage to put a word, a sentence, a phrase to paper in defense of another. It takes courage to sign one’s name beneath arguments that carry the weight of justice. Machines can assemble words, but they cannot believe in them. They can process information, but they cannot possess conviction.
The Court does not fear progress. It fears abdication. When lawyers trade reflection for automation, they surrender the very quality that makes their words worthy of belief. The oath of candor is not a relic; it is the living covenant between the advocate and the tribunal. It binds judgment to integrity and intellect to honor.
Generative tools may assist, but they can never replace the moral nerve that transforms thought into advocacy. Before this Court, artificial intelligence is optional. Actual intelligence is mandatory.
Notably, the court did not reject the use of technology; indeed, that horse left the barn long ago. Instead, the court reaffirmed the foundational principles of legal practice: truth, duty, trust, accountability, verification, and candor. Although generative AI can draft language and provide citations in a split second, it cannot exercise ethical judgment, engender trust, or abide by the duties required of every attorney. What happened in Mattox was not a failure of technology; it was a failure of responsibility, knowledge, and human judgment, all of which rest solely with the attorney who signs the document and attests that it is grounded in fact and good faith, based upon reasonable inquiry and existing law.
In training attorneys at Schiller DuCanto & Fleck, our firm embraces technological innovation guided by human judgment, ethical duty, and accountability. Technology has the potential to make aspects of practice more efficient; however, as Mattox underscores, its use requires the utmost vigilance to ensure the accuracy of its results, and it cannot replace the uniquely human work of interpreting legal precedent and principles, formulating persuasive arguments, and devising a winning strategy.