Artificial Intelligence or innocent ignorance? Hard lessons yield best practices

June 9, 2025

Artificial intelligence is a controversial but increasingly valuable arrow in the quiver of any litigator. While AI can greatly improve a litigator’s efficiency, it also raises ethical and professional concerns that continue to evolve. This article will briefly discuss some of the benefits of AI, the potential pitfalls surrounding the use of generative AI in litigation, and how some courts and court systems are attempting to address the rapidly changing AI landscape. Hopefully, these efforts can provide litigators with a preliminary roadmap of best practices for navigating this interesting and developing landscape.

The Good – What are the Benefits of Using AI in Litigation?

A 2024 Thomson Reuters survey found that 63% of responding lawyers had used AI in their work, with 12% saying that they use it regularly. Old-school litigators may be understandably reluctant to use AI in legal research and brief writing and may still be learning the benefits of the technology. Indeed, 43% of respondents said that they had not tried AI, citing concerns over the accuracy of its outputs. Notwithstanding this reluctance, the benefits of AI extend across many aspects of litigation: it can streamline processes, enhance accuracy, and improve access to justice through lowered costs and automated assistance with routine legal tasks.

Here is a breakdown of some common but key advantages of the use of AI:

  1. Enhanced legal research
    • AI provides faster and more comprehensive legal research and predictive analytics for litigation strategies.
  2. Streamlined document management, review, and control
    • AI provides traditional automation of eDiscovery tasks and automated document review and improves document management and control.
  3. Predictive litigation strategy and document preparation
    • AI utilizes predictive analytics for use in settlement assessment and strategy, analyzes witness testimony to identify inconsistencies and vulnerabilities, and prepares pleadings, motions and other briefing.
  4. Improved client service and access to justice
    • AI facilitates more efficient legal services by automating routine tasks and reduces the costs of legal research and document review.

This is all powerful stuff, right? But to quote Voltaire (or perhaps Uncle Ben Parker), “with great power comes great responsibility.” Indeed, this all sounds good, but what about the bad, or even the ugly, when it comes to the naïve, if not irresponsible, use of AI?

The Bad and the Ugly

Briefs citing non-existent case authority ‘hallucinated’ by AI have been reported for several years now. Yet even as AI’s capabilities and accuracy have continued to improve, generative AI horror stories keep surfacing. The states of California and Colorado have been no exception.

California cases

Late last year, in Mojtabavi v. Blinken, U.S. District Court Judge Percy Anderson of the Central District of California sanctioned a pro se plaintiff for repeatedly providing falsified or inaccurate case citations in support of his arguments, the result of generative AI ‘hallucinations’ by ChatGPT. The court hung its order on the hook of Local Rule 11-3.9, which concerns only the proper format of case citations and does not address artificial intelligence at all, presumably because the local rules are silent on the use or misuse of AI in court filings.

Earlier this year, in U.S. v. Hayes, U.S. District Court Magistrate Judge Chi Soo Kim sanctioned defense counsel for including fictitious citations in a motion that, according to the court, had “all the markings of a hallucinated case created by generative artificial intelligence.” Counsel stated in a subsequent filing that he had not used AI to draft the motion, yet could not explain how the fake citations were created. After concluding that the attorney had made “knowing and willful misrepresentations with the intent to mislead the Court,” the court ordered counsel to pay $1,500 in sanctions and served a copy of its order on the other bars of which defense counsel was a member and on all of the district and magistrate judges in the Eastern District of California.

Also this year, in Lacey v. State Farm Gen. Ins. Co., a Central District of California case pending before a court-appointed special master, the plaintiff’s brief was found to contain nine incorrect case citations, two cited authorities that did not exist at all, and several quotations that were “phony and did not accurately represent” the cited materials. Plaintiff’s attorneys admitted that they had used AI to draft the outline for the brief and that they had not reviewed the brief for accuracy. In a harsh rebuke, the special master concluded that “Plaintiff’s use of AI affirmatively misled me” and ordered the plaintiff to pay sanctions in excess of $30,000, which included the defendant’s portion of the special master’s fees and a portion of defense counsel’s fees.

Also in 2025, in Michael Evans v. Execushield Inc., a state court case pending in Alameda County Superior Court, plaintiffs’ counsel presented fake citations to the court. In response, the court ruled that counsel was not competent to represent the class of plaintiffs and would not be awarded fees if the plaintiffs prevailed. The court further required counsel to file a copy of its order in every other action pending in the Alameda County Superior Court, in any department, in which any of plaintiffs’ counsel is an attorney of record; to review all of their filings in any pending Alameda County Superior Court action in which they used the same ‘tool’ that produced the misrepresentations; and to file corrected pleadings if necessary.

Finally, in a 2025 case still pending in the Northern District of California, Concord Music Group, Inc. v. Anthropic, Anthropic was ordered to respond to claims that a data scientist it employed relied on a fictitious academic article, likely generated by AI. In a declaration, Anthropic’s counsel stated that the cited article did indeed exist but that the citation was merely incorrect. Counsel attested that she had asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Although the returned citation included the correct publication title, publication year, and link to the source, it contained an inaccurate article title and incorrect authors. A manual citation check did not catch this error or the additional wording errors introduced in the citations during the formatting process using Claude.ai. The court has yet to address counsel’s declaration. Ironically, the case involves accusations by the plaintiffs that Anthropic used copyrighted song lyrics without permission to train its AI chatbot Claude.

Colorado cases

In the 2023 disciplinary proceeding People v. Crabill, an attorney was suspended over his use of AI in a motion. The attorney had been a licensed Colorado attorney for only about a year and a half and was working on his first civil litigation case. He claimed that he unknowingly included fictitious cases in a motion to set aside summary judgment. Because ChatGPT had accurately answered his previous inquiries, counsel claimed that it “never even dawned on me that this technology could be deceptive.”

Through this conduct, the attorney violated Colo. RPC 1.1 (a lawyer must competently represent a client); Colo. RPC 1.3 (a lawyer must act with reasonable diligence and promptness when representing a client); Colo. RPC 3.3(a)(1) (a lawyer must not knowingly make a false statement of material fact or law to a tribunal); and Colo. RPC 8.4(c) (it is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit, or misrepresentation). He was suspended for one year and one day, with 90 days to be served and the remainder to be stayed upon successful completion of a two-year period of probation, with conditions.

In the 2024 case of Al-Hamim v. Star Hearthstone, the Colorado Court of Appeals weighed sanctions against a pro se party who admittedly relied on AI “hallucinations” in his briefing. The court concluded that while the fake case citations violated Rule 28(a)(7)(B) of the Colorado Appellate Rules, it would not impose sanctions in light of the party’s acceptance of responsibility. Al-Hamim was the first published opinion in which the Colorado Court of Appeals addressed the submission of generative AI hallucinations in legal filings, and it warned lawyers and pro se litigants that they may be sanctioned for the misuse of generative AI and that the court will not “look kindly on similar infractions in the future.”

Most recently, in 2025, in the defamation case of Coomer v. Lindell, et al., currently pending in the U.S. District Court for the District of Colorado, defense counsel’s use of generative AI in an opposition brief drew a blistering rebuke from the court, which threatened severe sanctions (the disposition of the court’s order to show cause on this matter is still pending).

According to the court’s order to show cause, the court identified nearly 30 defective citations in the opposition. The purported defects included, but were not limited to, misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within those decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to the District of Colorado; and, most egregiously, citations to cases that do not exist. The court added that it was not until it specifically asked counsel whether the opposition was the product of generative artificial intelligence that counsel admitted it indeed was.

In response, defense counsel stated that the opposition brief filed with the court was not the final version but, in fact, only a draft. In declarations (with apologies to the court), counsel attested that the final, unfiled version of the brief contained numerous revisions correcting the erroneous case citations, revisions that were absent from the draft mistakenly filed with the court.

After receiving defense counsel’s response to the OSC, the court further ordered counsel to lodge electronic copies, in Word format, of every version of the opposition brief at issue, with all associated metadata, and to submit the original email correspondence between the attorneys who drafted the brief, along with the original electronic attachments of the brief and any associated metadata. The court has yet to rule on the OSC, but in the initial OSC it threatened to impose sanctions on counsel, including referrals for disciplinary proceedings for violations of the applicable Rules of Professional Conduct.

Initial Approaches to Tackle AI Issues

In 2024, initial concrete steps were taken to regulate the use of AI in the legal profession. For example, the California State Bar issued practical guidance (i.e., “guiding principles” as opposed to “best practices”) addressing how the use of generative AI products implicates the rules of professional conduct relating to the duty of confidentiality; the duties of competence and diligence; the duty to comply with the law; the duty to supervise lawyers and nonlawyers; the responsibilities of subordinate lawyers; communication regarding generative AI use; charging for work produced by generative AI and for generative AI costs; candor to the tribunal; meritorious claims and contentions; the prohibition on discrimination, harassment, and retaliation; and professional responsibilities owed to other jurisdictions.

Confidentiality must be a significant consideration for any lawyer using AI on a client’s behalf. A lawyer who inputs a client’s confidential information into an AI platform has likely breached the client’s confidence, as that information is now known to the AI and may be used in ways the lawyer cannot foresee or control.

According to California State Bar Practical Guidance, “A lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections. A lawyer must anonymize client information and avoid entering details that can be used to identify the client.”

In May 2024, California Supreme Court Chief Justice Patricia Guerrero launched an Artificial Intelligence Task Force tasked with evaluating generative artificial intelligence for its potential benefits to courts and court users while mitigating risks to safeguard the public. By February 2025, the task force had previewed a new model policy intended to help ensure the responsible and safe use of generative AI by California courts. Courts will be able to adopt or modify the model policy as needed; it will provide courts with general guidelines for using generative AI in their daily, non-adjudicative duties.

Colorado has taken a similar approach. In June 2023, the Colorado Supreme Court asked the Standing Committee on the Colorado Rules of Professional Conduct to form a subcommittee to consider recommendations for amendments to those rules to address lawyers’ use of AI tools. The rules that may be impacted by the use of AI include the duty of competence (Colo. RPC 1.1); the duty to communicate with clients (Colo. RPC 1.4); reasonableness of fees (Colo. RPC 1.5); confidentiality of information (Colo. RPC 1.6); candor to the tribunal (Colo. RPC 3.3); responsibilities of partners or supervisory lawyers and responsibilities regarding nonlawyer assistants (Colo. RPC 5.1 and 5.3); conduct involving dishonesty, fraud, deceit, or misrepresentation (Colo. RPC 8.4(c)); conduct prejudicial to the administration of justice (Colo. RPC 8.4(d)); and bias (Colo. RPC 8.4(g)).

On July 18, 2024, the Colorado Supreme Court’s Artificial Intelligence Subcommittee on the Practice of Law issued the memorandum Re: Potential Changes to the Colorado Rules of Professional Conduct in Response to Emerging Artificial Intelligence Technologies. Concurrent with these efforts, beginning in January 2025, the Colorado Supreme Court’s Committee on the Rules of Civil Procedure has been considering rule changes to account for the rise of generative AI.

The ABA likewise issued Formal Opinion 512 (July 29, 2024), which addressed the ethical obligations of lawyers using generative AI, including their duties to provide competent legal representation, protect client information, communicate with clients, supervise their employees and agents, advance only meritorious claims and contentions, ensure candor toward the tribunal, and charge reasonable fees.

Court-by-Court Approaches and Standing Orders

While committees and task forces in California and Colorado continue to grapple with generative AI and how to regulate its use in the courts, individual judges in both states have taken the initiative and issued their own standing orders and requirements regarding the use of AI in cases pending before them. These standing orders offer a useful “front line” look at the use of AI in litigation and can help litigators assess whether and how to use AI in submissions to the court.

In California, at least four judges in the Central District, four judges in the Northern District, and at least one judge in the Southern District now have standing orders governing the use of AI. State court judges have adopted similar orders as well.

U.S. District Court Judge Araceli Martínez-Olguín of the Northern District of California wrote: “Any submission containing AI-generated content must include a certification that lead trial counsel has personally verified the content’s accuracy.”

Magistrate Judge Peter H. Kang of the Northern District of California, San Francisco Division, wrote: “Any brief, pleading, or other document submitted to the Court the text of which was created or drafted with any use of an AI tool shall be identified as such in its title or pleading caption, in a table preceding the body text of such brief or pleading, or by a separate Notice filed contemporaneously with the brief, pleading, or document.”

Judge Kang’s order continues: “…in the course of preparing filings with the Court or other documents for submission in an action, counsel and Parties choosing to use an AI or other automated tools shall fully comply with any applicable protective order and all applicable ethical/legal obligations (including issues relating to privilege) in their use, disclosure to, submission to, or other interaction with any such AI tools.”

In Colorado, at least three District Court judges and one state trial court judge have also adopted similar standing orders.

Denver County District Court Judge Eric Elliff wrote: “If Chat GPT or other AI is used to generate any written product filed with this Court, a notice shall appear on the first page of the filing indicating that AI was used to generate all or part of the filing, as well as the specific portions of the filing (e.g., page number and paragraphs or lines) which contain the AI-generated content.”

Best Practices

Regardless of the impact of AI on litigation (good, bad, or ugly), longstanding rules still require an attorney to attest that the claims, defenses, and other legal contentions made in any filing are warranted by existing law. Indeed, whether or not a jurisdiction has adopted rules or guidelines specific to the use of AI, FRCP Rule 11 and its state equivalents (see C.C.P. Sec. 128.7 in California and C.R.C.P. Rule 11 in Colorado) still apply. Citing non-existent case authority clearly runs afoul of those rules, and merely blaming it on AI hallucinations or innocent mistakes is not a viable excuse.

Fortunately, the aforementioned standing orders do provide some guidance on best practices for using generative AI in briefs filed with the court:

  • Double (and triple) check AI’s work to avoid filing documents with erroneous and fictitious case law or other references that mislead the court, waste time, and certainly run afoul of the applicable ethical and procedural rules;
  • Include an AI Certification regarding the use, or non-use, of generative AI in preparing the filing. In doing so, the preparer of the filing must certify either that:
    • No portion of the filing was drafted by AI, or
    • Any portion that was drafted by AI (even if later edited by a human) was personally reviewed by the filer or another human for accuracy, and that all legal citations are to actual, non-fictitious cases or authorities, verified using print reporters or traditional legal databases.
  • Some standing orders require that the AI certification include a statement that lead trial counsel has personally verified the content’s accuracy and has personally confirmed the accuracy of any research conducted using generative AI tools.
  • Include a notice on the first page of the filing indicating that AI was used to generate all or part of the filing, as well as the specific portions of the filing (e.g., page number and paragraphs or lines) that contain the AI-generated content.
  • Maintain records of all prompts or inquiries submitted to any generative AI tools in the event those records become relevant at any point.

So, is the juice worth the squeeze? Maybe. If used correctly and responsibly, AI can be a tremendous asset to any litigator in the trial or appellate courts: it can increase efficiency and even potentially reduce the costs of litigation. If it is not used responsibly, however, it can create additional work chasing down fake citations and running down rabbit holes of baseless legal arguments, and it poses the risk of lost motions and, perhaps even worse, the loss of credibility before the tribunal. Misuse or negligent use of AI also risks sanctions under Rule 11 and its state court equivalents, accusations of professional misconduct, and/or malpractice claims. Fortunately, many courts have provided litigators with clear recommendations that can easily be adopted as best practices when using AI to enhance any litigator’s practice.

Finally, lawyers must consider whether their malpractice insurance policy excludes coverage for claims arising out of the use of generative artificial intelligence.

This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.
