
AI in Legal Practice

The Line Between Ready Reference and Legal Research

September/October 2025


The introduction of generative artificial intelligence (AI) tools in the legal field has created both opportunities and significant challenges. While these tools promise streamlined workflows for document preparation, they pose serious risks through their tendency to produce “hallucinations”—impressive-looking but fictitious legal citations, nonexistent cases, or inaccurate legal analysis. Courts across the country are now confronting lawyers who have submitted AI-generated filings containing such errors, leading to sanctions and stern judicial warnings about the misuse of these technologies.

A 20th-Century Dream Finally Realized?

The promise of computers answering our questions isn’t new. Fans of classic cinema will recall the 1957 film Desk Set, starring Katharine Hepburn and Spencer Tracy, which explored this very tension. Hepburn plays a reference librarian at a television network who spends her days looking up facts in catalogs and delivering them faithfully to callers on the other end of a ceaselessly ringing phone. Tracy’s character arrives with revolutionary technology that threatens to end it all: EMERAC (Electromagnetic Memory and Research Arithmetical Calculator), the supercomputer.

“Did you ever see one of these electronic brains work?” Tracy’s character asks. Hepburn replies that she witnessed an IBM demonstration that translated Russian into Chinese, jokingly asserting: “Gave me the feeling that maybe, just maybe, people were a little bit outmoded.” To which Tracy responds, “Wouldn’t surprise me a bit if they stopped making them.”

This banter frames the fundamental question at the film’s core, one that remains as relevant now as it was then: Can machines truly evaluate information the way humans do? Attempting to reassure her reference department, Hepburn asserts: “No machine can evaluate.” By the film’s climax, when EMERAC appears to have taken over the reference department, the computer’s purpose is revealed as supposedly noble—not to replace the librarians, but rather “to free mankind” and “liberate his time for more important work.”

For the 65 years after Desk Set, this dream of liberation remained mostly in the realm of fiction. Then ChatGPT arrived in 2022 and seemed to finally make this vision a reality. Here it was at last: a computer willing to answer any question with a real answer, like Ask Jeeves, but more direct—like a real person, but more informed. Beyond search engines, which only index the internet and leave users to select their own sources, here was a machine that appeared to know how to consult trusted resources, just like the ready-reference librarians of Hepburn’s era.

The Fundamental Problem: AI’s Current Limitations

But the parallel to Desk Set reveals our current dilemma. The film asked what kinds of questions we would pose to a supercomputer. The examples were tellingly simple: Who held the highest lifetime batting average? What poem has the line, “Wah-wah-taysee, little fire-fly?” (Henry Wadsworth Longfellow’s The Song of Hiawatha). Give all available statistics on Corfu.

These are ready-reference questions—factual inquiries with definitive answers. AI can provide a quick ready-reference answer, but it still requires the user to verify it with a reliable source.

The real problem is that these tools are now being marketed as if they could perform complex legal research, going far beyond the realm of ready reference. Users need to keep in mind that generative AI tools like ChatGPT, Claude, Perplexity AI, and others were designed to predict text based on patterns in their training data. They are not yet capable of competently duplicating the work of legal professionals or reliably conducting legal research. By some accounts, these tools generate false or partially inaccurate information in response to more than half of queries—a phenomenon termed “hallucination.”1 Across the country, stories proliferate of lawyers filing AI-drafted documents filled with fabricated legal citations, creating a crisis of confidence in legal technology.

Real Consequences: Colorado Court Cases

Two recent Colorado cases illustrate the dangers of unverified AI use in litigation.

Coomer v. Lindell (US District Court for the District of Colorado)

In this case before Judge Nina Y. Wang, attorneys representing Mike Lindell and related entities filed a brief containing nearly 30 “defective citations,” including misquotes, misrepresentations of legal principles, and citations to cases that “do not exist.”2 When questioned, lead counsel Christopher Kachouroff initially blamed others before admitting—only when directly asked by Judge Wang—that he had used generative AI without properly verifying the citations.

Citing Federal Rule of Civil Procedure 11, Judge Wang imposed sanctions of $3,000 each on Kachouroff and his co-counsel Jennifer DeMaster (to whom he had delegated the cite-check), calling this the “least severe sanction adequate to deter and punish defense counsel in this instance.”3 She stopped short of referring them for disciplinary action, stating that the court “derives no joy from sanctioning attorneys.”4

Al-Hamim v. Star Hearthstone, LLC (Colorado Court of Appeals)

This was the first published opinion in which the Colorado Court of Appeals addressed generative AI misuse.5 The self-represented plaintiff’s opening brief included eight “fake cases” generated by AI. When confronted, he admitted using AI, acknowledged the hallucinations, and apologized for failing to verify the citations.

While the court declined to impose sanctions due to mitigating factors—including plaintiff’s pro se status and immediate admission of error—it issued a clear warning that future submissions with AI-generated inaccuracies will likely result in sanctions for both lawyers and self-represented parties. The message was unequivocal: technological convenience must never outweigh accuracy and integrity in the judicial process.

Ethical Obligations in the AI Era

An attorney’s use of AI can implicate numerous ethical obligations.6 Some of the most critical are discussed below.

Duty of Competence (Colo. RPC 1.1)

Lawyers must provide competent representation, which includes keeping abreast of changes in technology. Comment 8 to the rule explicitly addresses technological competence. Lawyers who use AI without understanding its limitations, and who fail to supplement its output with competent legal work, violate this duty.

Duty of Candor (Colo. RPC 3.3)

Submitting AI-generated filings containing fictitious cases violates the duty not to make false statements to tribunals. Refusing to admit errors when confronted compounds this violation.

Duty of Diligence (Colo. RPC 1.3)

While AI might seem to save time, failing to verify AI-generated content demonstrates a lack of diligence and creates delays and additional work for courts and opposing counsel.

Supervisory Responsibilities (Colo. RPC 5.1 and 5.3)

Supervisory lawyers must ensure that subordinate lawyers and nonlawyer assistants understand the risks and limitations of generative AI tools and are properly trained in their use and verification. Failure to implement such safeguards could be considered a failure to ensure competent work product.

Rule 11 Obligations (Fed. R. Civ. P. 11(b))

By presenting a filing to the court, an attorney certifies that its legal contentions are warranted by existing law or by a nonfrivolous argument for changing the law. Relying on AI without verification violates this fundamental requirement and subjects lawyers to sanctions.

Caveat Emptor: The Vendor Problem

Lawyers today face an alarming proliferation of generative AI tools that promise to revolutionize legal practice. Several legal research tools now on the market claim to answer any legal question with a simple query. Some vendors even promise to increase access to justice by creating specialized tools that let nonexperts type in any legal question and get a reliable answer. The hubris is staggering.

These vendors fail to understand that separating facts from law is a type of thinking that doesn’t come naturally—it takes years of practice to develop. Even lawyers shifting into new practice areas must study the right materials to become current on the law in those areas. And because the law constantly shifts, research must be updated continuously. This is not a task for a chatbot, and it’s not a promise that tech industry marketers should be making.

Seminars abound that teach how to prompt AI for desired answers. But these courses don’t convey the legally trained thinking required to parse questions and filter out irrelevant aspects. No prompting course teaches the fundamental skill of separating facts from law or the ability to identify which precedents truly apply to a novel situation. Performing this kind of task requires lawyers to simultaneously recall scattered sources of law and apply them to the facts at hand. AI, by contrast, struggles even to cross-reference multiple documents at once. This constraint, known as the “context window limitation,” puts accurate, comprehensive legal analysis out of AI’s reach for the foreseeable future.7

A Framework for Responsible AI Use

Just as doctors have adapted to patients arriving with WebMD self-diagnoses, lawyers must adapt to an environment where clients may have already consulted AI. Members of the general public are asking legal questions of custom GPTs and other chatbots and receiving all sorts of convoluted information. These tools are part of the landscape now.

Lawyers considering AI tools should follow these guidelines:

  • Understand the technology: Gain a basic understanding of how generative AI functions and, crucially, its limitations. Remember that AI cannot evaluate the way humans do.
  • Verify everything: Any AI-generated content must be thoroughly reviewed and verified before submission to any court or client. This includes every citation, every quote, and every legal principle.
  • Check citations: Run briefs and motions through a cite-checking product such as Westlaw’s Quick Check or Drafting Assistant to ensure your citations are accurate and updated.
  • Maintain traditional research skills: For in-depth legal research questions, follow the procedure taught in every American legal research class: consult reputable secondary sources such as treatises, legal encyclopedias, or restatements. Then follow up with primary sources of law—relevant statutes, regulations, and administrative decisions. Finally, read the notes of decisions on those statutes and update your case law research. The process is basic, but finding the answers remains grueling. You won’t escape it by typing questions into a search box.
  • Implement safeguards: Supervisory lawyers must ensure subordinates understand AI risks and are properly trained in verification procedures.
  • Disclose AI use when appropriate: Consider whether disclosure of AI assistance is necessary in particular contexts.
  • Draw clear boundaries: Use AI for ready reference and extractive tasks within closed, private systems. Draw the line at performing legal research.

The Proper Role of AI in Legal Practice: Your Ready-Reference Librarian

The distinction between ready reference and legal research forms the foundation of every law librarian’s education.8 Lawyers must exercise restraint before delivering legal advice to avoid unknowingly creating attorney-client relationships, and law librarians must observe the same caution. The concern is that vendors and IT developers may not be as cognizant of this critical line. Giving legal advice without proper authority not only puts attorneys and law librarians in ethical jeopardy but also potentially exposes IT specialists and AI vendors to charges of practicing law without a license. Ultimately, those who rely on the information pay the price.

Ready-reference librarians were once seen as the cornerstone of any community, business, or newspaper—the human version of today’s search engines. But law librarians are trained to notice the moment when a ready-reference question turns into a request for legal advice, and they avoid crossing that line with all due caution. Lawyers know that what they practice is not ready reference but rather the application of a vast universe of knowledge about the law to a specific set of unique facts. Law librarians are careful not to wade into those waters and instead restrict themselves to providing the right resource for the question. Lawyers should expect chatbots to practice this same restraint.

Appropriate AI Uses

That said, there are use cases for generative AI that lawyers will find helpful. The following are examples of the kinds of tasks for which these tools are appropriate.

  • Extractive tasks where users input documents and ask the platform to summarize, retrieve data, and organize information into categories.
  • Answering readily verifiable questions about law (e.g., “What is the statute of limitations for breach of contract in Colorado?”).
  • Brainstorming and idea generation.
  • Summarizing lengthy documents for initial review.
  • Administrative tasks like scheduling and document organization.
  • Transcribing handwritten documents into typed text (review the platform’s privacy policy so as not to breach the duty of confidentiality).

Inappropriate AI Uses

Vendors claim their tools can perform the following kinds of tasks, but the tools are not accurate enough to justify the risk.

  • Conducting comprehensive legal research.
  • Drafting legal arguments without thorough review.
  • Citing cases or statutes without verification.
  • Analyzing complex legal issues requiring nuanced understanding.
  • Providing legal advice to clients.
  • Determining possible outcomes for specific cases given particular facts.

Conclusion: The Human Element Remains Essential

Desk Set ultimately resolved with the recognition that machines and humans could coexist, each serving their appropriate function. EMERAC could handle the ready-reference questions, but human judgment, evaluation, and expertise remained irreplaceable. Nearly seven decades later, this lesson remains profoundly relevant.

The Colorado Rules of Professional Conduct prohibit lawyers from putting forth inaccurate research. Attorneys practicing in Colorado have been sanctioned for submitting hallucinated cases derived from irresponsible AI use, and pro se litigants have been warned that they face the same. Lawyers remain responsible for updating their cases and will face sanctions for submitting false citations to courts.

The message for legal professionals is clear: embrace AI’s potential for appropriate tasks—ready-reference queries, document extraction, and administrative efficiency—but never as a substitute for genuine legal research and analysis. Courts have spoken decisively that technological convenience cannot excuse the submission of inaccurate information.

In an era of rapid technological change, our foundational ethical obligations have never been more important. The integrity of our legal system depends on maintaining the distinction between what AI can do well—ready reference—and what remains the exclusive province of trained legal professionals—true legal research and analysis. Like the librarians in Desk Set, we must remember that no machine can truly evaluate. That remains, and must remain, a uniquely human responsibility.

Baylee Suskin, JD, MLIS, is a research and reference librarian at the US Courts Library for the Tenth Circuit Court of Appeals. She previously taught courses on the foundations of legal research at the University of Colorado and the University of Denver and has served on the Executive Board of the Colorado Association of Law Libraries since 2021—baylee_suskin@ca10.uscourts.gov. Coordinating Editor: Michelle Penn, michelle.penn@colorado.edu.


Notes

1. Magesh et al., “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” J. Empirical Legal Stud. (2025).

2. Coomer v. Lindell, No. 22-CV-01129-NYW-SBP, Order to Show Cause (D.Colo. Apr. 23, 2025), https://storage.courtlistener.com/recap/gov.uscourts.cod.215068/gov.uscourts.cod.215068.309.0.pdf.

3. Coomer v. Lindell, No. 22-CV-01129-NYW-SBP, Order (D.Colo. July 7, 2025), https://storage.courtlistener.com/recap/gov.uscourts.cod.215068/gov.uscourts.cod.215068.383.0.pdf.

4. Id.

5. Al-Hamim v. Star Hearthstone, LLC, 564 P.3d 1117 (Colo.App. 2024).

6. See Berkenkotter and Lipinsky de Orlov, “Artificial Intelligence and Professional Conduct: Considering the Ethical Implications of Using Electronic Legal Assistants,” 53 Colo. Law. 20 (Jan./Feb. 2024), https://cl.cobar.org/features/artificial-intelligence-and-professional-conduct.

7. Estreicher and Polani, “AI’s Limitations in the Practice of Law,” Verdict: Legal Analysis & Commentary From Justia (Aug. 8, 2025), https://verdict.justia.com/2025/08/08/ais-limitations-in-the-practice-of-law.

8. Healey, Legal Reference for Librarians: How and Where to Find the Answers (American Library Association 2014).