Menu icon Access the Business Officer Magazine menu by clicking or touching here.
Colorado Lawyer Magazine logo, click or touch this logo to return to the homepage Click or touch the Colorado Lawyer Magazine logo to return to the homepage. Search

The Legality of Generative AI—Part 2

I’m sorry, User. I’m afraid I can’t do that.

September 2023

Download This Article (.pdf)

This is the second in a series of articles discussing the legal implications of generative AI. This installment discusses potential risks to end users of AI in the commerce context.

Just as pocket calculators lighten the mental weight associated with arithmetic, generative AI promises to do the same for writing, art, and other creative endeavors. Rather than spending decades developing an artistic style, individuals can spend an afternoon describing to a computer program the kind of art they want to see and then select the best results from among the hundreds of output images. Rather than wrestle with essay structure and topic sentences, a writer can generate a grammatically correct first draft in a few minutes of prompting. In addition to producing output for the unskilled, this capability may also speed up the output of skilled professionals. But this potential increased efficiency is not without consequences. This article addresses some of the apparent risks to the generative AI end user, with a focus on use in commerce.1

An Efficient but Unreliable Business Partner

Some studies suggest that the use of generative AI has already increased worker productivity by 14%.2Combined with the ability to work remotely, some workers report becoming so productive that they can work several jobs simultaneously.3 The technology may be new, but businesses are already adopting generative AI, particularly large language models (LLMs), into their workflow.4

This potential utility comes with costs, of course.5 Some of these costs are to society as a whole. How does society change when individuals need not invest the personal growth required to compose, write, or draw at a certain baseline level of competence?6 Generative AI may harm those whose jobs are changed, replaced, or devalued, as is often the case with automation.7 Other risks are specific to individual users who may wish to integrate generative AI to generate content, interact with third parties, or make decisions.

Of course, it is possible to intentionally use generative AI to advance bad goals. Cybercriminals use generative AI to create viruses and malware.8 Scammers use AI to learn and replicate family member’s voices to place phone calls asking for money.9 Counterfeiters use AI to learn and copy the literary or artistic style of another.10 Political actors use it to create fake videos for campaign purposes.11 Generative AI is used to create fake data to stymie researchers.12 These are serious problems keeping businesses and law firms on their guard. But while generative AI may make it easier to commit or harder to detect fraud, these kinds of intentional bad acts are not alien to the law.

Software that can misbehave unbidden is more novel. The first article in this series explained why, at a very fundamental level, it is not possible to perfectly predict the behavior of generative AI.13 To recap briefly, generative AI models are not directly programmed with a series of instructions by human beings. Instead, the programmer provides a training framework and a large set of training data to a model, and then repeatedly tests how well the model performs compared to its training data. The programmer then adjusts the model to perform slightly better the next time until the model ends up generating a good internal map between prompts and the desired output. No one knows exactly what kind of internal algorithms the model ended up using. Moreover, the training process is not identical to the real world, and behavior that may have worked well in training may produce erroneous results in practice. Finally, though the software often appears omniscient when it comes to information in its training set, it is a far cry from omnipotent and will often produce vague and nonspecific output unless carefully prompted or managed to do otherwise.14

In its current state, therefore, the software behaves like an untrained, entry-level intern with full access to the Internet and great technical writing skills but without experience, context, or particular loyalty to your company. If a business would not entrust a task to such an individual, it probably should not entrust the task to AI.15

User Concerns Regarding Intellectual Property

A natural role for generative AI in a business context is to generate content.16 Automating the human creative process brings with it several new risks, however. Without more involvement from a human creator, the work product of AI is probably not protected by either copyright or patent law, leaving a business unable to protect what it created. And, as a golem without loyalty or context, AI could create legal problems if it infringes on prior works without the user’s intention or knowledge.

Difficulty in Protecting Intellectual Property in Works Generated by AI

While users may be prone to anthropomorphize, the law is not. Works wholly created by generative AI are likely not protected by intellectual property law because they are not human. Sometimes, non-human entities do have rights under the law. Corporations, governments, boats, and others can own property and exercise rights.17 Colorado law, for example, expressly conveys rights on corporations,18 and many state statutes expressly include entities in the definition of “person.”19 Where non-humans have rights, however, it is typically the result of an express exception to the normal assumption that laws apply only to natural persons. Courts have sometimes expanded on the rights afforded to entities,20 but premised on the entity as a vehicle for human constitutional rights.21

The judicial presumption appears to be that laws are intended to apply to human beings except as otherwise stated. So, for example, “the world’s cetaceans” (whales, porpoises, dolphins, etc.) do not have standing to bring claims under the Endangered Species Act or similar laws because, though Congress could have chosen to authorize suits by animals, it did not do so.22 Absent a law to the contrary, “[a]nimals are simply not capable of suing or being sued . . . .”23

Thus, even though the Copyright Act does not expressly state that an author must be human for a work to qualify for copyright protection, courts have held that this is so.24 In Naruto v. Slater, a crested macaque discovered a wildlife photographer’s camera and took several photographs of itself.25 The photographer published the photos and was sued by PETA, on behalf of the monkey, for violating the monkey’s alleged copyright.26 The Ninth Circuit noted that the Copyright Act “does not expressly authorize animals to file copyright infringement suits” and explained that the Act’s use of human family terms such as “children . . . legitimate or not, . . . widow, and widower, all imply humanity and necessarily exclude animals that do not marry and do not have heirs . . . .”27 In Urantia Foundation v. Maaherra, the Ninth Circuit also refused to acknowledge copyright rights for a book allegedly “authored by celestial beings” and instead based its analysis on the humans who arranged it and wrote it down.28

The US Copyright Office interprets the term “author” in the Copyright Act to “exclude non-humans,” including generative AI.29 It requires that any work be the product of human authorship to be eligible for copyright protection.30 With respect to generative AI, the Copyright Office weighs the specific facts of the creation and “will consider whether the AI contributions are the result of mechanical reproduction or instead . . . an author’s own original mental conception . . . .”31 The question appears to be whether the generative AI is responsible for the creative work or is merely being used as a tool by a creative human.32 At one extreme, the office refused to grant a copyright in an image that was entirely “autonomously created by a computer algorithm,” according to its author.33 In a more recent case, the office took a more nuanced approach with a comic book written and arranged by a human, but where the art was entirely the creation of generative AI.34 In this case, the office decided that the art could not be copyrighted, but the other creative elements could be since they were the product of a human.35

Patents, too, must be the invention of a natural person to warrant protection.36 In Thaler v. Vidal, an individual claimed to have developed AI systems that generate patentable inventions and attempted to patent two outputs of his AI.37 Despite prompts from the US Patent and Trademark Office to identify someone as the inventor, he insisted that the AI was the inventor.38 His patent was denied because “a machine does not qualify as an inventor,” and the Federal Circuit affirmed.39 The court reasoned that the use of the word “individual” in the Patent Act ordinarily meant a human being and, absent an indication that Congress intended a different result, the meaning was plain.40

While works created wholly by AI are therefore unlikely to be protected, there is probably some level of human involvement that can likely result in copyrightable or patentable work. Exactly how much human involvement is needed is an open question, but the answer probably lies in how much and what kind of creative work is performed by humans after receiving a result from the software. The Copyright Office suggests that feeding a text prompt into the software is not enough.41 It explains that “prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”42 Whether instructing a human or AI software, the user describes the work in text and gets back an image, but the resulting work is only eligible for protection when a human artist is involved. Without more, the raw output from generative AI may fall into the public domain.43

So, the human input needed to elevate an AI-generated work may be the same as that needed to establish copyright for a work derived from something in the public domain. Doing so requires that the final work be “original,”44 meaning it must “contain some substantial, not merely trivial originality.”45 While triviality may be in the eye of the fact finder, it is not commonly held to be a high bar. The test of originality is “concededly one with a low threshold.”46 Some decisions suggest that originality “means little more than a prohibition of actual copying.”47 Thus, it may be that only moderate reworking of the output of generative AI by an artist48 will allow protection for the final derivative work even if the original AI output remains unprotected.

It is conceivable that, as the capabilities of AI grow, new statutes may allow individuals or entities to obtain intellectual property rights in the output of generative AI. Commentators have proposed giving artificial intelligence personhood, at least indirectly, through the establishment of an “autonomous entity.”49 The scheme involves having a human set up a limited liability company, convey property to the LLC, establish an operating agreement requiring the LLC to act at the direction of a computer program, and then disassociating from the company so that it has no (human) members.50

There does not appear to be any current law dealing with the propriety of an algorithm functioning totally autonomously as a legal person, but there are clues that the law is not quite ready to dispense with the need for human creators entirely.51 Four states have experimented with allowing digital management of entities, known as decentralized autonomous organizations (DAOs), but these entities ultimately are owned by human beings.52 When faced with legal questions concerning DAOs, a court may still seek out the humans who created or operate the software.53 In one case brought by a government regulator against a DAO, the US District Court for the Northern District of California determined that this was an “unincorporated association” of the DAO’s human affiliates under applicable law for purposes of service of process.54 Even if this tendency to seek out the humans in charge of an algorithm relaxed, that would not solve the copyright problem. An entity can become the owner of works assigned to it55 or even be considered the author if the artist was doing work for hire,56 but both of these doctrines still assume the original creation was the product of a natural person.

Unintentional Infringement Through Use of Generative AI

A user who explicitly prompts generative AI for a new Star Wars screenplay or a Harry Potter novel would probably find no protection in the fact they used ChatGPT instead of Microsoft Word to write it. A standard word processor, however, will not generate an infringing work unbidden. Generative AI can, for the same reason that the Copyright Office does not believe the creations of generative AI can be protected: it mechanizes the human creative process. The output of generative AI might infringe on a work that the user has never seen.57 If it does, would the user of the software be liable? Perhaps.

To begin with, the user’s subjective intentions are probably not relevant. Copyright infringement is a strict liability claim.58 As to state of mind, all that is required is to show that the user engaged in a volitional act related to the infringement. So, for example, an Internet provider was not liable where a user posted an infringing image because the provider took no volitional act toward the infringement.59 In the case of generative AI, however, the user is typing in a prompt that calls the text or image into existence. This act may be sufficient to incur liability.60

The determinative question will likely be whether the user can be said to have had “access” to the original work. As part of showing infringement, the plaintiff must show that the defendant copied original elements of the first work.61 Showing “copying” is important because independent creation is a complete defense to copyright.62 It is not infringement for two creators to create similar, even identical, works without ever having seen the other.63 In the absence of direct evidence of copying, a plaintiff can show copying by demonstrating that the defendant had “access to the plaintiff’s work and that the two works share similarities probative of copying.”64 Showing prior access merely requires evidence that the defendant had a reasonable opportunity to view or opportunity to copy the prior work.65 This can be direct, such as proof that the defendant specifically viewed the original work,66 or a more general showing that the prior work was widely displayed in close geographic proximity to the defendant.67 Since intention is not relevant, once prior exposure is shown, even subconscious copying is sufficient to support infringement.68

In some cases of infringement, such as ones involving widely known characters or stories, it may be possible to show that the user had access to the original works through the general popular media. But, using generative AI, it is possible for a user to create a work that is similar to a more obscure existing work without ever having seen it. Perhaps the model “saw” it, though. Is that enough? There appears to be no guidance in the existing law. After all, never before has a writer had to worry that her typewriter might have read a story she did not. It remains to be seen how courts will resolve this issue. Perhaps the same arguments that suggest human involvement is required to have a copyrightable work in the first place might similarly suggest that exposure to human memory might be required for infringement.

Even if the courts determine that “access” can be shown by demonstrating that the model was trained on a dataset including the offending work, there may be other unique challenges to a plaintiff seeking to prove infringement. A copyright plaintiff likely will not have access to the training data used by the generative AI software employed, and the software developer has no incentive to voluntarily disclose the details of the training set.69 Some plaintiffs are arguing that they can divine the inclusion of their work in the training process by showing that the generative AI is able to summarize that work.70 But since this might only indicate that the model was trained on summaries or reviews of the original work and not the original work itself, it is unclear if that strategy will be successful. Attempts to prove “access” based on the model’s training may thus flounder upon difficult third-party discovery.

For the same reason, a user wishing to avoid infringement might have a hard time knowing in advance whether the model was trained on a particular work because that user also has no access to the underlying training data. Nevertheless, a business that is serious about using generative AI may be able to mitigate the risk of accidental infringement by checking the output for similarity to existing work. The best course would probably be to hire a copyright lawyer to conduct due diligence, but there are budget-friendly options. One option might be to employ generative AI to solve its own problem. Midjourney, a generative AI for artwork, has a “/describe” command that essentially runs the model in reverse to generate possible text prompts from an image.71 This can include the names of artists who the model believes create similar work. The user could then inspect other works by those artists for similarity. Google, too, has a “reverse image search” that allows users to upload an image and use Google’s search engine to identify possible similar images online.72

Providers of generative AI can and are taking steps to minimize this problem, but it remains to be seen how effective those steps are. The model does not exist in a vacuum but relies on other code to obtain input and provide output to the user. Providers can implement external software that looks for improper behavior and stops it. Recently, users have observed changes to Open AI’s ChatGPT that abort responses that appear to be providing copyrighted material.73 Microsoft may have implemented some form of copyright monitoring and warning system in its image-generating software.74 Google’s music-generating AI, MusicLM, appears to reject prompts that use the name of an existing song, artist, or other intellectual property.75 As the technology improves, it may become more commercially viable to rely on providers to filter out potential infringement.

Misrepresentations and Misstatements Related to AI in Commerce

The danger to a business using generative AI does not end with intellectual property law. In any situation where an untrained, disloyal employee could cause mischief, generative AI potentially could, too. A business should not lie about the skill level of its employee. It should properly train and supervise its employees. If the employee lies to third parties within the scope of employment, the business may find itself with legal exposure or worse. And, of course, a business cannot suggest that its employee can perform services that can only be performed by a licensed professional. So too with generative AI.

Misrepresentation About Services or Products Containing AI

As of the writing of this article, excitement about the possibilities of generative AI has reached a fever pitch.76 There may be a temptation to cash in on the hype by misrepresenting the details or capabilities of systems. The Federal Trade Commission (FTC) has been quick to issue warnings about the same. The FTC issued a statement in which it warns businesses that “artificial intelligence” is “an ambiguous term with many possible definitions.”77 It warns against two specific forms of possible misrepresentation: (1) misrepresenting whether a business actually uses AI at all, and (2) exaggerating what the technology can do.78 The FTC warns that it can “look under the hood” to see whether AI is actually being sold, and further notes that using an AI tool to develop a product does not mean the product “has AI in it.”79

If a business makes misleading claims about the functionality of its services, it may be liable under state and federal law. The Colorado Deceptive Trade Practices Act prohibits false representations about the quality, characteristics, uses, or benefits of goods or services.80 The misrepresentation is equally actionable whether it was done “knowingly” or merely “recklessly.”81 Under the Federal Trade Commission Act, any “unfair or deceptive acts or practices in or affecting commerce” are unlawful.82 The Act defines “unfair” conduct as conduct that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”83 A representation is “deceptive” if it is likely to mislead consumers acting reasonably under the circumstances and is material.84

A business should be careful about advertising the use of “AI” at all. In the recent past, the discussion over AI has had a lot to do with generative AI, namely, software that is able to produce creative works such as text, art, code, or music. But the term “AI” is older than 2023 and has changed over time. It was coined in the 1950s to refer to a computer performing tasks that were previously only capable by beings already recognized as intelligent.85 Early computers like ENIAC first tackled the problem of arithmetic, which had previously been something only a human could do.86 Over time, each time engineers figure out how to automate a task that was previously assumed to require “intelligence,” that task tends to be removed from the scope of what is considered “true intelligence,” and the popular definition of AI retreats to accommodate.87 Today, a pocket calculator is not considered AI even though it is doing an intellectual task only humans could do before its invention.

Because the definition of AI is fluid, the term is vulnerable to being used in a misleading way.88 It may not be possible to determine precisely what can and cannot be called AI, but at least some legal commentators suggest that in the modern context, AI refers to a computer program that does not “operate based on a definite set of pre-programmed instructions” but instead is “trainable and able to learn from experiences” and thus produce output “not contemplated by the human-in-interest.”89 In other words, a company marketing its product as AI in 2023 may need to consider the present context, in which “AI” usually refers to software that runs on a model created by machine learning and has some level of unpredictability, as opposed to a piece of software that was explicitly written with conditional statements to produce a given output for a given result. So, for example, it may be a deceptive trade practice for a business to claim it is using “artificial intelligence” to generate documents if it merely runs a website that fills in the blanks on a form in the fashion of a Mad Lib.

Even if the service offered does use the modern version of AI, businesses should be truthful about its capabilities and limitations. Generative AI software is unpredictable and may generate improper or incorrect output.90 In the case of LLMs, some of this unpredictability is inherent in the way the models are trained and refined. An LLM is generally first trained to accurately predict the next word by testing it based on its training data. Then, the currently popular LLMs are also taken through a period of human-assisted reinforced learning involving a human subject ranking how much they like the output of the LLM.91 As a result, the model ends up with a good ability to predict the next word based on its training data and human-based feedback, not based on what is objectively true.

In several well-documented cases, LLMs are prone to “hallucinate,” or make up false information, providing a confident and well-written response that is not justified by the training data.92 The model’s training process encourages the LLM to provide some kind of response that sounds like what the LLM was trained on and would be pleasing to a human. LLMs can be asked to provide an explanation for a scientific phenomenon that does not exist,93 provide incorrect biographical information,94 agree with false or fake claims,95 get the date wrong,96 invent false court decisions,97 or become stuck in a loop of unhinged ranting.98 Users must bear in mind that LLMs were not trained to navigate the real world, only to predict the next word. Currently, LLMs exist in Plato’s cave,99 and all they know about how the real world operates is from the shadows reflected in the language upon which they trained. Blind faith in the veracity of current LLMs, like blind faith in an unskilled entry-level intern, is a recipe for disaster.100

So too would be promises of infallibility by businesses offering generative AI. To the contrary, businesses should provide prominent disclaimers concerning the risks and instability of the software. This is the approach taken by OpenAI, which states in its disclaimer that it “takes no responsibility or warranties about the completeness, accuracy, or reliability” of its information101 and notes in its terms of use that its software “may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts.”102 Google, likewise, warns its users that “LLM experiences (Bard included) can hallucinate and present inaccurate information as factual.”103 A disclaimer is probably the bare minimum a company can do, however. It may be better to ensure that customers enter into user agreements or similar contracts before getting access to tools powered by AI, in which customers are not only warned of the limitations of the software but also affirmatively agree to shift the responsibility for those errors to the customer or otherwise hold the business harmless.

Misrepresentations or Mistakes Made by AI

Instead of using generative AI internally, some businesses may want to give their customers direct access to generative AI. LLMs take human language as input and output, code, and data; customers used to texting or typing emails can easily interact with LLMs.104 Companies are exploring use cases where customers obtain access to a trained LLM as the core product, including online tutoring,105 therapy,106 weight loss guidance,107 small talk for romantic partners,108 or playing the role of an AI romantic partner itself,109 and as a supplement to an existing product, such as customer service chatbots.110 Any time a business is connecting generative AI to its customers directly, however, it must be concerned about the AI itself making mistakes or misstatements that could get the business into trouble.

Federal law likely provides no safe harbors. The normal safe harbors that protect companies that provide access to information online probably are not going to help when a business gives customers access to generative AI. There are two major safe harbors in federal law for online providers: the Digital Millennium Copyright Act (DMCA)111 and the Communications Decency Act (Section 230).112 Both of these safe harbors apply only where the offending material was provided by a third party, not where it was generated by the service provider itself. The DMCA provides that a service provider shall not be liable for copyright infringement by reason of the provider’s transmitting, routing, or providing transient storage of infringing material under certain conditions.113 Among other things, those conditions require that “the transmission was initiated by or at the direction of a person other than the service provider.”114 Section 230 offers broader protection to service providers, not just protection from copyright infringement. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”115 As with the DMCA, it only offers protection against information “provided by another,” not against information provided by the service provider itself.116

Whether DMCA and Section 230 protect a business against bad behavior by customer-facing LLMs may depend on how the LLM is being used. If a chatbot is acting merely as a “neutral tool” to retrieve information from another provider or user, the business might be able to claim Section 230 protection.117 This argument could arise for a provider that, for example, allows users to train chatbots that then have preliminary conversations on the user’s behalf with potential dates118 or uses a chatbot to facilitate research into articles published by third parties.119 Here again, though, the peculiar way in which generative AI works may be its downfall. In Fair Housing Council v., LLC, the Ninth Circuit had to consider whether a website was involved in developing information provided by users about sex, sexual orientation, and the presence of children, because the website required users to provide that information.120 The court found that by “requiring subscribers to provide the information . . . and by providing a limited set of pre-populated answers,” the website became more than a passive transmitter and was actually engaged in developing the content.121 Thus, the website had no immunity. In the case of generative AI, too, the provider is probably not able to merely provide naked access to third-party data. That kind of functionality does not require a chatbot. Instead, the LLM will create new text based on what relationships in its model are triggered by the user’s input. If merely posing multiple choice questions is enough involvement to lose Section 230 protection, then it is hard to see how generating entirely new explanations or paraphrases would not also meet that threshold.122

Defamatory statements by generative AI. A misbehaving chatbot could cause trouble for a company in basically the same way as a rogue employee chatting with customers. If the LLM publishes false and scandalous biographical information about a third party to a user, the subject of the information might have suffered actionable defamation. One such case was filed this summer in Georgia, in which a reporter allegedly asked ChatGPT to summarize the accusations in a complaint filed in the Western District of Washington.123 In response, ChatGPT hallucinated accusations that one Mr. Walters was “accused of defrauding and embezzling funds.”124 Walters, upon learning of this, filed a defamation lawsuit. In Colorado, defamation is “a communication holding an individual up to contempt or ridicule that causes the individual to incur injury or damage”125 that is false 126 and published to a third party.127 Certain kinds of accusations, including allegations of criminal conduct, are per se defamatoryand are actionable even without showing damage.128 If a chatbot spouts defamatory statements to the public, especially if it slurs business competitors, a business may risk becoming a test case for defamation liability.

The risk of defamation claims might be mitigated with a disclaimer to the effect that the LLM output is inaccurate, non-factual, or not the opinions of the business, but it is unclear if such a disclaimer will eliminate the problem. On the one hand, simply labeling a work “fiction” does not prevent it from being defamatory.129 In Muzikowski v. Paramount Pictures Corp., the little league coach featured in the book on which the movie Hardball was based sued the movie studio for defamation.130 The Seventh Circuit reasoned that even though the movie was labeled “fiction,” the coach was still entitled to present evidence that a character in the movie was actually intended to portray him.131 On the other hand, Fox News was recently successful in dismissing defamation claims brought against it based on statements of one of its hosts by arguing that, in context, the host’s statements could not be construed as provable facts but mere hyperbole.132 The court considered the context of the host’s speech as being part of an ongoing political commentary and decided that the “general tenor of the show should then inform the viewer that he is not stating actual facts” and that any reasonable viewer must approach “with an appropriate amount of skepticism.”133 If a chatbot spouts defamatory statements about a third party but there is a prominent disclaimer not to believe those statements, is that more like a cursory note that the information is fiction or more like political partisan spouting arguments in a subjective forum?

Discriminatory statements by generative AI. A misbehaving chatbot might also make discriminatory statements concerning race, gender, national origin, or other protected classes. An LLM’s output is only as good as the text that trained it and the human preferences that refined it. If the training process included biases or misinformation, the model will include associations to those patterns.134 For example, some have reported finding evidence that certain LLMs identify certain jobs, like “flight attendant” or “secretary,” as feminine and others, like “lawyer” or “judge,” as masculine.135 This problem can be exacerbated by users using “prompt injection,”136 such as telling the LLM to answer in the persona of someone who would say discriminatory things.137 At least in the view of many federal agencies, including the Equal Employment Opportunity Commission (EEOC), businesses are liable for actionable discrimination carried out by their computer programs.138 In a press release describing enforcement action against a tutoring company that allegedly used an algorithm to reject any applicants above a certain age, the EEOC stated: “Even when technology automates the discrimination, the employer is still responsible.”139

In Colorado, discriminatory statements by a chatbot might give rise to problems under several laws. Discriminatory responses from LLMs in the employment context may cause violations under the Colorado Anti-Discrimination Act.140 This is true even if the LLM is not actually making any decisions. The Act prohibits an employer from “caus[ing] to be printed or circulated any statement” in connection with prospective employment that “expresses, either directly or indirectly, any . . . discrimination as to disability, race, creed, color, sex, sexual orientation, gender identity, gender expression, religion, age, national origin, or ancestry.”141 It also prohibits “harass[ment],” which means creating a hostile work environment based on a protected class.142 Colorado has several unique laws governing discrimination by AI, including prohibiting insurers from using algorithms to unfairly discriminate143 and requiring companies that handle a large amount of private customer data to conduct annual reviews for fairness and disparate impact for certain decisions that produce legally significant effects on a consumer.144

Contracts or promises made by generative AI. Another potential risk for businesses is that a customer-facing chatbot might make promises or representations that could later bind the company. If a customer, through prompt injection or inadvertence, coaxes a chatbot to promise to sell goods at a particular price, for example, would that create a contract binding on the company? This problem has already arisen in the field of smart contracts that incorporate algorithms into their terms.145 It might be possible for a chatbot to produce written words communicating to a third party that objectively show an offer to enter into a contract, and some commentators suggest that may be enough.146 There are cases supporting the idea that a company can be bound by errors in its algorithms. In Bristol Anesthesia Services., P.C. v. Carilion Clinic Medicare Resources, LLC, the US District Court for the District of Tennessee found a triable issue of fact for an implied-in-fact contract claim where an invoice was generated and paid in the wrong amount due to an error in the algorithm.147

Even if the words generated by AI are not found sufficient to demonstrate an actual meeting of the minds between a customer and a business, promises or representations might still cause problems. Promissory estoppel, for example, does not require actual acceptance.148 Instead, it requires that (1) the promisor made a promise to the promisee, (2) the promisor should reasonably have expected that the promise would induce action or forbearance by the promisee, (3) the promisee reasonably relied on the promise to the promisee’s detriment, and (4) the promise must be enforced to prevent injustice.149 If a chatbot hallucinates a statement that an uninformed customer might reasonably interpret as a promise to provide goods or services at a certain price or in a certain amount and then relies on that promise, an estoppel claim might arise.

As with accidental infringement, improvements to the technology may minimize the risk of generative AI misbehaving, but it is unclear whether it can ever be entirely eliminated any more than human employees can be guaranteed to always act perfectly. As mentioned above, a business employing customer-facing generative AI would be wise to ensure that customers sign a user agreement expressly noting the limitations of the software and making it clear that the software is not an agent with authority to bind the company. A business would also be wise to carefully supervise and monitor any such generative AI.

Practicing Without a License

Because the most popular LLMs are trained on a massive corpus of human writing, including specialized professional knowledge, the models end up learning relationships between words common to those professions. This means, as a practical matter, that they can answer medical, legal, or scientific questions. Some can do this sufficiently well to pass law school tests,150 the bar exam,151 and medical licensing tests.152 This capability may tempt some companies to market generative AI as a substitute for a professional in a specialized field. Some professions, however, require a license. These include attorneys,153 doctors,154 certified public accountants,155 public adjusters,156 real estate brokers,157 and others.158 The model may be capable of producing specialized information and even sometimes providing correct advice, but a business is asking for trouble if it sells an LLM for this purpose.

Marketing or using generative AI as a substitute for a licensed professional likely violates Colorado law. The practice of law, for example, includes “act[ing] in a representative capacity in protecting, enforcing, or defending the legal rights and duties of another and in counseling, advising and assisting [another] in connection with these rights and duties . . . .”159 More broadly, practicing law involves “the exercise of professional judgment, calling upon ‘legal knowledge, skill, and ability beyond [that] possessed by a layman.’”160 Charging a fee to prepare legal documents for another can also constitute the practice of law.161 There is no reason to think that the Colorado Supreme Court would find the artificial judgment of generative AI is permitted to engage in this kind of conduct.

Some jurisdictions have recognized a so-called “scrivener’s exception” that allows unlicensed individuals to merely record information that another provides so long as the individual exercises no judgment at all.162 It seems unlikely that generative AI could fall under this exception because LLMs do more than merely record information; they use a model of relationships between prompts and text output to predict the appropriate legal advice or document being requested. In Conway-Bogue Realty Investment Company v. Denver Bar Association, the Colorado Supreme Court determined that unauthorized practice of law specifically includes the preparation of promissory notes, deeds, mortgages, releases, leases, notices, and demands for particular clients.163 This implies potential liability for any company marketing generative AI as a replacement for lawyers.

Early adopters of generative AI for this purpose are indeed facing peril. A company named DoNotPay marketed its software as “a robot lawyer” on behalf of customers,164 is developing apps that use LLMs to provide services like contract negotiations,165 and planned to provide an earpiece to allow a pro se defendant to rely on an LLM to present argument in court.166 The last of these led to warnings from “multiple state bars” of possible “prosecution and prison time,” according to the company’s owner.167 It is unclear whether any prosecutions have actually commenced, but at least two civil lawsuits alleging unauthorized practice have already been filed.168

Of course, if a licensed professional is involved, that professional probably may use generative AI to provide professional advice.169 At least in Colorado, the professional might not even have to be licensed in the same practice area as the advice is given. In Conway-Bogue, the Colorado Supreme Court ultimately permitted real estate brokers to practice law in the form of drafting documents because the brokers were properly licensed in their profession, limited the drafting of paperwork to customers who had hired them for real estate work, and charged no fee for this work.170 Could a realtor use ChatGPT to write those contracts? The scope of a professional’s ability to use generative AI to supplement their core profession with skills from other licensed trades has yet to be determined.171

Breach of Confidentiality

A business using generative AI to supplement its existing workflow rather than as a core part of the goods or services offered to its customers still must grapple with risks in using the new technology. Among these is the risk that using generative AI may endanger confidentiality.

At the moment, the most powerful and common generative AI programs are owned and operated by third parties like Google, Microsoft, and OpenAI. Few users download and run their own, local generative AI programs, though this is possible.172 If a business uses the generative AI of a large tech company, it has to send its prompts or other data to that company for the model to process. Businesses often deal in confidential information such as trade secrets, protected health information, student information, or customer private data. This has already caused problems for businesses allowing employees to use LLMs. Samsung employees copied proprietary code into ChatGPT to help fix errors.173 When this information is provided to generative AI, does it remain confidential?

The terms and conditions of some generative AI explicitly state that prompts and responses to the same may be used by the provider to help develop and improve the models or use the information in some other way.174 Models can continue to train and improve over time, adjusting internal relationships, with the result that the prompt and output can be “gobbled up” and used as “fodder for pattern matching . . . .”175 The model may subtly adjust the pattern of relationships that allow it to predict text to account for new prompts. Some commentators suggest that this process may work to undermine confidentiality, including trade secrets.176 There is some truth to this claim, because the definition of trade secret in Colorado requires that the information be “secret” and that the owner “took steps to keep it secret.”177 But the outcome is probably not that simple. Generative AI models do not retain perfect copies of all information from training. Rather, the models are subtly adjusted by each new training iteration. The precise prompt and response are not copied into the model so much as added to the mathematical matrix making up the weights in the model. Thus, disclosure to ChatGPT may not automatically destroy a company’s trade secrets, but it may undermine such protection.178

Businesses wishing to avoid this risk have a few options. There are already models available that are small enough to run on a local computer and could, in theory, operate without any disclosure outside of the local machine. Doing this, however, means the business loses the potential protections imposed by large LLMs on misbehavior. Moreover, the business would have to be careful to vet whether these models came from legitimate sources as opposed to improper leaks of intellectual property and are free of viruses or malicious code. Another option is to hire a company that has established its own instantiation of a generative AI model, independent of a public model, and has taken effective steps to prevent any injection of user data.179 It appears likely that if generative AI shows it is useful in a particular application, third-party services to accommodate this need and simultaneously preserve confidentiality will likely emerge.


Businesses are likely to experiment in the coming years with the best uses for generative AI and, while they do, judges and legislatures will be sorting out the legal implications of those uses. For now, caution is likely the best policy. If a business would not entrust a task to an entry-level intern, it should not entrust the task to generative AI. Rather, the software should be supervised carefully by a skilled employee and, where appropriate, a licensed professional. The software should not be overmarketed or overpromised, nor entrusted with any secret information. This will help avoid misbehavior and help ensure that the work product of the generative AI may best qualify for intellectual property protection. Where a business interfaces with the public using generative AI, it should ensure that those engaging with the software are provided disclaimers or user agreements to minimize the chance of misunderstanding or reliance. When advising business clients, attorneys should examine any use case with a clear view of these risks and advise clients carefully.

Colin E. Moriarty practices with Underhill Law, P.C. in Greenwood Village. Focusing on business and commercial litigation and arbitration, he has litigated business disputes, construction and fabrication defect claims, employment discrimination lawsuits, subcontractor litigation, state RICO fraud lawsuits, civil theft disputes, insurance appraisal and adjustment disputes, and other lawsuits involving complex commercial and construction matters— Coordinating Editor: K Kalan,; William P. Vobach,

Related Topics


1. Because a general familiarity with the functioning of generative AI is helpful, the reader is encouraged to read part 1 of this article series before embarking on this one. Moriarty, “The Legal Challenges of Generative AI—Part 1: Skynet and HAL Walk Into a Courtroom,” 52 Colo. Law. 40 (July/Aug. 2023),

2. Brynjolfsson et al., “Generative AI at Work,” Nat’l Bureau of Econ. Rsch. (working paper 31161) (Apr. 2023),

3. Chung, “‘Overemployed’ People Using ChatGPT to Secretly Work Multiple Full-Time Jobs,” N.Y. Post (Apr. 19, 2023),

4. In a survey of 1,000 US business leaders, found that about half of them are already using ChatGPT and 30% plan to. “1 in 4 Companies Have Already Replaced Workers with ChatGPT,” ResumeBuilder (Feb. 27, 2023), Use cases include internal tasks like coding, time management, and research, as well as external tasks like customer service inquiries and email creation. See O’Sullivan, “10 Ways Businesses Are Using ChatGPT Right Now,” (blog) (June 20, 2023),

5. This article is focused on practical, business-oriented risks and will not delve into greater existential concerns sometimes raised in connection with developing general AI. Lay readers interested in a hype-free introduction to the challenges and methods of ensuring that capable AI systems are working toward the goals we want them to, known as “AI alignment,” would be well served in first reviewing the video series produced by Robert Miles, a PhD student at the University of Nottingham. Miles, Robert Miles AI Safety, These provide a good layperson understanding of the technical reasons why some professionals are worried, without engaging in science fiction thinking.

6. If the experience of calculators in the classroom is any indication, it does not necessarily mean that people will stop developing the skills AI can replicate. See Ellington, “The Effects of Non-CAS Graphing Calculators on Student Achievement and Attitude Levels in Mathematics: A Meta-analysis,” 106 Sch. Sci. and Mathematics 16, 23 (Jan. 2006) (surveying studies and concluding no overall impact on student learning when calculators are used in instruction); Ellington, “A Meta-Analysis of the Effects of Calculators on Students’ Achievement and Attitude Levels in Precollege Mathematics Classes,” 34 J. for Rsch. in Mathematics Educ., 433, 456 (Nov. 2003).

7. See, Maynard “Afraid You’ll Lose Your Job to ChatGPT? You’re Just The Latest Person in the Last 200 Years to Become a Luddite,” Fortune (May 12, 2023),; Ekin, “Neo-Luddites and the Era of AI,” McGill Int’l Rev. (Apr. 8, 2023),

8. See “OpenAI: Cybercriminals Starting to Use ChatGPT,” Check Point Rsch. (Jan 6, 2023),

9. Kohli, “From Scams to Music, AI Voice Cloning is On the Rise,” Time (Apr. 29, 2023) In one case, a worker was tricked into wiring money to criminals by a phone call using a cloned voice of the business’s chief executive. Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” Wall St. J. (Aug. 30, 2019),

10. Moran, “ChatGPT is Making Up False Guardian Articles,” Guardian (Apr. 6, 2023),

11. Bond, “AI-generated Deepfakes Are Moving Fast. Policymakers Can’t Keep Up,” NPR (Apr. 27, 2023),

12. See Liverpool, “AI Intensifies Fight Against ‘Paper Mills’ that Churn Out Fake Research,” Nature (May 31, 2023),

13. Moriarty, supra note 1. See also Weidinger et al., “Taxonomy of Risks Posed by Large Language Models,” FAccT (June 21, 2022), For example, LLMs trained to mimic human speech learned to perform arithmetic, descramble letters in a word, conceptualize space such as in navigating a map, and correctly answer knowledge-based questions. Wei et al., “Emergent Abilities of Large Language Models,” arXiv (Oct. 26, 2022), See also Binz and Schultz, “Using Cognitive Psychology to Understand GPT-3,” 120 PNAS (Feb. 2, 2023), But see Schaeffer et al., “Are Emergency Abilities of Language Models a Mirage?,” arXiv (May 22, 2023), Some LLMs also demonstrate an ability to learn new tasks without retraining. Zewe, “Solving a Machine Learning Mystery,” MIT News (Feb. 7, 2023),

14. See Hofstader, “Gödel, Escher, Bach, and AI,” Atlantic (July 8, 2023),

15. This is an analogy as to the capabilities of generative AI only, not its nature. It would be a mistake to anthropomorphize. There is no particular reason that a model would develop any feelings, intentions, or other human experiences unless algorithms equivalent to such improved the quality of the output in training. Similarly, in the author’s opinion, it is not necessary to be distracted over questions about the nature of intelligence to analyze the legal risks. As Alan Turing suggested at the dawn of the computer age, it is unnecessary to worry about philosophical concepts when one can simply measure the observed capabilities of an artificial system. See Turing, “Computing Machinery and Intelligence,” 433 LIX Mind (Oct. 1, 1950). The risks of AI result not from philosophical questions but from observed behavior.

16. Companies are already rushing to offer AI writing assistant services for copywriters and journalists. Bernard, “ChatGPT and Generative AI Not Replacing Copyrighters . . . Yet,” CMSWire (Mar. 9, 2023),

17. See e.g., City of Sausalito v. O’Neill, 386 F.3d 1186, 1200 (9th Cir. 2004) (city); Walker v. City of Lakewood, 272 F.3d 1114, 1123 n.1 (9th Cir. 2001) (nonprofit corporation); The Gylfe v. The Trujillo, 209 F.2d 386 (2d Cir. 1954) (boat); Cruzan v. Dir., Mo. Dep’t of Health, 497 U.S. 261 (1990) (human in a “persistent vegetative state”).

18. See CRS § 7-103-102.

19. See, e.g., CRS §§ 38-22-101(1), 7-90-102(49), 26-6-102(28), 32-11-104(41)(a), and 38-20-102(11).

20. Citizens United v. FEC, 558 U.S. 310 (2010).

21. In Citizens United, the Supreme Court did not reason that corporations were included in the plain language of the First Amendment, but rather that individual rights persisted though exercised in an association of humans. Id. at 342 (“political speech” which would be protected by an individual “does not lose [its] First Amendment protection ‘simply because its source is a corporation’”) (citing First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 784 (1978)).

22. Cetacean Cmty. v. Bush, 386 F.3d 1169, 1174–79 (9th Cir. 2004).

23. Jameson v. Oakland Cnty., No. 10-10366, 2011 U.S. Dist. LEXIS 83392, *3 n.1 (E.D.Mich. July 29, 2011) (“The court thought that there may have been a misapprehension about the identity of ‘Detective Christie,’ but, no, Plaintiff’s complaint made quite clear that he was suing a dog.”).

24. Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018).

25. Id. at 420.

26. Id.

27. Id. at 426 (internal citations and quotation marks omitted). Determining whether a married monkey has standing under the Copyright Act is an exercise left to the reader.

28. Urantia Found. v. Maaherra, 114 F.3d 955, 956, 958 (9th Cir. 1997).

29. US Copyright Off., Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence 2, 37 CFR pt. 202, See also Zirpoli, Generative Artificial Intelligence and Copyright Law 2, Cong. Rsch Serv. (updated May 11, 2023),

30. US Copyright Off., supra note 29 at 2.

31. Id.

32. Alternatively, the prompt might be more akin to the ideas behind the creative work, and mere ideas are not copyrightable. Moriarty, supra note 1.

33. US Copyright Off. Rev. Bd., Decision Affirming Refusal of Registration of a Recent Entrance to Paradise 2–3 (Feb. 14, 2022),

34. See US Copyright Off., Cancellation Decision re: Zarya of the Dawn 2 (Feb. 21, 2023),

35. Id.

36. Thaler v. Vidal, 43 F.4th 1207, 1209 (Fed.Cir. 2022).

37. Id.

38. Id.

39. Id.

40. Id. at 1211 (citing Mohamad v. Palestinian Auth., 566 U.S. 449, 454 (2012)).

41. Id.

42. US Copyright Off., supra note 29 at 2.

43. See Durham Indus., v. Tomy Corp., 630 F.2d 905, 908 (2d Cir. 1980) (“[I]n the absence of copyright . . . protection, even original creations are in the public domain . . . .”).

44. 17 USC § 102(a).

45. L. Batlin & Son, Inc. v. Snyder, 536 F.2d 486, 490 (2d Cir. 1976), cert. denied, 429 U.S. 857 (1976) (cited more recently by Cabrera v. Teatro del Sesenta, Inc., 914 F.Supp. 743, 763 (D.P.R. 1995)). Put another way, “the work must be original in the sense that the author has created it by his own skill, labor and judgment without directly copying or evasively imitating the work of another.” Alva Studios, Inc. v. Winninger, 177 F.Supp. 265, 267 (S.D.N.Y. 1959).

46. Knickerbocker Toy Co. v. Winterbrook Corp., 554 F.Supp. 1309, 1317 (D.N.H. 1982) (citing L. Batlin & Son, 536 F.2d 486)).

47. Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 102–03 (2d Cir. 1951) (citing Bleistein v. Donaldson Lithographing Co., 188 U.S. 239 (1903)).

48. Several artists are using generative AI in this way, as a starting point for other works. Biles, “What is Art Without the Human Mind,” Mind Matters (Dec. 15, 2022),

49. Bayern, “Are Autonomous Entities Possible?,” 114 Nw. U. L. Rev. Online 23, 26–27 (2019); LoPucki, “Algorithmic Entities,” 95 Wash. U. L. Rev. 887 (2018).

50. Bayern, supra note 49 at 26–27.

51. Linarelli, “Advanced Artificial Intelligence and Contract,” 24 Unif. L. Rev. 330 (June 2019),

52. Wyoming, Tennessee, and Vermont permit DAOs. Wyo.Stat.Ann. §§ 17-31-101 et seq.; Tenn. Code Ann. § 48-250-103; 11 V.S.A. § 4173. And as of March of 2023, the Utah Legislature is working on its own DAO Act. H.B. 357, 2023 Gen. Sess. (Utah 2023),

53. Commodity Futures Trading Comm’n v. Ooki DAO, No. 3:22-cv-05416, 2022 U.S. Dist. LEXIS 228820 (N.D.Cal. Dec. 20, 2022).

54. Id. In Colorado, similarly, unincorporated associates cannot be liable in tort, contract, or otherwise, but their members could be based on partnership law. CRS Title 7, Art. 30, Prefatory Note.

55. 17 USC § 201(a).

56. Playboy Enters. v. Dumas, 53 F.3d 549, 554 (2d Cir. 1995), and cases cited therein.

57. This is probably why Microsoft’s current version of the Bing terms and conditions specifically states that it “does not make any warranty or representation of any kind that any material created by the Online Services does not infringe the rights of any third party in any subsequent use of the content you may use . . . .” Bing Conversational Experiences and Image Creator Terms (Feb. 1, 2023),

58. Buck v. Jewell-LaSalle Realty Co., 283 U.S. 191, 198 (1931); Educ. Testing Serv. v. Simon, 95 F.Supp.2d 1081, 1087 (C.D.Cal. Apr. 12, 1999); 3 Patry on Copyright § 9:5.

59. See Religious Tech. Ctr. v. Netcom On-Line Comm’n Servs., 907 F.Supp. 1361 (N.D.Cal. Nov. 21, 1995). This case predated the 1996 Communications Decency Act and specifically Section 230, which thereafter expressly protected online providers from liability for content posted by others. 47 USC 230.

60. The amount of the liability can vary depending on the state of mind of the infringer, however, with willful infringement allowing statutory damages of up to $150,000 and innocent or unknowing infringement being as low as $200. 17 USC § 504(c)(2).

61. Fiest Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 361 (1991) (citing Harper & Row Publishers, Inc. v. Nation Enters., 471 U.S. 539, 548 (1985)).

62. Skidmore v. Led Zeppelin, 952 F.3d 1051, 1064 (9th Cir. 2020).

63. Whelan Assocs. v. Jaslow Dental Lab, Inc., 797 F.2d 1222, 1227 n.7 (3d Cir. 1986) (“The independent creation of even identical works is therefore not a copyright infringement, and independent creation is a complete defense to a claim of copyright infringement.”) (cited by Blehm v. Jacobs, No. 09-cv-02865, 2011 U.S.Dist. LEXIS 105583 (D.Colo. Sept. 19, 2011)). See also Darrell v. Joe Morris Music Co., 113 F.2d 80 (2d Cir. 1940) (“Recurrence is not therefore an inevitable badge of plagiarism.”); Arnstein v. Edward B. Marks Music Corp., 82 F.2d 275 (2d Cir. 1936).

64. Skidmore, 952 F.3d at 1064 (citing Rentmeester v. Nike, Inc., 883 F.3d 1111, 1116–17 (9th Cir. 2018)). See also Autoskill, Inc. v. Nat’l Educ. Support Sys., Inc., 994 F.2d 1476, 1490 (10th Cir. 1993).

65. Autoskill, 994 F.2d at 1490.

66. Id.

67. See Blehm v. Jacobs, No. 09-cv-02865, 2011 U.S.Dist. LEXIS 105583 (D.Colo. Sept. 19, 2011).

68. ABKCO Music, Inc. v. Harrisongs Music, Ltd., 722 F.2d 988, 998 (2d Cir. 1983); Herbert Rosenthal Jewelry Corp. v. Kalpakian, 446 F.2d 738, 741 (9th Cir. 1971).

69. This is particularly true after lawsuits were filed against companies that publicly explained how their datasets were trained. See Anderson v. Stability AI, LTD, No. 3:23-cv-00201 (N.D.Cal. Jan. 13, 2023); Doe v. Github, Inc., No. 3:22-cv-06823 (N.D.Cal. Nov. 3, 2022); Getty Images v. Stability AI, No. 1:23-cv-00135 (D.Del. Feb. 3, 2023).

70. That is the angle taken by a fourth case that has joined the trio pending at the time of the first part of this article series. See Complaint at ¶ 40, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D.Cal. June 28, 2023). A fifth new case goes even further, arguing essentially that the training set should be assumed to include anything posted online by anyone. P.M. v. OpenAI LP, No. 3:23-cv-03199 (N.D.Cal. June 28, 2023).


72. McCamy, “How to Google Reverse Image Search . . . ,” Insider (June 15, 2023),

73. Kandyba, “OpenAI’s ChatGPT May Be Rejecting Prompts that Violate Copyright,” Wrap (Jan. 13, 2023), However, even where the companies running generative AI use text comparisons to block output that may infringe on copyright, copyrighted material can still be coaxed out of the model. For example, while ChatGPT will try to block recitations of passages from Frank Herbert’s Dune, it can be tricked into doing so by asking it to mix up the words used in the passage. See Deep, “A Cyrillic Solution Unveiled: ChatGPT Recites Dune’s Litany Against Fear,” Deepleaps (June 20, 2023),

74. See ClinicalIllusionist, Reddit post, (suggesting prompts are rejected where they reference copyrighted works).

75. This is based on the author’s own testing of the software.

76. See Siegel, “The AI Hype Cycle is Distracting Companies,” Harv. Bus. Rev. (June 2, 2023),; Vinsel, “Don’t Get Distracted by the Hype Around Generative AI,” MIT Sloan Mgmt. Rev. (May 23, 2023),

77. Alteson, “Keep Your AI Claims In Check,” FTC (blog) (Feb. 27, 2023),

78. Id. The FTC claims to be “focusing intensely on how companies may choose to use AI technology, including new generative AI tools.” Atleson, “The Luring Test: AI and the Engineering of Consumer Trust,” FTC (blog) (May 1, 2023),

79. Alteson, supra note 77.

[80]. CRS § 6-1-105(e). Other subsections might also apply, such as subjection (g) prohibiting false representations about services being of a particular quality or standard.

81. CRS § 6-1-105(e).

82. 15 USC § 45(a)(1).

83. 15 USC § 45(n).

84. FTC v. Gill, 265 F.3d 944, 950 (9th Cir. 2001).

85. While John Von Neumann and Alan Turing are widely regarded as the fathers of this field, the term “artificial intelligence” is often credited to John McCarthy of MIT. McCarthy et al., “A Proposal for the Dartmouth Summer Project on Artificial Intelligence,” AI Mag. (Aug. 31, 1955),

86. The word “computer” was, prior to the second half of the 20th century, a job title for human beings who carried out calculations. Williams, “Invisible Women: The Six Human Computers Behind the ENIAC,” Lifehacker (Nov. 10, 2015),

87. McCorduck, Machines Who Think 204 (2d ed. A K Peters/CRC Press 2004). The tendency to redefine intelligence in the face of increasing digital competence continues to this day, with academics arguing that LLMs are mere “stochastic parrots” that cannot possibly be considered “intelligent” because they merely predict the next word. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” ACM Conference on Fairness, Accountability, and Transparency (virtual event, Mar. 2021).

88. McCorduck, supra note 87.

89. Daniel, “Electronic Contracting Under the 2003 Revisions to Article 2 of the Uniform Commercial Code: Clarification or Chaos?” 20 Santa Clara High Tech L.J. 319, 330 (Jan. 2004) (citing Allen and Widdison, “Can Computers Make Contracts?,” 9 Harv. J.L. & Tech 25, 28–29 (1996), and Engle, “Smoke and Mirrors or Science? Teaching Law With Computers—A Reply to Cass Sunstein on Artificial Intelligence and Legal Science,” 9 Rich. J.L. & Tech. 2 (Winter 2002–03)).

90. Moriarty, supra note 1.

91. Amodei et al., “Learning From Human Preferences,” arXiv (June 13, 2017), Because humans are comparatively slow, this can be done in two stages. The human ranks the results from the LLM, but this ranking is fed not directly to the model but instead into an intermediate AI. The intermediate AI tries to generate its own model about what the human prefers and uses that model, in turn, to rank many, many more results from the LLM more quickly than a human could. Thus, the human trains the intermediate model, and the intermediate model trains the LLM, speeding up training.

92. Ji et al., “Survey of Hallucination in Natural Language Generation,” arXiv (Feb. 8, 2022), See also Weise and Metz, “When A.I. Chatbots Hallucinate,” N.Y. Times (May 9, 2023).

93. Bowman, “A New AI Chatbot Might Do Your Homework For You. But It’s Still Not an A+ Student,” NPR (Dec. 19, 2022),

94. Harrison, “When ChatGPT Writes Bios for People, They’re Littered With Fabrications,” Futurism (Feb. 25, 2023),; Zorach, “ChatGPT Fabricated a Plausible, But Incorrect, Biography of Me,” Handbuilt City (Mar. 19, 2023),; Huizinga, “We Asked an AI Questions About New Brunswick. Some of the Answers May Surprise You,” CBC News (Dec. 30, 2022),

95. Elliott, “It’s Way Too Easy to Get Google’s Bard Chatbot to Lie,” Wired (Apr. 5, 2023),

96. Mitchell, “Microsoft AI Chatbot Gets Into Fight With Human User,” N.Y. Post (Feb. 14, 2023),

97. Mata v. Avianca, Inc., __F.Supp.3d __, No. 22-cv-1461, 2023 U.S.Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023).

98. See Marcin, “Microsoft’s Bing AI Chatbot Has Said A Lot of Weird Things. Here’s a List,” Mashable (Feb. 16, 2023), Ongoing research suggests that LLMs may have specific failure modes unique to the fact that they analyze language and do not otherwise learn about the world. For example, LLMs may prefer memorized recitation of sequences over in-context instructions, McKenzie et al., “Inverse Scaling: When Bigger Isn’t Better,” arXiv (June 15, 2023),, or they may prioritize answers that match the semantics of the sentences provided over correctness. See “GPT Gets Irrational,” AI Explained,

99. Plato’s “Allegory of the Cave” refers to a theoretical set of people chained inside a cave and unable to see directly to the outside world, but able to see shadows of that world on the wall in front of them. Plato, Republic (514a–520a).

100. A recent high-profile example arose in New York, where a lawyer claimed he used ChatGPT to conduct legal research and it provided cases that sounded great but were entirely fictitious. Weiser and Schweber, “The ChatGPT Lawyer Explains Himself,” N.Y. Times (June 8, 2023). The judge was not impressed and issued sanctions under Rule 11. See Avianca, Inc, No. 22-cv-1461, 2023 U.S.Dist. LEXIS 108263. Before Colorado readers assume this problem is limited to less meticulous counsel in other states, note that this mistake has occurred right here in Colorado, too. See Order, Gates v. Chavez,No. 2022CV31345 (El Paso Cnty. Dist. Ct. filed May 5, 2023).

101. Disclaimer for online-ChatGPT,

102. Terms of Use,

103. Bard FAQ, See also Generative AI Additional Terms of Service (Mar. 14, 2023), (noting that Bard “may sometimes provide inaccurate or offensive content”). The unreliable nature of the technology has led one group to file a complaint with the FTC arguing that LLMs are a per se unfair and deceptive trade practice. CAIDP Complaint, In the Matter of OpenAI (Mar. 30, 2023),

104. It is risky to venture any predictions in this emerging field, but the author proposes that the real power of generative AI is as a new human interface device. Like the computer mouse or touch screens, generative AI allows users to operate the software simply by communicating with it in the same way they have already learned to communicate with other humans and with other tools humans use.

105. Singer, “New A.I. Chatbot Tutors Could Upend Student Learning,” N.Y. Times (June 8, 2023).

106. Browne, “The Problem With Mental Health Bots,” Wired (Oct. 1, 2022).

107. Wells, “Can a Chatbot Help People with Eating Disorders as Well as Another Human,” NPR (May 24, 2023),

108. Pejcha, “Meet Cupidbot, an AI Designed to Automate Straight Men’s Dating Life,” Lanvin (Mar. 16, 2023),

109. Cost, “I Dated ChatGPT’s AI Girlfriend—What Happened When I Broke Up With Caryn,” N.Y. Post (May 16, 2023)

110. Crain, “Why AI-Powered Customer Service and Support Are Crucial in 2023,” Forbes (May 10, 2023),

111. 17 USC § 512.

112. 47 USC § 230.

113. 17 USC § 512(a).

114. 17 USC § 512(a)(1).

115. Id.

116. Id.

117. Fair Hous. Council v., LLC, 521 F.3d 1157, 1169 (9th Cir. 2008).

118. Siberling, “Teaser’s AI Dating App Turns You Into a Chatbot,” TechCrunch (June 9, 2023),

119. Jungofthewon, “Elicit: Language Models as Research Assistants,” LessWrong (blog) (Apr. 9, 2022).

120. Fair Hous. Council, 521 F.3d at 1166.

121. Id.

122. There are emerging uses of LLMs that might avoid this problem by returning only links to third-party data, however. An LLM can be connected to additional programs that prompt the LLM repeatedly based on its prior answers with the goal of mimicking a “chain of thought” by humans. See Wei and Zhou, “Language Models Perform reasoning via Chain of Thought,” Google Research Blog (May 11, 2022); Raieli, “Multimodal Chain of Thoughts: Solving Problems in a Multimodal World,” Towards Data Science (blog) (Mar. 13, 2023), Perhaps engineers will be successful in using LLMs as a kind of plug-and-play brain or reasoning unit for larger software applications in the future, perhaps not. But that is not the kind of customer-facing LLM that is currently being used by businesses today.

123. Walters v. Open AI, LLC, No. 23-A-04860-2 (Gwinnett Cnty. Super. Ct. June 5, 2023).

124. Id. at ¶ 16.

125. Keohane v. Stewart, 882 P.2d 1293 (Colo. 1994).

126. N.Y. Times Co. v. Sullivan, 376 U.S. 254 (1964); CJI 22:1.

127. CRS § 13-25-125.5. Libel and slander, referred to in this statute, refer only to whether the defamatory statement was communicated in writing or verbally.

128. Gordon v. Boyles, 99 P.3d 75, 79 (Colo.App. 2004) (citing Restatement (Second) of Torts § 570).

129. Smith v. Stewart, 660 S.E.2d 822 (Ga.Ct.App. 2008); Muzikowski v. Paramount Pictures Corp., 322 F.3d 918, 925 (7th Cir. 2003).

130. Muzikowski, 322 F.3d at 921.

131. Id. at 925.

132. McDougal v. Fox News Network, LLC, 489 F.Supp.3d 174, 182 (S.D.N.Y. 2020).

133. Id. 183–84 (internal citations and quotation marks omitted).

134. Gordon, “Large Language Models Are Biased. Can Logic Help Save Them?” MIT News (Mar. 3, 2023),

135. Id. See also Getahun, “ChatGPT Could Be Used For Good, But Like Many Other AI Models, It’s Rife With Racist and Discriminatory Bias,” Insider (Jan. 16, 2023) (using clever prompting to get ChatGPT to associate being white and male with being a good scientist),

136. Prompt injection refers to using specific words in a prompt with the goal of drawing out specific, often unwanted, behavior from the model. The name comes from the computer science field, where an “injection” attack generally means tricking a computer into accepting code to execute when it should only be accepting data.

137. Deshpande et al., “Toxicity in ChatGPT: Analyzing Persona-assigned Language Models,” arXiv (Apr. 11, 2023),

138. Chopra et al., “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (issued by the US Bureau of Consumer Financial Protection, Department of Justice, EEOC, and FTC),

139. EEOC, “EEOC Sues iTutorGroup for Age Discrimination” (press release) (May 5, 2022),

140. CRS § 24-34-402.

141. CRS § 24-34-402(1)(d).

142. CRS § 24-34-402(1)(a).

143. CRS § 10-3-1104.9.

144. CRS § 6-1-1309; 4 CCR 904-3 § 8.05(C).

145. Dhanoa, “Making Mistakes With Machines,” 37 Santa Clara High Tech. L.J. 97 (2021), The article discusses a case from Singapore, B2C2 Ltd v. Quoine Pte Ltd, (2019) SGHC(I) 03, aff’d Quoine Pte Ltd v. B2C2 Ltd (2020) SGCA(I) 02.

146. Linarelli, “A Philosophy of Contract Law for Artificial Intelligence: Shared Intentionality,” in Ebers et al., eds., Contracting And Contract Law in the Age of Artificial Intelligence (Bloomsbury 2022), Linarelli argues that human intention is not really required for a “meeting of the minds,” citing to such eminent legal scholars as Judge Learned Hand and Judge Easterbrook. Id. It is thus unclear whether a generative AI response that would seem to satisfy the objective requirements for a meeting of the minds (i.e., a clear offer, exchange, and acceptance of specific consideration) would be invalid simply because it was automatic. See also Linarelli, supra note 51.

147. Bristol Anesthesia Servs., P.C. v. Carilion Clinic Medicare Res., LLC, No. 2:15-CV-17, 2017 U.S.Dist. LEXIS 147955 (E.D.Tenn. Sept. 13, 2017). But see Dhanoa, supra note 145 at 111 (noting that the Singapore decision of Quoine looked past the algorithm to the intentions of the programmer to determine whether there was a meeting of the minds).

148. Marquardt v. Perry, 200 P.3d 1126, 1131 (Colo.App. 2008).

149. Berg v. State Bd. of Agric., 919 P.2d 254 (Colo. 1996).

150. Choi et al., “ChatGPT Goes to Law School,” J. of Legal Educ. (forthcoming),

151. Katz, “GPT-4 Passes the Bar Exam,” SSRN (Mar. 15, 2023)

152. Kung et al., “Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models,” Plos Digit. Health (Feb. 9, 2023),

153. CRS § 12-5-101.

154. CRS § 12-240-135.

155. CRS § 12-100-116.

156. CRS § 10-2-417.

157. CRS § 13-1-127.

158. CRS § 12-20-407.

159. People v. Shell, 148 P.3d 162, 171 (Colo. 2006) (citing Denver Bar Ass’n v. Pub. Util. Comm’n, 391 P.2d 467, 471 (Colo. 1964)) (internal quotation marks omitted).

160. People v. Layton, No. 22PDJ032, 2023 Colo.Discpl. LEXIS 28, at *43 n.128 (Colo. Apr. 19, 2023) (citing In re Swisher, 179 P.3d 412, 417 (Kan. 2008)). See also People v. Adams, 243 P.3d 256, 266 (Colo. 2010).

161. See Conway-Bogue Realty Inv. Co. v. Denver Bar Ass’n, 312 P.2d 998 (Colo. 1957); Title Guar. Co. v. Denver Bar Ass’n, 312 P.2d 1011 (Colo. 1957).

162. Lola v. Skadden, Arps, Slate, Meagher & Flom, LLP, 620 Fed.Appx. 37, 44 (2d Cir. 2015). See Lanctot, “Scriveners in Cyberspace: Online Document Preparation and the Unauthorized Practice of Law,” 30 Hofstra L. Rev. 811 (2002).

163. Conway-Bogue, 312 P.2d at 1004–05.

164. This company’s website advertised a “Robot Lawyer” through at least mid-June of 2023. See Internet Archive Wayback Machine, More recently, the page has removed this reference. DoNotPay,

165. GPT3 Demo,

166. Allyn, “A Robot Was Scheduled to Argue in Court, Then Came the Jail Threats,” NPR (Jan. 25, 2023),

167. Id.

168. Millerking, LLC v. DoNotPay, Inc., No. 23-cv-863 (S.D.Ill. Mar. 15, 2023); Faridian v. DoNotPay, Inc., No. 23-604987 (Sup.Ct.Cal.S.F.Cnty. Mar. 3, 2023).

169. Though, of course, the professional had better be using their own professional judgment and not simply delegating all work to the generative AI. Weiser and Schweber, supra note 100. Law firms have heightened duties of competence, loyalty, and confidentiality that all impact how they should and should not implement this technology.

170. See Conway-Bogue, 312 P.2d at 1001, 1007.

171. The third part of this article series will address practical and ethical issues for lawyers who may wish to use generative AI in their practice.

172. Sha, “How to Run a ChatGPT-Like LLM on Your PC Offline,” Beebom (Mar. 29, 2023),

173. DeGeurin, “Oops: Samsung Employees Leaked Confidential Data to ChatGPT,” Gizmodo (Apr. 6., 2023),

174. Terms and Conditions § 3(c),; Bing Conversational Experiences and Image Creator Terms § 8,

175. Eliot, “Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law,” Forbes (Jan. 27, 2023),

176. Gorman, “Chat GPT: Business Use May Cause Loss of Trade Secret Protections, Waiver, of Privilege, and Other Harms,” Leon Cosgrove Jiménez, LLP (Mar. 8, 2023),

177. CRS § 7-74-102(4).

178. Partow-Navid and Salinas, “Spilling Secrets to AI: Does Chatting With ChatGPT Unleash Trade Secret or Invention Disclosure Dilemmas?,” JD Supra (Apr. 19, 2023)

179. In an interview conducted by the author, LexisNexis reported that its own AI service was developed using these ideas.