Utah Becomes First State To Enact AI-Centric Consumer Protection Law | Skadden, Arps, Slate, Meagher & Flom LLP

On March 13, 2024, Utah enacted the Utah Artificial Intelligence Policy Act (UAIP), which imposes certain disclosure requirements on entities using generative AI tools with their customers, and limits an entity's ability to "blame" generative AI for statements or acts that constitute consumer protection violations.

Companies subject to the UAIP will need to ensure they have an appropriate disclosure regime in place, and other companies should consider whether the UAIP approach is a good business practice they want to adopt. The UAIP goes into effect on May 1, 2024.

Defining Generative AI

The UAIP requirements concern only generative AI, which the act defines as "an artificial system that (a) is trained on data; (b) interacts with a person using text, audio or visual communication; and (c) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight." In effect, this means the use of AI to generate content, such as chatbot responses. Non-generative AI tools, such as ones that might list product recommendations based on customer interests, are not subject to the UAIP.

Disclosure Obligations

Under the UAIP, those in "regulated occupations" (i.e., those that require a person to obtain a license or state certification), such as most health care professionals, must "prominently" disclose that a consumer is interacting with generative AI, or with materials created by generative AI, at the beginning of any communication. This disclosure must be made verbally before an oral exchange and through electronic messaging before written exchanges.

Although the UAIP does not specify what "prominently" entails, entities or persons in regulated occupations should assume that merely disclosing the use of generative AI in a privacy policy or terms of use likely will not be sufficient to satisfy this obligation.

Those outside of "regulated occupations" but subject to Utah consumer protection laws must "clearly and conspicuously" disclose the use of generative AI if asked or prompted by a consumer. The law does not specify how a consumer may pose this question, nor does it dictate how such disclosure should occur. However, given the case law to date on what is required for "clear and conspicuous" notice to consumers (such as those cases analyzing what is required to create a binding agreement), businesses should assume that merely directing an inquiring consumer to a website's terms of use or privacy policy will not be sufficient.

Companies Are Responsible for Generative AI Output

Under the UAIP, a company that has violated a Utah consumer protection law cannot defend itself by arguing that it was the generative AI tool that made the violative statement, took the violative act or was used in furtherance of the violation. In effect, companies subject to the UAIP should view statements "made" by a generative AI tool no differently than statements made by their own employees.

Fines and Penalties

While the UAIP does not provide for a private right of action, the Utah Division of Consumer Protection (UDCP) may impose an administrative fine of up to $2,500 per violation, and courts are empowered, in actions brought by the UDCP, to impose such fines, enjoin the unlawful activity and order disgorgement of any money received in violation of the UAIP. The Utah Attorney General may also seek $5,000 per violation from any person who violates such an administrative or court order.

Other Provisions of the UAIP

While the UAIP imposes the foregoing obligations on AI usage, it also seeks to encourage AI innovation. To that end, the UAIP creates an Office of Artificial Intelligence Policy, which is tasked with creating and administering an "Artificial Intelligence Learning Laboratory Program" (AI Lab) and with consulting with businesses and other stakeholders about AI regulatory proposals.

The AI Lab provides a mechanism for companies to apply for 12 months of "regulatory mitigation" (with a single 12-month extension) while they develop AI systems. Such mitigation can include reduced fines for violations and cure periods before fines are assessed. The program is effectively a regulatory sandbox for AI development in Utah.

Key Points

Companies that are subject to the UAIP need to put a compliant disclosure regime in place by May 1, 2024. For those in regulated occupations, this means including a prominent disclosure before the user engages with any generative AI content, such as a prominent text statement displayed before an AI-enabled chatbot launches. Companies in non-regulated occupations will need to establish a way to detect whether a user has asked if they are engaging with a generative AI tool, an inquiry that could take many forms, and be able to respond to that prompt. Companies should recognize that such inquiries could be posed to the generative AI tool itself, and such tools will therefore need to be programmed to respond "clearly and conspicuously" to that inquiry. Employees who interact with consumers will also need to be trained on how the company is using generative AI and how to respond to inquiries they may receive.

Companies that are not subject to the UAIP may want to consider whether establishing a disclosure regime to respond to consumer inquiries as to whether they are interacting with a generative AI tool is a good business practice that fosters transparency with their customers.

The provision of the UAIP that prohibits companies from "blaming" generative AI for a statement made or action taken serves as an important guidepost for companies developing AI policies. In general, companies should not assume that they will be able to treat statements generated by AI as if they were made by an unaffiliated third party responsible for its own actions. For example, in February 2024, a British Columbia Civil Resolution Tribunal found that Air Canada negligently misrepresented its bereavement airfare policy because of a statement made by the company's customer service chatbot. Companies should seek out AI tools that are developed and trained in a manner that minimizes the risk of inaccurate information being presented to customers, and should also consider adding disclaimers stating that AI-generated content is for general information purposes only and that the company's (human-generated) official terms and policies govern.

We anticipate that, in the absence of federal legislation, individual states will continue to enact laws regulating the use of AI, including laws requiring disclosures as to how AI is being used and making companies responsible for statements made by generative AI tools. This could result in a patchwork of AI laws with which companies must comply, increasing costs and requiring companies to establish and maintain robust AI compliance programs.

