Seth Higgins: Harrisburg’s AI bills are a mixed bag
Most state governments across the country are considering laws to regulate the use and development of artificial intelligence. Pennsylvania is no different. In recent months, members of the General Assembly, spearheaded by State Representative Melissa Shusterman (D-157), have introduced three bills that would impact the use of AI within the Commonwealth. These bills range from good, to understandable but flawed, to downright awful.
Let’s start with the good.
HB 1993 — Regulating Use of AI in Mental Healthcare — largely lives up to its title. At its most basic, this bill would regulate the ways mental health professionals would be able to utilize AI in their practices and the corresponding disclosures they would have to provide patients.
It would allow mental health professionals to use AI for basic administrative tasks, such as scheduling, billing, and recordkeeping. The bill also anticipates the use of AI tools in recorded or transcribed sessions and requires explicit written consent before such tools can be used.
Additionally, the bill prohibits mental health professionals from using certain AI-enabled services and prohibits AI providers from marketing artificial intelligence systems as capable of providing therapy or psychotherapy services. In short, clinicians would be barred from letting AI participate directly in therapy itself.
The precise, targeted nature of this bill is its strength. It is well drafted and regulates the use of a tool by a profession that is already subject to extensive state oversight. In that way, it is not a bill regulating AI but rather an effort to regulate an existing profession.
Further, HB 1993 isn’t going after a made-up issue. There have been instances where people were harmed by AI systems engaging with users’ mental health concerns. While such uses of AI will be difficult to regulate outside a medical office, mental health professionals should not actively contribute to the problem.
The main weakness of this bill is that some of its terms and provisions are in tension with one another, or outright contradictory. For instance, the bill prevents a mental health professional from using AI to “detect emotions or mental states.” However, two of the “supplementary support” services mental health professionals can use AI for are “analyzing anonymized data to track client progress or identify trends, subject to review by a mental health professional” and “identifying and organizing external resources or referrals for client use.”
It is conceivable that performing those tasks would require an AI system to infer emotional states or mental health conditions. Without that ability, how would it meaningfully track progress or recommend appropriate resources? These terms are also subjective and ill-defined, meaning competing interpretations are inevitable.
Such concerns are not enough to scuttle the legislative effort, but the bill should be tightened prior to final passage.
Next is HB 2006 — Artificial Intelligence in Companionship Applications Safety Act.
In short, this bill would require AI developers and providers to build in protocols that identify suicidal or self-harm ideation in users. It also bars an AI system from assisting a user with a suicide attempt and requires it to refer a potentially suicidal user to a crisis center.
These are laudable goals, and this legislation attempts to address a pressing issue. This isn’t hypothetical: AI systems have given people suicide advice.
The core issue with this legislation is a doozy: it may violate the free speech protections of the First Amendment.
AI is developed and operated with computer code and algorithms, and the courts have held that code is speech. And the only way for an AI developer to comply with HB 2006 is by writing or developing government-mandated code and algorithms to detect and respond to certain user prompts. I suspect this issue will eventually be taken up by the federal courts, but as currently devised, there is a strong argument that this bill is government-compelled speech.
This concern isn’t hypothetical. One section of the bill requires “a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human” at “the beginning of a session with an AI companion and once every three hours during the session.”
Courts sometimes treat consumer disclosure requirements differently than other forms of compelled speech, particularly in the context of commercial products and consumer protection. However, HB 2006 goes beyond a simple disclosure requirement by mandating repeated, ongoing notifications within an interactive conversation. Forcing recurring disclaimers into a conversation-based AI product raises compelled-speech concerns and could invite constitutional scrutiny under the First Amendment.
The government cannot force a novelist to insert a disclaimer every ten pages. HB 2006 attempts something similar by requiring AI systems to repeatedly interrupt a conversation with a government-mandated message.
The next issues are more practical: regulatory capture and the “whack-a-mole” conundrum inherent in efforts to regulate the digital world.
First, large tech companies will have the resources to navigate and comply with requirements such as those in HB 2006; smaller developers may not. This could inadvertently entrench big tech and reduce competition.
Next is the whack-a-mole nature of governing online spaces. There is a reasonable, though contested, argument that government attempts to moderate online speech push discourse into darker, more sinister fringes of the internet. Relatedly, efforts to regulate online piracy have led mainly to an arms race between regulators and the operators of shady websites, which often exist outside of American jurisdiction. These aren’t arguments against regulation per se, but they suggest such regulation may be less effective and more costly than imagined.
And this brings us to the bad: HB 2175 — AI Chatbot Consumer Protection.
A portion of this bill attempts to limit the circumstances in which AI companies can sell and share healthcare information they gain from users of their AI systems. This is an understandable goal, but monetizing user data is what allows most online services to be “free.” I also imagine enforcement will be difficult, and the bill may violate the First Amendment in ways similar to HB 2006.
It appears the drafters meant to target mental-health chatbots. The problem is that the bill doesn’t actually say that. Mental health is included as an example—not a limit—and the definition is broad enough to sweep in almost any consumer-facing AI tool.
This leads to the core issue: the bill restricts how AI may interact with users.
For instance, the bill prohibits AI from advertising “a specific product or service to a consumer” and it does not allow AI to utilize user input to “determine whether to display an advertisement for a product or service to the consumer” or “determine a product, service or category of product or service to advertise to the consumer.”
This portion of the bill is so flawed it makes me think the drafters don’t understand how most users of AI interact with the technology.
For example, I recently uploaded a few pictures of my living room to an AI system, explained my aesthetic preferences, and asked the AI for recommendations for light fixtures and blinds. The AI then provided some options, explained its rationale, and linked to websites to purchase those products. Similarly, I’ve also provided AI with a list of movies and genres I like and asked for recommendations.
HB 2175 could easily be interpreted to prohibit these uses of AI, or at least discourage them through legal risk. One could argue the uses I describe above aren’t advertisements because no third party paid the AI developer to bring those products to my attention. That may be the bill’s intention, but as written it would effectively block users from engaging with AI for a host of everyday purposes.
Lastly, HB 2175 would create a bureaucratic minefield. For example, it requires AI suppliers to provide “consumers with instructions on the safe use of the chatbot,” explain how the AI “prioritizes consumer mental health and safety over engagement metrics or profit,” and requires “measures to prevent discriminatory treatment of consumers,” among other requirements.
As with HB 2006, HB 2175 favors big tech, since only large, wealthy institutions would be able to navigate this regulatory labyrinth.
Governments and their citizens often react to the opportunities and challenges that accompany new technology with new legislation. This is understandable and often necessary. But poorly drafted legislation can trample citizens’ rights or strangle fledgling technological breakthroughs.
For these reasons, HB 1993 should be passed after further refinement. HB 2006 should be abandoned in its current iteration, but other legislative approaches should be taken to pursue those same goals. HB 2175 should be tossed in the Susquehanna River.
Seth Higgins is a native of Saint Marys, Pennsylvania. He currently resides in Philadelphia.
