By Riley Lankes

SLAAIT Policy Researcher


Introduction

As governments worldwide have begun to grapple with the emergence of AI, the lion’s share of media attention has focused on high-level legislative efforts. Projects such as the European Union’s historic Artificial Intelligence Act or the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy AI will have a significant impact on the future of AI development and regulation. However, focusing only on high-level efforts like these risks missing the much more varied efforts to pass AI-related legislation at the state and local level. In the United States, 32 states and the District of Columbia have attempted to enact AI-related laws, covering a wide range of topics.

This briefing provides an overview of state-level AI legislation in the U.S., including legislation that has been enacted, legislation that has failed, and legislation that has recently been proposed. This briefing also offers analysis of emerging trends in state-level legislation, with the goal of providing a clear picture of how states are attempting to craft AI policy. This analysis was performed using a custom AI analyzer called SAIPA (State AI Policy Analyzer), pulling from information on current state-level AI legislation.[1]

Fast Facts

  • At time of writing, 33 U.S. States (including the District of Columbia) have attempted to pass some form of AI-related legislation.
  • At time of writing, 16 U.S. States have successfully enacted AI-related legislation.
  • California, Colorado, and Texas lead the nation in the passage of AI-related legislation, with each state having enacted 2 pieces of legislation related to AI at time of writing.
  • In total, approximately 89 distinct pieces of AI-related legislation have been introduced in State Legislatures across the U.S.

Emerging Trends

  • In their AI-related legislative efforts, States appear to be focusing primarily on Consumer Privacy and Data Protection.
    • 11 States have enacted legislation that directly addresses consumer data protection or privacy rights in the context of AI, or that mandates disclosures and impact assessments to safeguard personal data against potential misuse or bias in automated processes.
    • 13 States have proposed legislation in this area.
    • Consumer Privacy and Data Protection is also the most common topic among AI-related legislation that has successfully been enacted.
  • A significant amount of legislation deals with the Ethical Use of AI, such as laws aimed at preventing discrimination and ensuring fairness in automated decision-making. This includes measures to regulate AI in hiring, insurance, and other consumer services where biased AI could have significant negative impacts on individuals.
    • Most of the active legislation in this category is proposed, not yet enacted. This may indicate that States are beginning to move into this issue area.
    • Illinois is the only state to have enacted Ethical AI legislation, via its AI Video Interview Act.
    • New York City has successfully enacted New York City Local Law 144, which mandates bias audits for AI-enabled tools. However, this is only city-level legislation at the time of writing.
    • At time of writing, 16 distinct pieces of legislation dealing with the Ethical Use of AI have been proposed, but not yet enacted, across 9 different States.
  • Several pieces of Influential Legislation have emerged at the state level, serving as models for legislation subsequently passed in other states. These pieces of influential legislation include:
    • California Consumer Privacy Act (CCPA): As one of the earliest comprehensive consumer privacy laws in the United States, the CCPA has been highly influential, setting a precedent for how personal information is handled and protected. Various states have proposed or enacted legislation inspired by the CCPA, aiming to give consumers more control over their personal data.
    • Virginia Consumer Data Protection Act (VCDPA): Modeled in part after the CCPA, the VCDPA has also inspired similar legislation in other states, such as Colorado and Connecticut. These laws generally include provisions for consumer rights to access, correct, delete, and opt out of the processing of personal data.
    • Colorado Privacy Act (CPA): Since its enactment, the CPA has been referenced in legislative efforts in other states looking to bolster consumer privacy protections. Its comprehensive approach to data privacy, including specific duties for data controllers and processors, has been seen as a model for other jurisdictions.

Assessing Legislative Failures

  • Legislation that failed to pass can still be informative. By looking at patterns among failed legislation, we can make educated guesses about areas where States are struggling to regulate AI.
  • Among the pieces of AI-related legislation that have failed, common topics often include ambitious measures to regulate or restrict the use of AI in specific, high-stakes areas.
  • Legislation related to Automated Decision Making often struggled to pass.
    • This includes laws that attempted to regulate the use of Automated Decision-Making Tools in Employment.
    • This category also includes legislation which attempted to regulate the use of ADTs (automated decision tools) beyond employment-related decisions.
      • California AB 331 was an attempt to craft comprehensive regulation for the use of ADTs, requiring any entity using them to perform an impact assessment and submit it to the California Civil Rights Department. The legislation would also have required anyone deploying an ADT to “…notify any natural person that is the subject of the consequential decision that the deployer is using an ADT.”
  • Legislation which attempted to regulate the use of AI in Healthcare also appears to have struggled to pass thus far.
    • The regulatory approaches among these bills vary: whereas Texas HB 4695 attempted to ban the use of AI in mental health services, the legislation from Massachusetts and Illinois instead attempts to regulate when the technology can be used in healthcare and to require that its use be disclosed to patients.

Author’s Note on Methods and AI


Our belief within SLAAIT is that one of the best ways to understand AI is to use it; this is the “Deconstruct” part of our “Dream, Dread, Deconstruct” philosophy. In keeping with this philosophy, AI was used to perform the analysis found in this briefing.

Analysis was performed by a custom GPT called SAIPA (State AI Policy Analyzer), built using GPT-4. This custom GPT was given a knowledge base consisting of snapshots of AI-related legislation published by BCLP, an international law firm that (at the time of writing) maintains the most current available data on state-level AI legislation. The BCLP snapshots were supplemented by internally authored snapshots of legislation that BCLP did not include in its list, drawn from internal research on AI legislation. SAIPA was also given access to the full text of all legislation mentioned in these snapshots. The GPT was then configured to answer questions about the text-based data in its knowledge base. Analysis performed by SAIPA was validated by comparing its results to analysis that I performed by hand. To qualify this statement: my training is in politics; I hold a BA and an MA in international relations and have experience performing policy analysis in both academic and professional contexts.
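SAIPA itself is a custom GPT configured through ChatGPT rather than written as code, but the underlying pattern is straightforward to reproduce. Below is a minimal sketch of a comparable analyzer using the OpenAI Python SDK; the snapshot directory, system prompt, model name, and example question are illustrative assumptions, not SAIPA’s actual configuration.

```python
# Minimal sketch of a SAIPA-like policy Q&A tool, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
# The directory layout and prompts are hypothetical, not SAIPA's own setup.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Load plain-text snapshots of state AI legislation into one knowledge base.
SNAPSHOT_DIR = Path("snapshots")  # hypothetical folder of .txt snapshots
knowledge_base = "\n\n---\n\n".join(
    p.read_text(encoding="utf-8") for p in sorted(SNAPSHOT_DIR.glob("*.txt"))
)

SYSTEM_PROMPT = (
    "You are a state AI policy analyzer. Answer questions strictly from the "
    "legislation snapshots provided. If the snapshots do not contain the "
    "answer, say so rather than guessing."
)

def ask(question: str) -> str:
    """Answer a policy question against the snapshot knowledge base.

    Note: a real knowledge base would be chunked and retrieved per question
    rather than passed whole, to stay within the model's context window.
    """
    response = client.chat.completions.create(
        model="gpt-4",  # model choice mirrors the briefing; swap as needed
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Snapshots:\n{knowledge_base}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Which states have enacted AI-related consumer privacy laws?"))
```

In practice, a knowledge base of this size would be chunked and retrieved per question rather than passed whole, which is broadly what the custom GPT file-upload feature handles on its own.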

All analysis performed by SAIPA included in this briefing has been validated. If you’re interested in testing SAIPA for yourself, the GPT will be made available to all SLAAIT partners soon. Go forth and deconstruct!


[1] Analysis performed by SAIPA was validated by comparing it to analysis performed by SLAAIT Policy Researcher Riley Lankes.