Virginia's Proposed AI Legislation Fails to Hit the Mark

Governor Glenn Youngkin faces a decision on Monday, March 24, 2025: whether to sign or veto the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). The bill aims to combat AI bias, but its rules are inconsistent, its enforcement is unworkable, and it misconstrues what fairness requires.

HB 2094 would regulate high-risk AI systems, defined as those that influence consequential decisions in sectors such as housing, employment, education, healthcare, lending, parole, and legal services. Developers would be required to document a system's purpose, risks, and performance, while deployers would have to adopt risk management policies, conduct detailed impact assessments, and notify individuals when AI is used in decisions about them. If a decision causes harm, deployers must offer an explanation and a chance to appeal. The Attorney General would enforce these rules, with civil penalties for violations.

The bill's fundamental flaw is conceptual. It seeks to ensure that AI systems treat everyone equally, but never asks whether that treatment is any good. An AI system a landlord uses to screen tenants, for instance, need only avoid discriminating between racial groups; nothing requires its decisions to be accurate. If the system wrongly denies rental opportunities, it complies with the law so long as it errs evenly across groups.
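To make that distinction concrete, consider a minimal sketch in Python (all figures are invented for illustration): a screening model that wrongly denies qualified applicants at the same rate in two demographic groups passes an equal-treatment test even though nearly a third of its denials are mistakes.

    # Hypothetical tenant-screening outcomes; all numbers invented.
    groups = {
        "group_a": {"qualified_applicants": 100, "qualified_denied": 30},
        "group_b": {"qualified_applicants": 100, "qualified_denied": 30},
    }

    for name, g in groups.items():
        false_denial_rate = g["qualified_denied"] / g["qualified_applicants"]
        print(f"{name}: false-denial rate = {false_denial_rate:.0%}")

    # Both groups see an identical 30% false-denial rate, so the system
    # "treats everyone equally" -- yet it still wrongly denies housing to
    # almost a third of qualified applicants in each group. Equal error
    # distribution says nothing about decision quality.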

The bill also draws seemingly arbitrary compliance lines. It exempts banks, insurers, and some healthcare providers that are already covered by sector-specific regulations, yet it would bind landlords, schools, and employers, even though they too are governed by their own civil rights and consumer protection laws. The inconsistency is stark: an insurer using AI to guide home-loan decisions is exempt, while a housing association using AI to screen tenants is covered. Both decisions affect access to housing, and both sectors already face anti-discrimination oversight.

If Virginia wishes to further fairness, it should focus on areas where state-level intervention can genuinely help. Instead of settling for systems that merely distribute errors evenly, Virginia should demand that any high-risk AI system used by state agencies meet robust performance standards, such as accuracy and error rates broken down by age, race, and gender. Clear performance standards would improve outcomes across the board and ensure taxpayer money is not wasted on flawed decision-making tools.
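As a rough sketch of what such disaggregated reporting could look like (the records, group labels, and field names below are hypothetical), a procurement review might require accuracy and error counts computed per demographic group rather than a single overall average:

    from collections import defaultdict

    # Hypothetical evaluation records: (group, true_label, predicted_label).
    records = [
        ("18-34", 1, 1), ("18-34", 0, 1), ("35-64", 1, 0),
        ("35-64", 1, 1), ("65+", 0, 0), ("65+", 1, 1),
    ]

    stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "fn": 0})
    for group, truth, pred in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(truth == pred)
        s["fp"] += int(truth == 0 and pred == 1)  # false positive
        s["fn"] += int(truth == 1 and pred == 0)  # false negative

    # Report accuracy and error counts for each group separately, so a weak
    # system cannot hide poor performance on one group behind a strong
    # overall average.
    for group, s in sorted(stats.items()):
        print(f"{group}: accuracy={s['correct'] / s['n']:.0%}, "
              f"false positives={s['fp']}, false negatives={s['fn']} (n={s['n']})")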

However, the state government should not impose pre-deployment performance standards on non-government AI systems. The National Institute of Standards and Technology (NIST) is already engaged in evaluating AI systems and informing federal regulators. That work requires technical expertise and coordination across the entire private AI sector, making national consistency and scale essential, and only the federal government can provide them.

Some argue that NIST's AI work is being eroded, leaving states no choice but to step in and ensure AI benefits people. However, well-intentioned state involvement may only make matters worse. Virginia's bill (like the similar Texas bill) assumes that transparency alone will produce meaningful accountability, but that approach merely entrenches flawed systems by creating the appearance of oversight without delivering it.

Furthermore, the bill demands that deployers complete detailed impact assessments before launching any high-risk AI system, with regular updates thereafter. These reports must describe the system's purpose, benefits, inputs and outputs, limitations, risks, mitigation steps, monitoring plans, and accuracy metrics. The Attorney General's office is expected to digest this information, oversee compliance, and act on violations. But the office lacks the resources and expertise for the task, so compliance would devolve into a box-ticking exercise in which companies flood the system with paperwork that no one scrutinizes.
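For a sense of the reporting burden, here is a hypothetical sketch of the fields such an assessment would have to cover, modeled loosely on the bill's list of required disclosures (the class and field names are illustrative, not drawn from the bill's text):

    from dataclasses import dataclass

    @dataclass
    class ImpactAssessment:
        """Illustrative record of HB 2094-style disclosure requirements."""
        system_purpose: str                 # intended use and deployment context
        benefits: list[str]                 # claimed benefits of the system
        inputs_and_outputs: str             # data consumed, decisions produced
        known_limitations: list[str]        # documented failure modes
        risks: list[str]                    # foreseeable discrimination risks
        mitigation_steps: list[str]         # measures taken to reduce those risks
        monitoring_plan: str                # post-deployment monitoring approach
        accuracy_metrics: dict[str, float]  # e.g., {"overall_accuracy": 0.91}

Each deployer would file something like this for every high-risk system it operates, paperwork the Attorney General's office would then need the capacity to actually interrogate.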

In conclusion, Governor Youngkin should veto HB 2094, not to halt progress, but to insist on a version that truly delivers it. As written, the bill is confusing, unworkable, and risks entrenching flawed systems under the guise of fairness.

[Image Credits: Jordan Vonderhaar/Bloomberg]

  1. The High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) would regulate high-risk AI systems that influence consequential decisions in sectors such as housing, employment, education, healthcare, lending, parole, and legal services, requiring developers to document each system's purpose, risks, and performance, and deployers to adopt risk management policies.
  2. The fundamental issue with HB 2094 lies in its flawed conception: it seeks to ensure equal treatment but neglects whether that treatment is fair, so an AI system that distributes errors evenly complies with the law even while failing to deliver accurate, high-quality decisions.
  3. The bill also draws inconsistent compliance lines, exempting banks, insurers, and some healthcare providers already covered by sector-specific regulations while binding landlords, schools, and employers, even though those sectors are likewise governed by civil rights and consumer protection laws.
  4. Virginia should instead focus on improving outcomes through robust performance metrics for high-risk AI systems used by state agencies, while leaving the evaluation of private-sector AI systems to the National Institute of Standards and Technology (NIST), which has the technical expertise and national scale to inform federal regulators.
