Virginia’s New AI Executive Order Is A Model For Other States to Build On

February 16, 2024

Virginia’s Governor Glenn Youngkin recently issued Executive Order 30, a five-part directive intended to boost the use of artificial intelligence (AI) within state government agencies, law enforcement, and education. Notably, the executive order recognizes the opportunities presented by AI and avoids treating it as a fundamentally dangerous technology. It also backs up the policy with $600,000 in proposed funds to launch new AI pilots. Other states should take note and replicate these measures to accelerate their own statewide government deployment of AI.

Governor Youngkin’s executive order includes five key steps. First, it establishes a top-level set of guiding principles for the ethical use of AI by the government. This policy includes principles such as ensuring departments use well-documented AI models, humans validate outputs, and agencies avoid using unexplainable AI for decision-making. Importantly, the policy also directs government agencies to establish a business case for using AI that results in positive outcomes for citizens, such as reducing waiting times, cutting costs, and improving government services. By emphasizing both ethical use of AI and effective use of AI, the policy takes a balanced approach. It also creates a clear pathway for obtaining approval to use an AI system, sets transparency requirements to catalog its use internally and disclose its use publicly, and sets controls for protecting private data. Given that these are the questions many government agencies will have to wrestle with when deploying AI, setting out a clear policy on how to handle these questions will likely expedite deployment of the technology.

Second, the executive order creates detailed IT standards for government agencies and their suppliers to use for AI products and services. These standards build on the guiding principles by creating specific requirements for how agencies should manage their new and existing AI systems and integrate them into their enterprise architecture. By creating these standards, the executive order will streamline the implementation of AI across state agencies and ensure a consistent and secure approach. The policy also directs the Virginia IT Agency to establish a center to develop and share best practices among agencies on how to best use AI. While there is room for debate on some of the specific requirements (e.g., limiting the use of facial recognition only to authentication purposes), this document is quite impressive in both scope and detail and builds on existing federal initiatives, such as the NIST AI Risk Management Framework, to avoid creating duplicative standards.

Third, it establishes guidelines for the use of AI in K-12 and post-secondary education. These guidelines include a set of overarching guiding principles, strategies for successful integration of AI in education, and specific roles and responsibilities for education stakeholders, such as state education agencies and school boards. The principles are generally reasonable, such as “do no harm” and “harness AI to empower student success,” but they occasionally detour into more fear-based rhetoric, such as calling for AI to “augment, not replace humans.” (It was likely an educator who wrote the line arguing that “[AI] will never replace teachers who provide wisdom, context, feedback, empathy, nurturing and humanity in ways that a machine cannot.”) But after this throat clearing, the heart of the education guidelines consists of a set of strategies for integrating AI into education. At the top of the list is the recommendation to give educators “hands-on experience” and the warning not to “rely on rumors or superficial perceptions” when considering how to use AI in the classroom. Instead, it directs school leaders to “actively facilitate opportunities for teacher teams to directly explore various AI tools” and encourages providing professional development and knowledge sharing among educators.

Fourth, the executive order directs the Virginia Secretary of Public Safety and Homeland Security, in partnership with the attorney general, to create standards for AI use by state law enforcement agencies within nine months. The order also instructs the secretary to provide guidance, upon request, to local law enforcement agencies to assist them in deploying AI. Given the unique concerns around law enforcement use of AI, such as potential increased surveillance or unjust outcomes, carving out a separate policy for law enforcement allows the government to proceed with other AI applications more quickly.

Finally, the executive order establishes an AI task force to provide ongoing recommendations on the state’s use of AI. This effort should ensure that these initial efforts are merely first steps and that state leaders continue to evaluate AI pilots and adapt their policies as the technology matures and government leaders gain more experience.

Virginia’s executive order is a potential blueprint for AI implementation in state government that focuses on accelerating deployment of the technology, not just managing risks. Other states that want to make use of this technology should take similar proactive measures to begin the process of integrating AI into government agencies, public education, and law enforcement.
