Pioneering Safe, Equitable, and Accountable AI Governance in the United States
Artificial Intelligence
Security & Privacy
Innovation
Summary
This insight highlights Executive Order 14110, a landmark directive that shapes the United States' strategy for artificial intelligence governance and development. The order addresses key areas including safety, innovation, worker support, equity, privacy, and federal leadership. It aims to position the U.S. as a global leader in responsible AI development while balancing innovation with ethical considerations and risk mitigation.
Key insights:
Comprehensive Framework: Executive Order 14110 establishes a holistic approach to AI governance, addressing safety, innovation, equity, and global leadership.
Safety and Security: The order emphasizes developing guidelines, standards, and best practices for AI safety, including red-teaming techniques and monitoring of dual-use AI models.
Innovation and Competition: Initiatives focus on attracting global AI talent, strengthening intellectual property protections, and supporting small businesses and startups in AI development.
Worker Support: The order aims to understand AI's labor market effects, develop employer guidelines for AI deployment, and ensure fair compensation in AI-augmented workplaces.
Equity and Civil Rights: Measures are outlined to prevent discrimination in AI applications across various sectors, including criminal justice, government programs, and housing.
Privacy Protection: The order strengthens privacy impact assessments, advances privacy-enhancing technologies, and develops guidelines for differential privacy protections.
Federal Government Leadership: A coordinated approach to AI governance across agencies is established, including the appointment of Chief Artificial Intelligence Officers and the creation of an interagency council.
Global Leadership: The U.S. commits to fostering international collaboration on AI governance, advancing global AI standards, and addressing cross-border AI risks to critical infrastructure.
Introduction
Executive Order 14110, The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, was issued by President Joseph R. Biden, Jr. on October 30, 2023. The order establishes a framework for governing AI. Acknowledging AI's enormous potential as well as its significant risks, the executive order aims to maximize its benefits while tackling issues of safety, equity, innovation, and accountability. Through this comprehensive plan, the Biden administration seeks to position the United States as a global leader in the development of responsible AI, ensuring that these advances complement the country's strategic interests and moral principles.
The executive order lays out a unified, government-wide strategy for AI governance that covers a wide range of areas, including essential infrastructure, healthcare, education, economic development, and national security. It emphasizes the importance of supporting innovation while upholding strict regulations to preserve civil rights, privacy, and public safety. In addition to addressing the immediate concerns of AI misuse, the order provides the groundwork for long-term, fair, and reliable progress in the field by providing federal agencies, private-sector stakeholders, and foreign partners with effective steps.
The main tenets and guidelines of this executive order—ensuring the safety and security of AI technology, encouraging innovation and competition, assisting workers, advancing equity and civil rights, safeguarding privacy, advancing the use of AI by the federal government, enhancing American leadership overseas, and implementation—will be covered in detail in this article. These parts offer a thorough examination of how the US intends to take the lead in the responsible development and governance of AI, guaranteeing that it is a tool for advancement and the good of the public.
Purpose
The goal of promoting the safe, secure, and reliable development of artificial intelligence stems from the technology's dual nature: it is a transformational tool for human advancement and, at the same time, a potential source of serious harm to society. On the one hand, AI presents unmatched chances to tackle urgent global issues, boost productivity, and spark cross-sector innovation, resulting in a more prosperous, just, and secure future. The ethical application of AI has the potential to improve quality of life and boost economies in a number of ways, from improving healthcare and education to expediting public services and promoting scientific research.
The same technology, however, poses major risks to privacy, national security, and workforce stability and can worsen systemic biases, deepen disparities, spread misinformation, and violate civil rights if misused or developed without sufficient controls. In order to create governance frameworks that not only reduce these risks but also maximize the advantages of the technology, the initiative emphasizes the necessity of a comprehensive and cooperative effort involving government agencies, private businesses, academic institutions, and civil society. This strategy puts the US at the forefront of global leadership in establishing ethical, creative, and security-focused standards while also acknowledging the rapid development of AI capabilities. Fundamentally, the program embodies the idea that for AI to become a force for justice, opportunity, and communal advancement, its evolution must be directed by the ideals, values, and varied inventiveness of the society it serves.
Policy and Principles
A policy framework that guarantees the responsible use of AI while maximizing its transformational potential must guide its research and deployment. To promote AI governance, the Federal Government stresses a thorough and coordinated strategy that involves all facets of society under this Executive Order. Establishing guiding principles to reduce risks, encourage innovation, and preserve American ideals like accountability, openness, and equity is a top priority for this plan. This strategy seeks to align AI governance with the broader objectives of economic growth, national security, and social equality by taking into account the views of a wide range of stakeholders, including government, the private sector, academia, civil society, labor unions, and overseas allies. The principles below seek to establish a balanced framework that guarantees the safe, secure, and advantageous development of AI technology, acknowledging the numerous challenges AI poses, from ethical quandaries to competitive risks:
1. Safety and Security
To guarantee their dependability and security, AI systems must undergo stringent assessments and standardized testing. The Federal Government works to reduce risks while promoting public trust by tackling issues like adversarial manipulation, cybersecurity flaws, and threats to key infrastructure. Effective labeling techniques and post-deployment performance monitoring will help guarantee that AI performs as planned and stays transparent to users. This principle emphasizes how crucial it is to anticipate and mitigate AI's hazards without limiting its advantages.
2. Innovation and Competition
The US needs to support a fair and competitive AI environment if it wants to continue to lead the world. To support innovation, safeguard intellectual property, and encourage small-business involvement, investments in research, education, and capacity-building programs are essential. To give entrepreneurs and up-and-coming developers a level playing field, the Federal Government also works to stop anti-competitive actions by dominant companies.
3. Worker Support
Policies that safeguard employee rights, improve job quality, and reduce detrimental disruptions are necessary for the integration of AI into the workforce. The Federal Government hopes to foster the development of new opportunities while making sure AI enhances human labor rather than replaces it by working with employers, educators, and labor unions. All workers will gain fairly from initiatives to make skill training accessible and to modify job duties for the AI era.
4. Equity and Civil Rights
AI must be developed and used in ways that respect civil rights and promote equity. The federal government will make sure AI systems do not perpetuate prejudice or discrimination, particularly in crucial areas like housing, healthcare, and employment. This principle, which builds on initiatives like the Blueprint for an AI Bill of Rights, stresses thorough assessments and accountability mechanisms to guard against injustices and advance justice.
5. Consumer Protection
AI-enabled goods and services need to respect user safety and adhere to current consumer protection regulations. In sectors like healthcare, education, and financial services, where improper use of AI could have serious repercussions, safeguards against fraud, bias, and discrimination are particularly important. In addition to increasing customer access and reducing costs, responsible AI should improve the quality of products and services.
6. Privacy Protection
In the era of artificial intelligence, the federal government will give first priority to defending individual privacy and civil liberties. Agencies will reduce the dangers of personal data misuse by utilizing measures like privacy-enhancing technologies. To preserve public confidence and uphold First Amendment rights, this principle seeks to guarantee that data collection and usage are legal, safe, and transparent.
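One privacy-enhancing technique frequently cited in this context is differential privacy. As a rough illustration only (not any agency's actual implementation), the sketch below adds calibrated Laplace noise to a counting query so that no single record can be inferred from the published statistic; the `epsilon` value and the survey records are hypothetical choices for demonstration:

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: true count plus Laplace noise.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Example: a private count over hypothetical survey records.
records = [{"age": a} for a in (23, 35, 41, 52, 67, 70)]
noisy = dp_count(records, lambda r: r["age"] >= 40, epsilon=0.5)
```

Smaller `epsilon` values give stronger privacy but noisier answers; production systems also track the cumulative privacy budget across queries, which this sketch omits.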
7. Federal Government Leadership
When it comes to implementing and regulating AI, the federal government aims to set an example. This entails luring top talent, updating IT infrastructure, and giving federal workers thorough training so they can comprehend the advantages and dangers of AI. The government hopes to establish a standard for moral and successful AI deployment by exhibiting responsible AI use.
8. Global Leadership
The United States acknowledges its influence on the global AI scene and pledges to engage with other nations to create common norms and standards. The Federal Government aims to reduce the hazards associated with AI and highlight its potential to solve global issues by promoting international discussions and collaborations. The objective is to guarantee that the advantages of AI are shared fairly across the globe.

By striking a balance between innovation and caution, this policy and its guiding principles ensure AI is used responsibly to advance national security, economic prosperity, and society while respecting fundamental rights and freedoms.
Ensuring the Safety and Security of AI Technology
In order to handle the many issues raised by this quickly developing technology, the United States has developed a framework for ensuring the safety and security of AI. This project places a high priority on the adoption of guidelines, regulatory standards, and industry-wide best practices. The effort aims to reduce the risks of cybersecurity flaws, threats to essential infrastructure, and dual-use AI applications that could have nefarious or unexpected effects by including important government organizations, commercial sector players, and foreign partners.
The strategy also calls for the creation of red-teaming techniques, improved regulatory supervision, and methodical assessments of AI capabilities so that such problems can be detected before they materialize. This concerted effort emphasizes how crucial it is to incorporate safety standards and ethical considerations into AI research procedures in order to preserve national security, maintain public confidence, and guarantee responsible AI progress:
1. Guidelines, Standards, and Best Practices for AI Safety
The National Institute of Standards and Technology (NIST), together with other federal agencies, is responsible for developing a comprehensive framework to address the urgent need for standardized safety measures. This entails expanding the Secure Software Development Framework to cover dual-use models and creating a companion resource to the AI Risk Management Framework (NIST AI 100-1) specifically for generative AI. By standardizing procedures that guard against vulnerabilities, these resources aim to guarantee that AI systems function within safe and regulated boundaries. Furthermore, guidelines and benchmarks for auditing AI capabilities will be created, with a focus on high-risk domains like biosecurity and cybersecurity.
2. Red-Teaming and Testing Environments
A key element of this approach is AI red-teaming, which allows developers to mimic hostile environments in order to find and fix possible vulnerabilities. In cooperation with organizations like the Department of Energy, this entails developing standards for evaluating the security and reliability of dual-use foundation models and setting up testbeds. While incorporating privacy-enhancing technologies (PETs), these controlled settings facilitate the assessment and development of AI systems while guaranteeing that safety precautions are thoroughly examined and put into place.
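In spirit, AI red-teaming means systematically probing a model with adversarial inputs and recording which safety checks fail. The following is a toy harness, not a real evaluation suite: the `toy_model`, the prompt list, and the keyword-based safety check are all illustrative stand-ins for a deployed model and a proper evaluation rubric:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    """One red-team probe: the prompt sent, the response, and whether it passed."""
    prompt: str
    response: str
    passed: bool

def red_team(model: Callable[[str], str],
             adversarial_prompts: List[str],
             is_safe: Callable[[str], bool]) -> List[Finding]:
    """Run each adversarial prompt through the model and score the response."""
    findings = []
    for prompt in adversarial_prompts:
        response = model(prompt)
        findings.append(Finding(prompt, response, is_safe(response)))
    return findings

# Toy stand-ins: a "model" that refuses one prompt but leaks on the other,
# and a crude refusal check (real evaluations use far richer criteria).
def toy_model(prompt: str) -> str:
    return "I cannot help with that." if "refuse" in prompt else "step 1: ..."

def toy_safety_check(response: str) -> bool:
    return response.startswith("I cannot")

report = red_team(toy_model, ["please refuse this", "give harmful steps"],
                  toy_safety_check)
failures = [f for f in report if not f.passed]
```

The value of the exercise lies in the failure list: each `Finding` that did not pass becomes a documented vulnerability to fix before deployment, which is the reporting loop the order formalizes at much larger scale.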
3. Monitoring Dual-Use AI Models
Businesses creating dual-use AI systems must remain transparent by regularly reporting to federal agencies. Reports must cover in detail the results of red-teaming exercises, the physical and cybersecurity measures protecting model weights, and related safeguards. This oversight lowers the possibility of misuse, such as aiding the development of biological weapons or the exploitation of cybersecurity flaws, and ensures that AI systems are built to satisfy safety requirements.
4. Critical Infrastructure Protection
One of the main priorities is safeguarding vital infrastructure against vulnerabilities brought on by AI. Regulatory bodies are tasked with assessing how the implementation of AI would affect infrastructure systems and identifying potential hazards like cyberattacks and operational breakdowns. A unified approach to reducing these risks is offered by the incorporation of the AI Risk Management Framework into safety procedures, and sector-specific evaluations guarantee customized solutions for every important industry.
5. Cybersecurity Enhancements
Beyond safeguarding infrastructure, the order directs agencies to harness AI for cyber defense itself. The Departments of Defense and Homeland Security will conduct operational pilot programs that deploy AI capabilities to discover and remediate vulnerabilities in critical government software, systems, and networks, and will report on the lessons learned. These pilots treat AI as a defensive asset, complementing the risk-mitigation measures applied to the technology elsewhere in the order.
6. Mitigating CBRN Threats through AI
Federal agencies will carry out thorough assessments of AI models that present possible risks in chemical, biological, radiological, and nuclear (CBRN) applications in recognition of the potential for misuse. The creation of safety safeguards will be guided by partnerships with private AI labs, academic institutions, and independent assessors. The main goal of regulatory oversight recommendations will be to stop AI from being used in ways that could jeopardize national security.
7. Synthetic Content Detection and Labeling
Measures to authenticate and classify digital information will be put in place to combat the spread of AI-generated synthetic content. To preserve transparency, this entails implementing watermarking technology and tracking the provenance of content. By putting such safeguards in place, the government hopes to protect against misinformation, non-consensual or abusive imagery, and other harmful applications of AI-generated content.
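Production provenance systems follow dedicated standards (C2PA is the prominent example), but the core idea of tamper-evident labeling can be sketched with an HMAC: a generator signs content plus metadata, and a verifier detects any alteration. The key handling and metadata fields below are purely illustrative, not any standardized scheme:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-not-for-production"  # illustrative only

def label_content(content: str, metadata: dict) -> dict:
    """Attach a provenance label: metadata plus an HMAC over content + metadata."""
    payload = json.dumps({"content": content, "meta": metadata}, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "meta": metadata, "tag": tag}

def verify_label(labeled: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    payload = json.dumps({"content": labeled["content"], "meta": labeled["meta"]},
                         sort_keys=True)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, labeled["tag"])

# A generator labels its output; any tampering breaks verification.
labeled = label_content("an AI-generated image caption",
                        {"generator": "example-model", "synthetic": True})
```

Note that this metadata-based approach is distinct from watermarking proper, which embeds the signal imperceptibly in the content itself so it survives stripping of attached metadata; the two are typically deployed together.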
8. Regulating Dual-Use Foundation Models
In order to develop voluntary and regulatory procedures to address the dangers associated with widely available dual-use models, public consultations will be performed. This entails examining the advantages these models offer for innovation as well as evaluating the possibility of abuse through the modification or elimination of protections. Policy decisions will be guided by recommendations to strike a balance between security and innovation.
9. Managing Federal Data for AI Training
Agencies will perform security checks of their data assets to stop the nefarious use of federal data to train dangerous AI systems. Guidelines will be created to evaluate risks and make sure that publicly available data does not unintentionally aid in the development of CBRN weapons or autonomous offensive cyber capabilities.
10. Developing a National Security Memorandum
A National Security Memorandum will provide a unified strategy for handling AI's security threats in intelligence and national defense. In order to handle hostile use cases and safeguard American interests both domestically and internationally, this document will offer recommendations on using AI capabilities.
The framework emphasizes proactive risk management and the advancement of international standards that are consistent with the country's values and security interests in order to establish a strong and safe environment for AI development and implementation.
Promoting Innovation and Competition
Maintaining global leadership and fostering innovation in a fair and competitive environment depends heavily on the development of AI technology. The United States has made the creation of laws and initiatives to promote innovation while upholding fair competition a top priority because it recognizes the revolutionary potential of artificial intelligence. These initiatives seek to increase access to state-of-the-art research resources, draw in top talent, and encourage the responsible deployment of AI in a variety of industries. The plan also places a strong emphasis on fostering an atmosphere in which underprivileged communities, small enterprises, and startups may actively support and profit from AI-driven expansion. The program aims to create a vibrant ecosystem where innovation is not only safeguarded but also enhanced through cooperation and fair access by tackling systemic issues including monopolistic tactics and resource inequalities:
1. Attracting Global AI Talent
Attracting and retaining international talent in AI and emerging technologies is a key component of the plan. For qualified noncitizens, this entails streamlining visa processing, guaranteeing prompt access to visa appointments, and expediting the application process for researchers, students, and skilled professionals. The Secretary of State must update the Exchange Visitor Skills List to prioritize abilities that are essential to the national interest. A domestic visa renewal program for academic and STEM personnel is being implemented concurrently to avoid interrupting ongoing research and innovation. Efforts are also being made to create comprehensive informational materials that help international AI specialists navigate U.S. visa procedures and job opportunities, facilitating smooth integration into the innovation ecosystem.
2. Public-Private Partnerships and AI Research Resources
The National AI Research Resource (NAIRR) pilot initiative is being developed to establish an integrated infrastructure for distributed computing and data resources in order to close the gap between research and commercialization. In order to guarantee access to the resources required for AI innovation, the initiative promotes cooperation between federal agencies, the commercial sector, and research groups. The creation of more National AI Research Institutes and regional innovation engines that concentrate on addressing workforce demands, societal issues, and AI-driven solutions further supports this strategy. These facilities act as focal points for developing innovative AI applications and expanding public-private collaborations.
3. Strengthening Intellectual Property Protections
The United States Patent and Trademark Office (USPTO) is creating guidelines for patent examiners and applicants to address the relationship between artificial intelligence and intellectual property (IP). In addition to offering instances of how AI systems support the creative process, this guidance will address questions about AI-assisted inventorship. Additionally, improved investigative frameworks, training initiatives, and information-sharing systems are being used to counteract IP theft linked to AI. These initiatives seek to safeguard innovators' rights while promoting ethical AI development.
4. Supporting Small Businesses and Startups
The Small Business Administration (SBA) is giving financing and resources for AI-related projects top priority because it recognizes how important small businesses are to fostering innovation. As part of this, Small Business AI Innovation and Commercialization Institutes have been established to offer training and technical support. Additionally, accelerator programs are receiving incentives for incorporating AI-related courses and assisting in the commercialization of AI technology. To ensure that small firms can use AI for growth and operational efficiency, the SBA is updating the requirements for eligibility in its loan and funding programs to account for costs associated with AI adoption.
5. Driving Innovation in Healthcare and Climate Change
The Department of Health and Human Services (HHS) is giving priority to grants and partnerships that advance AI-enabled technologies for healthcare equity and personalized medicine in order to fully realize the potential of AI in healthcare. In a similar vein, the Department of Energy (DOE) is using AI to improve grid reliability, address climate issues, and hasten the rollout of sustainable energy. Among these efforts is the creation of AI models to improve national resilience to the effects of climate change, reduce environmental risks, and expedite the permitting process.
6. Promoting Competition in the AI Marketplace
Federal agencies are charged with combating monopolistic behavior and promoting competition in AI-related markets to guarantee fair competition. To ensure that startups and small enterprises can successfully compete, the Federal Trade Commission (FTC) is urged to exercise its power to stop collusion and advance ethical business practices. In order to ensure that creative players may prosper in a competitive environment, steps are being taken to give startups access to resources including specialized equipment, datasets, and workforce development programs in the semiconductor industry, which is crucial to AI technology.
By taking these all-encompassing steps, the US hopes to establish itself as a leader in AI innovation worldwide and promote an atmosphere that is competitive, inclusive, and centered on ethical technology development. This strategy guarantees that the advantages of AI are widely shared, promoting both economic expansion and societal advancement.
Supporting Workers
Workers in a variety of industries may face both possibilities and challenges as a result of the labor market being redefined by the quick adoption of artificial intelligence in the workplace. The United States is dedicated to developing laws that minimize workforce disruptions while creating fair chances for people to prosper in an AI-driven economy, acknowledging the significant effects of AI on employment. The government aims to guarantee that AI is used responsibly in ways that improve, rather than compromise, workers' well-being by implementing strategic programs that evaluate labor-market consequences, broaden educational and training opportunities, and create strong workplace protections. These initiatives seek to secure essential worker rights and protections while preparing the workforce for new opportunities through partnerships with companies, labor organizations, and educational institutions:
1. Understanding the Labor-Market Effects of AI
The Chairman of the Council of Economic Advisers is charged with creating a thorough report outlining the labor-market impacts of AI in order to assess its influence on employment trends. Potential disruptions and industries where AI may result in job creation or displacement will be highlighted in this report. At the same time, the Secretary of Labor will evaluate whether federal programs—like unemployment insurance and Workforce Innovation and Opportunity Act initiatives—are prepared to assist workers who have been displaced by emerging technologies. In order to prepare workers for future prospects, the report will also examine avenues for education and training in AI-related disciplines and offer solutions for improving existing programs, including legislative proposals.
2. Developing Employer Guidelines for AI Deployment
To guarantee that AI is applied in ways that put the welfare of employees first, the Secretary of Labor will create and disseminate best practices and principles for employers. Important topics will be covered by these rules, such as reducing the likelihood of job displacement, advancing fair labor practices, and guaranteeing openness in the way AI systems gather and utilize employee data. Particular suggestions will try to strike a balance between the necessity to safeguard pay, health, safety, and privacy in the workplace and the deployment of AI technologies. In order to create workplaces where AI improves job quality and opportunities, employers will be urged to incorporate these ideas into their daily operations.
3. Ensuring Fair Compensation in AI-Augmented Workplaces
To safeguard workers' rights in workplaces where AI is used for monitoring or augmenting tasks, the Secretary of Labor will issue clear guidance reinforcing employers' obligations to comply with federal labor laws. This includes adherence to the Fair Labor Standards Act, which mandates appropriate compensation for all hours worked, regardless of AI's role in task evaluation or monitoring. These measures are designed to prevent AI systems from inadvertently undermining labor protections and ensure that workers are fully compensated for their contributions.
4. Building a Diverse AI-Ready Workforce
By utilizing its existing fellowship programs and awards, the National Science Foundation (NSF) will give priority to funding workforce development and education efforts relevant to artificial intelligence. Working with other federal agencies, the NSF will identify additional ways to fund the development of an inclusive workforce with the skills needed for AI-related jobs. The goal of these initiatives is to equip a diverse workforce with the skills necessary to succeed in a changing labor market driven by technological advancement.
The government shows its dedication to safeguarding workers from the dangers of AI while allowing them to take advantage of its potential advantages through these programs. These steps are intended to guarantee that the development of AI acts as a catalyst for societal advancement and economic empowerment by tackling employment disruptions, encouraging fair workplace policies, and increasing educational opportunities.
Advancing Equity and Civil Rights
In order to ensure that technology breakthroughs do not reinforce or worsen discrimination, the integration of artificial intelligence (AI) across multiple domains requires a commitment to equity and the preservation of civil rights. The federal government's emphasis on incorporating safeguards in AI applications to eliminate biases, advance fairness, and preserve justice is motivated by this imperative. The administration hopes to maximize AI's potential to improve public well-being while minimizing harm by tackling the possible discriminatory effects of AI in vital systems including criminal justice, government programs, housing, and employment. These actions highlight how crucial openness, responsibility, and inclusion are to the development, application, and supervision of AI. These are some measures to advance equity and civil rights:
1. Strengthening Civil Rights Protections in the Criminal Justice System
The Attorney General is entrusted with organizing federal efforts to prevent discrimination associated with automated systems, acknowledging the dangers of bias and injustices made worse by AI in criminal justice. Federal civil rights offices will convene in a joint meeting to explore ways to stop algorithmic discrimination and raise public awareness of possible AI risks. A thorough analysis will also review AI's role in risk assessment, surveillance, sentencing, parole, and other areas, striking a balance between the need for efficiency and strong privacy and civil liberties protections. Additionally, efforts will concentrate on providing law enforcement with the technological know-how required to employ AI responsibly and handle rights violations resulting from its abuse.
2. Ensuring Equity in Government Programs and Benefits
Federal authorities are tasked with ensuring equity and preventing discrimination when implementing AI in public benefit initiatives. A strategy to assess the use of automated technologies in public benefit administration will be released by the Secretary of Health and Human Services (HHS), with an emphasis on fair access, human oversight, and decision-making openness. The Secretary of Agriculture will also provide state and local officials with instructions on how to reduce bias in benefit programs and guarantee that recipients have access to human review of decisions. These actions are intended to improve government service delivery's accountability and fairness.
3. Promoting Fairness in Housing and Financial Markets
Federal authorities are urged to use AI techniques to detect biases in tenant screening systems, appraisal procedures, and underwriting models in order to fight discrimination in housing and financial services. To ensure compliance with laws like the Fair Housing Act, the Consumer Financial Protection Bureau (CFPB) and the Department of Housing and Urban Development (HUD) will release recommendations to combat discriminatory practices in digital advertising and tenant screening. These initiatives seek to establish best practices for the equitable and legal application of AI in these fields and to lessen inequities that affect protected groups.
4. Protecting the Rights of Individuals with Disabilities
People with disabilities face particular risks from AI applications that use biometric data, such as eye tracking or gait analysis. The Architectural and Transportation Barriers Compliance Board will engage with the public and offer technical advice on the appropriate use of these technologies in order to address these issues. By preventing discriminatory outcomes and guaranteeing access to necessary services, this effort seeks to maximize the benefits of AI for people with disabilities.
5. Nondiscrimination in Employment Practices
The Secretary of Labor will create guidelines for federal contractors within a year to make sure AI-powered employment practices adhere to nondiscrimination laws. In addition to addressing the dangers of algorithmic biases that could disfavor protected groups, this endeavor will advance fair employment practices. These policies seek to promote an inclusive labor market by placing a strong emphasis on equity in technology-based employment systems.
The federal government shows its dedication to integrating human rights and equity into the core of AI deployment with these all-encompassing initiatives. These efforts seek to guarantee that AI functions as a tool for advancement, justice, and societal inclusion by combating discrimination, encouraging transparency, and cultivating cooperation amongst agencies and stakeholders.
Protecting Consumers, Patients, Passengers, and Students
While AI has great promise for revolutionizing various industries, it also carries several serious hazards, such as the possibility of discrimination, privacy violations, and inadvertent harm. A proactive strategy that incorporates safety, equity, and accountability into AI deployment is necessary to safeguard the public from these threats while fostering AI-driven innovation. By encouraging openness, responsible supervision, and cooperation with stakeholders from a variety of industries, the federal government is dedicated to protecting consumers, patients, passengers, and students. These programs seek to guarantee that AI is applied to improve public welfare without endangering safety or rights:
1. Consumer Protections and Regulatory Oversight
Independent regulatory agencies are urged to use their full authority to address AI-related risks, including privacy invasion, discrimination, and fraud. Agencies are tasked with clarifying how existing law applies so that organizations deploying AI perform due diligence, maintain transparency, and provide clear explanations of their AI systems. This includes addressing financial stability concerns and emphasizing the accountability of third-party AI service providers, ensuring a comprehensive framework for consumer protection.
2. Enhancing Healthcare Safety and Equity
The Department of Health and Human Services (HHS) is leading efforts to ensure the safe and equitable use of AI in healthcare and public health. The newly formed HHS AI Task Force will develop a comprehensive plan addressing privacy requirements, equity in AI models, and predictive technologies. Long-term monitoring systems will assess AI's real-world performance, particularly its effects across demographic groups. Guidance for healthcare providers will focus on transparency in AI-enabled decision-making and compliance with nondiscrimination regulations. An AI safety program will also monitor and correct problems in clinical AI deployments, sharing best practices to prevent patient harm.
3. AI in Transportation
Through initiatives such as cross-modal working groups and pilot programs, the Department of Transportation (DOT) will assess how AI is being integrated into transportation systems. These efforts will evaluate AI's effects on safety and efficiency as well as policy questions raised by technologies such as autonomous vehicles. Drawing on the expertise of its advisory committees and ARPA-I, DOT will also prioritize grant funding for promising AI applications, study transportation-specific challenges, and develop a coherent plan for integrating AI across transportation ecosystems.
4. Advancing AI in Education
To address the effects of AI in education, the Department of Education will create tools and policies that ensure AI is applied responsibly and fairly. The development of an "AI toolkit" will provide educational leaders with practical guidance on implementing AI systems that enhance security and trust while adhering to privacy laws. Additionally, the project seeks to protect vulnerable communities and ensure AI improves learning environments without introducing harmful or discriminatory practices.
5. Strengthening Communications Networks
The Federal Communications Commission (FCC) will examine AI's potential to improve communications networks, including spectrum management and network security. Development of next-generation technologies such as 6G and Open RAN, which incorporate AI for greater efficiency and resilience, will be a priority. The FCC is also tasked with preventing AI-facilitated robocalls and robotexts, using advanced technologies to shield consumers from unsolicited communications and sustain confidence in communications networks.
By taking these specific actions, the government hopes to strike a balance between innovation and strong safeguards, promoting public confidence in AI technologies while making sure they are created and implemented with safety, equity, and accountability as top priorities. These initiatives show a dedication to protecting the rights and interests of every person while utilizing AI's potential to enhance lives in healthcare, transportation, education, and other areas.
Protecting Privacy
Given AI's ability to process enormous volumes of personal data and infer sensitive information, preserving individual privacy has become critical in an era of rapidly expanding AI deployment. Recognizing the dual challenge of protecting privacy while fostering innovation, the federal government is developing comprehensive procedures to ensure that AI systems respect privacy rights and mitigate the risks of data misuse. These efforts include strengthening privacy-enhancing technologies (PETs), advancing research to meet emerging threats, and setting robust standards for the handling of personal data:
1. Evaluation of Commercially Available Information (CAI)
The Office of Management and Budget (OMB) will evaluate how federal agencies acquire and use CAI, especially data containing personally identifiable information (PII). To improve accountability and transparency in agency reporting, this assessment will cover data obtained from brokers and processed by contractors; activities related to national security are excluded. To address privacy risks amplified by AI, OMB will also review existing guidelines for handling such data and consider updating them.
2. Strengthening Privacy Impact Assessments
To strengthen privacy protections, OMB will publish a Request for Information (RFI) examining how privacy impact assessments, governed by the E-Government Act of 2002, can better address the risks posed by artificial intelligence. The feedback will guide revisions to agency procedures, ensuring that privacy safeguards remain flexible and effective in an AI-driven environment. Follow-up measures include issuing revised guidance and consulting with relevant stakeholders to improve privacy practices.
3. Guidelines for Differential Privacy Protections
On behalf of the Department of Commerce, the National Institute of Standards and Technology (NIST) will develop comprehensive guidelines for evaluating the effectiveness of differential privacy techniques in AI systems. By addressing common risks and outlining key considerations for deploying differential privacy safeguards, these guidelines will offer a uniform foundation for improving privacy protections across federal agencies.
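To make the concept concrete, one widely used differential privacy technique of the kind such guidelines would evaluate is the Laplace mechanism. The sketch below is illustrative only; the function names, data, and epsilon value are our own, not drawn from the order or from NIST guidance. It releases a noisy count whose noise scale is calibrated to the query's sensitivity:

```python
import random

def laplace_noise(scale):
    # Laplace(0, scale) noise, drawn as the difference of two
    # independent Exp(1) variates (a standard identity).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def dp_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    record changes the true count by at most 1, so Laplace noise
    with scale 1/epsilon satisfies epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative query: how many records report an age over 40?
# Smaller epsilon means stronger privacy but a noisier answer.
ages = [23, 35, 45, 52, 61, 29, 48]
noisy_answer = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

The privacy parameter epsilon governs the trade-off: smaller values add more noise and stronger protection. This is a minimal sketch of the idea, not a production-grade differential privacy library.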
4. Advancing Privacy-Enhancing Technologies (PETs)
In order to further PETs research, the Department of Energy and the National Science Foundation (NSF) will form a Research Coordination Network (RCN). This network will prioritize converting research into useful applications, encourage the creation of standards, and help privacy researchers collaborate. Furthermore, the NSF will work with agencies to find ways to incorporate PETs into their operations, utilizing knowledge gained from global contests such as the U.S.-U.K. PETs Prize to guide best practices.
With these programs, the government hopes to fully address privacy concerns in AI systems, protecting people's data rights while promoting responsible innovation. These initiatives demonstrate a dedication to building public confidence in AI technology and creating a privacy-focused framework for their application.
Advancing Federal Government Use of AI
By integrating AI across agencies, the Federal Government aims to harness its transformative potential, improving governance, efficiency, and innovation while managing risks and maintaining public trust. These initiatives rely on a structured framework for guidance, accountability, and talent acquisition to improve public service delivery and address the challenges of technology deployment. The plan emphasizes coordinated leadership, rigorous oversight, and a workforce with strong AI capabilities.
1. Coordinating AI Governance Across Agencies
Under the order, the Director of the Office of Management and Budget (OMB) will convene an interagency council, chaired by OMB with the Office of Science and Technology Policy (OSTP) as vice-chair, to accelerate the adoption of AI across Federal agencies, excluding national security systems. Agency heads will appoint Chief Artificial Intelligence Officers (CAIOs) to oversee AI integration, foster innovation, and manage related risks. Agencies will also establish internal AI Governance Boards to ensure multi-stakeholder input and adherence to defined standards.
2. Issuing Comprehensive Guidance for AI Use
Within 150 days, OMB will publish guidance describing AI management practices, including required risk-management procedures aligned with OSTP's Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. These include public consultation, data quality evaluation, bias reduction, AI performance monitoring, and human oversight of consequential decisions. Agencies will need to identify high-impact AI applications, address barriers to adoption, and establish procedures for assessing and safeguarding generative AI outputs to prevent misuse or harm.
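One of the practices named above, bias reduction, is often operationalized with simple disparity metrics. As a hedged illustration (the function and the example data are hypothetical, not prescribed by OMB's guidance), the sketch below computes the gap in positive-outcome rates across demographic groups, a common first check on an AI-assisted decision system:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups.

    decisions: iterable of 0/1 model outcomes.
    groups: parallel iterable of group labels for each subject.
    A gap near 0 suggests similar treatment across groups; a large
    gap flags a disparity worth investigating (not proof of bias).
    """
    totals = {}  # group -> [count, positives]
    for d, g in zip(decisions, groups):
        stats = totals.setdefault(g, [0, 0])
        stats[0] += 1
        stats[1] += d
    rates = [pos / n for n, pos in totals.values()]
    return max(rates) - min(rates)

# Example: group "a" receives a positive decision 75% of the time,
# group "b" only 25% of the time.
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
# gap == 0.5
```

Demographic parity is only one of several fairness metrics, and which one applies depends on context; the point here is that "bias reduction" can be backed by concrete, auditable measurements.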
3. Enhancing Transparency and Compliance Monitoring
OMB will create a system allowing agencies to track AI adoption, governance, and compliance with federal policies. To improve accountability and public trust in AI programs, agencies will be required to submit annual inventories of their AI use cases along with their risk management strategies.
4. Facilitating Secure and Responsible Use of Generative AI
Agencies are urged to employ generative AI responsibly by putting in place targeted risk assessments and procedures that ensure compliance with cybersecurity, privacy, and data protection regulations. A prioritization methodology for assessing generative AI tools, such as large language models and image generators, will be developed to guide authorizations under the Federal Risk and Authorization Management Program.
5. Accelerating AI Talent Acquisition and Workforce Training
An AI and Technology Talent Task Force will spearhead the hiring and retention of AI specialists across Federal departments in recognition of the urgent need for AI expertise. The measures include increased use of fellowship programs, direct-hire authority for technical professions, and pooled hiring actions. Training programs will give Federal workers—including non-technical staff—the fundamental knowledge of AI they need to evaluate possibilities, reduce risks, and improve service delivery.
6. Modernizing AI Procurement and Infrastructure
To make it easier for agencies to obtain commercial AI capabilities, the General Services Administration (GSA) will create acquisition tools. Guidelines will encourage thorough recording of acquired AI, independent assessment of vendor claims, and incentives for ongoing AI system development.
7. Expanding AI Use in Mission-Critical Applications
The Technology Modernization Fund encourages agencies to give priority to supporting AI initiatives that improve mission delivery. To optimize operational effectiveness and public benefit, these projects will incorporate AI capabilities into high-impact fields like infrastructure planning, healthcare, and defense.
By promoting a unified approach to AI governance, the Federal Government aims to harness AI's transformative potential while upholding ethical principles, mitigating risks, and building public confidence. These actions demonstrate a commitment to innovation consistent with the values of accountability, transparency, and equity.
Strengthening American Leadership Abroad
The United States is committed to being a world leader in the responsible development and application of artificial intelligence. By addressing AI-related challenges, building strong global frameworks, and encouraging international cooperation, the United States seeks to shape AI governance worldwide. These initiatives focus on managing AI's risks, maximizing its benefits, and promoting ethical, rights-respecting practices, helping allies and international partners deploy trustworthy AI systems:
1. Fostering International Collaboration on AI Governance
The Secretary of State will lead global initiatives to increase bilateral and multilateral discussions on AI policy, working with important government agencies. These efforts seek to advance common regulatory and accountability standards and improve partners' comprehension of U.S. AI-related policies. Through these partnerships, the United States hopes to promote voluntary pledges from other countries, similar to those made by American businesses, to mitigate the risks associated with AI and guarantee worldwide agreement on its ethical application.
2. Advancing Global AI Standards
The Department of Commerce will lead efforts, in collaboration with foreign partners and standards bodies, to establish a coordinated global approach to AI standards. This includes developing consensus standards on key topics such as AI terminology, data security, risk mitigation, and trustworthiness. Guided by the NIST AI Risk Management Framework, the United States will advance these standards as the cornerstone of safe and effective AI deployment worldwide. A strategic plan for these initiatives will be developed within 270 days, followed by reports on priority actions.
3. Publishing a Global AI Development Playbook
Within a year, the Secretary of State and the Administrator of the United States Agency for International Development (USAID) will create an AI in Global Development Playbook in coordination with other agencies. This resource will use the NIST AI Risk Management Framework to incorporate AI governance concepts into global development contexts. In order to promote safe and rights-affirming AI practices overseas, it will include insights from AI applications in social, economic, and technical development.
4. Creating a Global AI Research Agenda
A Global AI Research Agenda will be created to direct AI-related research in global settings. The goals and guiding principles for guaranteeing the long-term and advantageous global adoption of AI will be delineated in this agenda. Additionally, it will discuss the global labor market implications of AI and offer ways to reduce risks and guarantee advantages that are equitable for all areas.
5. Addressing Cross-Border AI Risks to Critical Infrastructure
Because AI could be used to damage critical infrastructure, the Department of Homeland Security (DHS) and the State Department will spearhead global efforts to enhance AI safety and security for vital systems. Within 270 days, a strategic plan will be developed to align international initiatives with the AI safety standards outlined in previous directives. DHS will also report on measures taken to reduce cross-border vulnerabilities to U.S. infrastructure, ensuring resilience against malicious or inadvertent uses of AI.
By spearheading worldwide efforts in AI governance, the United States aims to set high standards for ethical development, foster international collaboration, and create a more secure and equitable AI-driven future. These actions address global risks, promote confidence in AI technologies, and reaffirm the United States' dedication to innovation.
Implementation
The Executive Office of the President formed the White House Artificial Intelligence Council (White House AI Council) to ensure the coordinated implementation of AI-related policies throughout the Federal Government. The Council is responsible for coordinating agency efforts to develop, communicate, and carry out AI initiatives, including those described in this executive order. Chaired by the Assistant to the President and Deputy Chief of Staff for Policy, the Council's members include senior Cabinet officials, such as the Secretaries of Defense, Treasury, Commerce, and Energy, along with the directors of key organizations including the Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF). The Council's flexibility to create subgroups for specific tasks supports smooth interagency cooperation and timely industry participation, strengthening the Federal Government's cohesive strategy for promoting AI governance, innovation, and deployment.
Conclusion
The issuance of Executive Order 14110 represents a turning point in the regulation of artificial intelligence, demonstrating the United States' commitment to ensuring that this transformative technology is used for the good of society while guarding against its inherent risks. By outlining a thorough, multidimensional framework, the directive addresses critical areas including innovation, competitiveness, equity, privacy, worker support, and global leadership. This balanced strategy highlights the need for responsible, transparent, and ethical development and shows that the federal government recognizes AI's dual capacity to advance society and exacerbate existing problems.
Through its extensive directives, the executive order lays the foundation for a future in which artificial intelligence serves not only as a tool for innovation but as a catalyst for equity and the common good. By encouraging cooperation among government organizations, commercial firms, international allies, and civil society, it aims to establish an ecosystem in which AI promotes economic development, strengthens national security, and protects fundamental rights. As the United States navigates the challenges of AI governance, this policy sets a standard for leading by example, ensuring that technological advances remain consistent with the country's ideals of justice, accountability, and inclusivity. This executive order offers not just a regulatory milestone but a plan for a fair and sustainable AI-driven future.
References
“HHS AI Use Cases.” HealthIT.gov, www.healthit.gov/hhs-ai-usecases.
“Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Federal Register, 8 Dec. 2020, www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.
“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Federal Register, 1 Nov. 2023, www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.