In a major initiative to spur the responsible adoption of AI technologies across the United States government, the White House has mandated bias testing for AI tools used by federal agencies.
The Executive Order, implemented through the Office of Management and Budget (OMB), advances one of the Biden administration's key policy agendas: minimizing emerging risks such as discrimination and privacy breaches.
Federal agencies prepare for AI oversight and accountability
Under the government-wide policy, federal agencies must put concrete safeguards in place by December 1st to protect people's rights and safety wherever artificial intelligence is used in ways that can influence people's lives.
In particular, these measures are designed to prevent discrimination, privacy violations, and other harms across sectors including transportation safety, healthcare, and government service delivery.
Agencies will appoint Chief Artificial Intelligence Officers to oversee how the OMB's AI guidance is executed and to coordinate its use across their offices. These officials will be responsible for ensuring that AI deployments within the federal government are transparent and accountable.
Vice President Kamala Harris said the administration intends these domestic policies to serve as a model for global action, stressing the importance of prioritizing the public interest in artificial intelligence use. The administration plans to hire at least 100 employees focused on artificial intelligence by the summer, further underscoring its commitment to the initiative.
Rigorous testing for high-risk systems
Senior administration officials have outlined that high-risk systems will undergo rigorous testing to identify and mitigate potential biases and risks. This proactive approach underscores the administration’s commitment to ensuring the responsible use of artificial intelligence technology in government operations.
Under the new guidelines, Americans can seek remedies if they believe AI systems have produced false information or made decisions affecting them.
Federal agencies must publish inventories of their AI systems, along with risk assessments and management strategies. Waivers may be granted for software that does not comply with the administration's rules, but a justification must be provided, promoting transparency and accountability in AI deployment.
Challenges and future directions
Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, hailed the guidance as a significant step towards ensuring responsible AI usage within federal agencies.
Givens emphasized the importance of having rigorous processes in place to assess the potential impact of new technologies on individuals and communities.
While federal agencies have been using AI technology for years, broader regulation of AI has stalled in Congress. As a result, the Biden administration is leveraging the government's position as a major technology customer to establish safeguards and promote responsible AI adoption.
Despite the administration's proactive measures, challenges remain, particularly concerning the perpetuation of biases in AI systems. Instances of racial bias and inaccuracies in AI-driven decision-making highlight the need for ongoing vigilance and oversight in AI deployment.
The White House's mandate for bias testing and oversight of AI usage in federal agencies represents a significant step towards ensuring the responsible and ethical adoption of AI technology. By implementing concrete safeguards, promoting transparency, and fostering public accountability, the administration aims to address potential risks while harnessing AI's transformative potential for society's benefit.