Biden Official Outlines U.S. Strategy on AI Opportunities and Risks


At Harvard Law School this month, Anne Neuberger, the deputy national security adviser, framed the Biden administration’s approach to artificial intelligence (AI) as maximizing the technology’s potential advantages while proactively reducing its risks and challenges. Neuberger noted that President Joe Biden signed an executive order in October 2023 to establish standards for the responsible development and use of AI. The order reflects the administration’s philosophy: encourage innovation, but pursue it responsibly in a way that ensures security and builds trust.

Promise and peril

Neuberger said the president is determined to have his administration confront AI’s “promise and peril.” Unlike the European Union, which has leaned heavily toward guarding against the risks, the U.S. strategy is to put AI to work in the nation’s ambition to lead the world in technological innovation.

Neuberger identified promising applications of AI in areas such as drug discovery, clinical trials, and classroom education, where the technology stands to benefit society. However, she pointed out that the perils of AI must be wrestled with early in the development cycle. “The executive order is seeking to address that upfront rather than try to layer security measures, which becomes very difficult,” she told TechNewsWorld.

Deepfake disinformation concerns

In a discussion with students, Neuberger said that AI-generated deepfakes pose a potential national security threat because they can produce false digital media impersonating a person’s likeness. Harvard Law School Professor Jonathan Zittrain pressed her on the possibility of deepfakes disrupting the 2024 U.S. presidential election.

Neuberger acknowledged that the threat deepfakes pose to electoral processes is a concern shared by many around the world. She described it as a “hard problem” that governments are grappling with.

Neuberger said the government wants to quickly share information about foreign disinformation with law enforcement and social media companies. She did see room for the private sector to take action; however, she added that the problem is a particularly pernicious one for governments to tackle.

Digital identity advancements needed

She also pointed to the need for better digital identity systems in the United States, noting that the nation badly lags in its ability to provide trusted digital identities for accessing personal records and government services. She added that she hoped further executive action would set standards for cryptographic digital IDs, such as digital licenses, to safely and securely authenticate activities online.

Balanced approach to AI

By and large, Neuberger’s comments reflect the Biden administration’s balanced policy toward AI: encouraging and fostering innovation while squarely confronting the potential consequences of this burgeoning, still-nebulous technology for national security, public trust, and safety.

The executive order sets up guardrails for responsible development rather than applying safeguards retroactively. As AI advances, Neuberger’s comments show an administration serious about harnessing the technology’s gains while minimizing the attendant risks through careful policy and collaboration with the private sector.

Original story: https://www.thecrimson.com/article/2024/4/3/neuberger-talks-nationalsecurity/