
Biden’s AI Executive Order Isn’t Nearly Enough—But It’s a Good Start

STOPPING SKYNET

It's the biggest piece of regulation yet for the tech. However, it might not go far enough to stop the worst impacts of artificial intelligence.

Illustration by Elizabeth Brockway/The Daily Beast

On Monday, President Joe Biden issued the first executive order aimed at regulating artificial intelligence. The sweeping order requires that AI companies “share their safety test results and other critical information with the U.S. government,” with the aim that these models are “safe, secure, and trustworthy before companies make them public,” according to a White House statement.

The order also includes a number of measures to protect American consumers from AI-related fraud, while also safeguarding against potential dangers of the technology, including “watermarking to clearly label AI-generated content” and strengthening “privacy-preserving research and technologies.” Biden also announced that the government will create “new standards” to guard against AI someday being used to help develop dangerous biological materials like viruses and other pathogens.

While some critics have noted that the executive order’s wording is vague and offers little detail on how its provisions will actually play out, it still represents some of the most concrete and substantive U.S. policy on AI yet.


“The executive order is a really positive step for people who have been asking for safety and regulation for these systems,” Daniel Colson, the founder and executive director of the AI Policy Institute (AIPI), told The Daily Beast. “I’m happier than expected with the results and what this executive order is doing.”

U.S. President Joe Biden delivers remarks on artificial intelligence in the Roosevelt Room at the White House in Washington, U.S., July 21, 2023.

EVELYN HOCKSTEIN via Reuters

However, there’s still a sense that much more work remains when it comes to AI regulation. Ultimately, Biden’s executive order only scratches the surface of what can and should be done to rein in a powerful technology that’s already changing people’s lives: disrupting the work of developers, writers, artists, and journalists, while also fueling cybersecurity risks by way of more sophisticated phishing attacks.

This type of regulation is supported by Americans too—on both sides of the aisle. A rapid-response poll conducted by the AIPI and obtained by The Daily Beast following Biden’s executive order found that American voters broadly support both the announcement and AI regulation in general: 69 percent of respondents said they support the order, while just 15 percent oppose it. The support was bipartisan, too, with 64 percent of Republicans and 65 percent of independents in favor.

Meanwhile, 75 percent of voters believe that the government should do even more to regulate AI beyond the executive order, with much of their concerns having to do with the threats the technology poses to the labor market along with potential existential threats that may arise from these powerful models.

“Today, the Biden–Harris administration reaffirmed its commitment to America’s workers through a pioneering executive order on artificial intelligence,” Liz Shuler, president of the AFL-CIO, the country’s largest federation of unions, told The Daily Beast in a statement. “The AFL-CIO applauds the centrality of workers’ rights and values within this order—including the right to collectively bargain—while acknowledging there is much ground to cover in enshrining accountability, transparency and safety as bedrocks of AI.”

“This is a larger step in the right direction than we’ve seen so far,” Colson said. “The American public is really supportive, both of this executive order, but also of the government doing more to regulate AI.”

‘Safe, Secure, and Trustworthy’

It’s clearer than ever that Americans are hungry for AI regulation. Not only does the technology threaten their livelihoods, but it could pose an even bigger risk if harnessed to generate the scientific know-how for creating some potentially nasty pathogens.

To meet these challenges, the executive order focuses on eight areas to stem the risks of AI: standards for safety and security; protecting privacy; advancing equity and civil rights; protecting consumers, patients, and students; supporting workers; promoting innovation and competition; advancing AI safety worldwide; and ensuring responsible government use of AI.

Of these actions, perhaps the most consequential will be the requirement for AI developers and companies to share their safety test results with the federal government. This means that companies creating “the most powerful AI systems” must notify the government when they begin building them and share any test results bearing on the potential harms these systems pose to American consumers.

How this will look in practice remains to be seen. However, Colson said that this kind of framework lays a strong foundation for future AI regulation: only with data on what exactly these large and powerful tech companies are doing to create their AI models can the U.S. government even begin to consider how to regulate them.

“This executive order is really building some of the initial infrastructure necessary to allow the government to be able to track what's going on in the AI industry, what the tech companies are doing, and what the models are,” Colson said.

He added that the vast majority of AI companies, like OpenAI and Alphabet, don’t publicly release details of the models they create. The result is what’s known as a “black box,” in which users have no way of knowing how the programs that affect their lives were built. That poses an obvious danger, especially given these models’ well-documented problems with bias and hallucination (i.e., the penchant of AI to make up facts).

Under the executive order, however, these companies will now have to share at least some insight into how their models were created and what risks they pose. That information could shape future policy determining which models get released and which never see the light of day.

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, U.S., May 16, 2023.

ELIZABETH FRANTZ via Reuters

It’s no panacea, though, especially if the White House allows AI companies to report their own safety test results rather than having government officials or a third party verify them. Suresh Venkatasubramanian, director of Brown University’s Center for Tech Responsibility, spoke to The Daily Beast following Sen. Chuck Schumer’s AI summit in September, decrying the government’s tendency to weigh the perspectives and opinions of Big Tech companies over those of other AI safety experts.

The fear, according to Venkatasubramanian, is that giving the likes of Mark Zuckerberg and Sam Altman a bigger seat at the table could lead to regulation and policy that favors them rather than consumers.

“We don't let the fox design the henhouse,” Venkatasubramanian told The Daily Beast at the time. “I would hope that Congress takes the perspectives of the people affected by technology seriously, and makes sure that we the people write the rules, and not tech companies.”

It’s still too early to tell how this regulation will ultimately shake out. One thing is for sure, though: The executive order isn’t going to slow down the development of powerful AI, even as these sophisticated generative models pose risks to everyday consumers and world governments alike.

However, Colson believes that it’s a start—albeit a very small one—towards a future with truly safe and trustworthy AI.

“A lot of AI safety people want to slow down the development of AI due to fears that near term models will be very, very powerful,” Colson explained. “This executive order definitely doesn't do anything close to that. But at least the government is attempting to learn what models are being developed and how they're being developed. That's definitely the necessary first step in order for more substantial regulations to come later.”
