US Implements Stricter AI Regulations Amid Anthropic Dispute

The US government has proposed new guidelines for artificial intelligence (AI) suppliers that could significantly impact companies seeking federal contracts. The guidelines, drafted by the US General Services Administration (GSA), require that AI models be available for “any lawful” purpose. This move comes as tensions escalate between the government and AI start-up Anthropic, which recently refused to grant the Pentagon unrestricted rights to its technology.

According to documents reviewed by the Financial Times, the proposed rules would mandate that AI suppliers grant the government an irrevocable license to deploy their technology for any lawful purpose. The guidelines also aim to address concerns about bias in AI outputs, specifying that contractors must deliver “a neutral, non-partisan tool” that does not favor any ideological perspective, including those related to diversity, equity, and inclusion.

The push for stricter rules follows an executive order from President Donald Trump that criticized what he termed “woke” AI models. Another key provision in the draft requires vendors to disclose modifications made to comply with foreign regulations, such as the European Union’s Digital Services Act. The GSA intends to consult industry stakeholders before finalizing the guidelines.

Dispute with Anthropic Escalates

The backdrop to the new rules is an ongoing dispute between the US Department of War (DoW) and Anthropic. The conflict erupted last week when Anthropic declined to grant the Pentagon unrestricted rights to use its models, citing concerns over domestic surveillance and the potential development of autonomous weapons. The Pentagon subsequently canceled a $200 million contract with the company.

DoW Secretary Pete Hegseth criticized Anthropic on social media, accusing the company of “arrogance and betrayal” and claiming its true objective was to wield veto power over military operational decisions. In the wake of the clash, Trump directed federal agencies to cancel existing contracts with Anthropic and implement a six-month phase-out.

In a swift response to the situation, rival developer OpenAI secured a deal to deploy its AI models within the Pentagon’s classified networks. CEO Sam Altman stated that the agreement would include amendments ensuring that the technology would not be used for domestic surveillance of US citizens. He emphasized that the contract has “red lines” against mass surveillance and the development of autonomous weaponry.

Concerns Raised Over Governance

The speed of the new partnership raised eyebrows within the tech community. On March 7, 2024, OpenAI’s hardware lead, Caitlin Kalinowski, announced her resignation, citing concerns about the implications of deploying AI in national security settings. In her statement, she underscored the importance of judicial oversight of surveillance practices and the necessity of human authorization for any lethal autonomous capability.

Kalinowski argued that decisions of such magnitude deserved thorough deliberation rather than haste. Her resignation underscores the growing scrutiny of the intersection between AI technology and military applications as stakeholders grapple with the ethical stakes.

As the US government moves forward with its new AI regulations, the implications for both public safety and technological innovation remain to be seen. The situation continues to evolve, with industry consultations expected to shape the final guidelines.