Trump administration bans Anthropic's AI tools after OpenAI strikes deal with US Pentagon

Tensions between the US government and the American AI industry have been brewing for several days and have now reached a new level of concern over national security. Donald Trump's administration has ordered all national security agencies to stop using tools from Anthropic, a prominent AI company whose products are popular across the tech industry.

Meanwhile, OpenAI, which develops AI models, has reached a significant agreement with the US government on the use of a new model at the Pentagon. The deal is reportedly a key step toward coordinating the use of certain AI tools across national security and civilian contexts.

Altman said national security has been identified as an important and central priority.

Late Friday night, OpenAI's chief executive, Sam Altman, announced that several of the company's AI tools will be used by US national security agencies. The arrangement with the US government includes two key agreements. At the same time, calls have been made to ban the first domestically developed application used in a national security context over allegations of mass surveillance.

Citing some of its tools as risky, the Trump administration has ordered agencies to cease using Anthropic's AI tools and has declared the company a supply-chain risk to US national security. The US Secretary of Defense has also stated that Anthropic's application is among the risky tools to be banned.

Anthropic has questioned the decision, arguing that its standing in US military and defense matters needs to be clarified. The company also noted that many other applications classified in the same supply-chain risk category have been given a different status, which has raised concerns. The stated reasons for the ban are, however, being interpreted in different ways.

According to reports, Anthropic's application is under discussion at the Pentagon, where some of its tools are not considered to be in the national security interest. Talks between Anthropic and the Pentagon are ongoing over the use of AI systems in the US national interest, rather than for weapons or the surveillance of US citizens.

Tensions over some of these tools are also visible between Anthropic and the US government, pointing to a potential legal conflict. Anthropic reportedly plans to challenge the government's decision in court.

Meanwhile, OpenAI has outlined its security principles in a way that raises a significant question: why was Anthropic banned? The two are similar companies, each with its own style of communication, and their applications offer comparable tools, including monitoring capabilities.

For its part, OpenAI has offered to maintain coordination with the government, including directly offering its engineers to collaborate with the Pentagon.

On the one hand, OpenAI has adopted a policy of collaboration with the government; on the other, Anthropic is now operating under strict conditions. No official, clear explanation of these matters has yet been provided.

The ongoing dispute and competition between the two companies raises significant questions about how AI is used in military weapons, in the deployment of soldiers, and in the protection of civilians.

AI technology is increasingly used in intelligence operations and warfare, and it plays a significant role in cybersecurity, where tools from both Anthropic and OpenAI are widely deployed. The risk of misuse is high, and that debate is ongoing, reflecting the global spread of AI in military weapons and systems.

How Anthropic navigates the legal challenges surrounding its AI systems will be an important issue to watch. Many other AI companies have typically negotiated settlements or legal compromises with governments in similar situations.

These AI systems could influence the US Department of Defense's technical standards, global AI regulation, and military strategy. Maintaining this balance is seen as a significant national security challenge for the US.
