Anthropic on Thursday announced Claude Gov, a set of AI models designed specifically for US defense and intelligence agencies. The models have looser guardrails for government use and are trained to better analyze classified information.
The company said the models it announced were “already deployed by agencies at the highest level of US national security,” and that access would be limited to government agencies handling classified information. The company did not confirm how long they had been in use.
The Claude Gov models were built specifically for government needs, like threat assessment and intelligence analysis, according to an Anthropic blog post. And although the company said they underwent “the same rigorous safety testing as all of our Claude models,” the models have certain specifications for national security work. For example, they “refuse less when engaging with classified information” that is fed into them, something consumer-facing Claude is trained to flag and avoid.
The Claude Gov models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, as well as improved proficiency in languages and dialects relevant to national security.
The use of AI by government agencies has long been scrutinized for its potential harms and ripple effects on minorities and vulnerable communities. There is a long list of wrongful arrests across multiple US states tied to police use of facial recognition, along with documented evidence of bias in predictive policing and discrimination in government algorithms. For years, there has also been an industry-wide controversy over big tech companies like Microsoft, Google, and Amazon allowing militaries, particularly in Israel, to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.
Anthropic’s usage policy specifically dictates that users must not “create or facilitate the exchange of illegal or highly regulated weapons or goods,” including using its products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.”
At least eleven months ago, the company said it had carved out a set of contractual exceptions to its usage policy, “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions, such as disinformation campaigns, the design or use of weapons, the building of censorship systems, and malicious cyber operations, would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it said it would aim to balance enabling beneficial uses of its products with mitigating potential harms.
Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for US government agencies, which launched in January. It is also part of a broader trend of AI giants and startups alike looking to strengthen their business with government agencies, especially in an uncertain regulatory landscape.
When OpenAI announced ChatGPT Gov, the company said that within the previous year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, draft policy memos, write code, build applications, and more. Anthropic declined to share similar numbers or use cases, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy federal government-facing software.
Scale AI, which provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense for a first-of-its-kind AI agent program for US military planning. And it has since expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.


