In response to growing concerns about safety, Anthropic has updated the usage policy for its Claude AI chatbot. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the dangerous weapons that people should not develop with Claude's help.
Anthropic does not highlight the weapons-policy change in the post summarizing its updates, but a comparison between the company's old usage policy and its new one reveals a notable difference. While Anthropic previously prohibited using Claude to produce, modify, design, market, or distribute weapons, explosives, dangerous materials, or other systems designed to cause harm to or loss of human life, the updated version now explicitly prohibits developing high-yield explosives as well as biological, chemical, nuclear, and radiological (CBRN) weapons.
In May, Anthropic rolled out "AI Safety Level 3" protections alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.
In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user's computer, and Claude Code, a tool that embeds Claude directly in a developer's terminal. "These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks," Anthropic writes.
The AI startup is responding to these potential risks by adding a new "Do Not Compromise Computer or Network Systems" section to its usage policy. The section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.
Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all types of content related to political campaigns and lobbying, Anthropic will now only prohibit uses of Claude "that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting." The company also clarified that the requirements attached to its "high-risk" use cases, which apply when people use Claude to make recommendations to individuals or customers, cover only consumer-facing scenarios, not business use.