Anthropic's latest feature for two of its Claude AI models could mark the beginning of the end for the AI jailbreaking community. The company announced in a post on its website that the Claude Opus 4 and 4.1 models now have the ability to end conversations with users. According to Anthropic, this feature will only be used in "rare, extreme cases of persistently harmful or abusive user interactions."
To clarify, Anthropic said the two Claude models can exit harmful conversations, such as requests for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. According to Anthropic, Claude Opus 4 and 4.1 will only end a conversation "as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted." However, Anthropic says most users will not experience Claude cutting a conversation short, even when discussing highly controversial topics, as this feature is reserved for "extreme edge cases."
Anthropic's examples of Claude ending a conversation
(Anthropic)
In the scenario where Claude ends a chat, users can no longer send new messages in that conversation, but they can immediately start a new one. Anthropic added that ending a conversation will not affect other chats, and users can go back and edit or retry previous messages to steer the exchange down a different path.
This feature is part of Anthropic's research program studying the idea of AI welfare. While anthropomorphizing AI models remains a subject of ongoing debate, the company said that allowing models to exit "potentially distressing interactions" is a low-cost way to manage risks to AI welfare. Anthropic is still experimenting with this feature and encourages users to provide feedback when they encounter such a scenario.


