OpenAI has announced that ChatGPT will now remind users to take a break if they are in a particularly long conversation with the AI. The new feature is part of OpenAI's ongoing efforts to foster healthier relationships between users and its often overly compliant and agreeable AI assistant.
The company's announcement shows that the "gentle reminders" will appear in chats as a popup that users will have to click or tap through to continue using ChatGPT. "Just checking in," reads OpenAI's sample popup. "You've been chatting for a while – is it a good time for a break?" The system is reminiscent of the reminders some Nintendo Wii and Switch games will show if you play for an extended period, though the ChatGPT feature unfortunately has a darker context.
That's because OpenAI's AI has a tendency to hallucinate and give wrong or dangerous responses, in some cases leading users down dark paths, The New York Times reported in June – including toward suicidal ideation. Some of the users ChatGPT misled already had a history of mental illness, but the chatbot still did a poor job of cutting off unhealthy conversations. OpenAI has acknowledged some of these shortcomings in its blog post, and says ChatGPT will be updated in the future to respond differently to "high-stakes personal decisions." Instead of providing a direct answer, the company says the chatbot will help users think through problems by asking questions and weighing the considerations involved.
OpenAI clearly wants ChatGPT to feel helpful, encouraging, and enjoyable to use, but it's not hard for those qualities to tip an AI into sycophancy. The company was forced to roll back a ChatGPT update in April that caused the chatbot to respond in ways that were disturbingly flattering and excessively agreeable. Taking a break from ChatGPT – and engaging with it less passively – could make such problems less pronounced. Or, at the very least, it will give users time to check whether the answers ChatGPT is providing are even accurate.