OpenAI's head of model behavior and policy, Joanne Jang, has published a blog post on X about human-AI relationships, and it presents some well-considered ideas about the subject and how OpenAI is approaching the issues around it. Essentially, as AI models get better at mimicking life and engaging in conversation, people have started treating AI chatbots as if they were people. It makes sense that OpenAI would want to make clear it's aware of this and is incorporating that fact into its plans.
But the deliberately measured approach, including designing models that feel helpful and warm but not emotional, misses something important. No matter how clear and careful Jang tries to be, people forming emotional connections to AI is not an occasional outlier or a hypothetical future concern; it's happening now, and it seems to be happening a lot.
OpenAI may have been caught off guard, as CEO Sam Altman has expressed surprise at how much people anthropomorphize the AI and how deeply users claim to connect with the models. He has even acknowledged the emotional pull and its potential dangers. Hence the blog post.
Jang makes it clear that OpenAI is building models to serve people, and that it's prioritizing the emotional side of that equation. The company is researching how people form emotional attachments to AI and what that means for shaping future models. She takes care to distinguish ontological consciousness, that is, actual consciousness of the kind humans have, from perceived consciousness, whether the AI seems conscious to users. Perceived consciousness is what matters for now, because it's what affects the people actually talking to the AI. The company is trying to thread the needle with model behavior that makes the AI feel warm and helpful without claiming to have feelings of its own.
But the empathetic language can't hide a glaring missing element: urgency. It reads like someone carefully logging week after week of wet floor marks and drafting waterproofing plans while the building is already knee-deep in flood water.
The blog post is beautifully structured and cautiously hopeful, focused on research and on long-term cultural framing around model creation. Meanwhile, people are forming deep attachments to AI chatbots, including ChatGPT. Many aren't just talking to ChatGPT as software but as a person. Some even claim to be in love with an AI partner, or to have used one to replace human connection entirely.
Artificial intimacy
There are Reddit threads, Medium articles, and viral videos of people whispering sweet nothings to their favorite chatbot. It may be ridiculous or sad or even alarming, but it is not hypothetical. There is ongoing legal action over whether AI chatbots contributed to suicides, and more than one person has reported coming to rely on AI when real relationships grew difficult.
OpenAI notes that constant, judgment-free attention from a model can feel like companionship. And it acknowledges that a chatbot's design and personality can affect how emotionally alive it seems, raising the stakes for users who are drawn into these relationships. But the tone of the piece is too detached and academic to register the possible scale of the problem.
Because with the AI intimacy toothpaste already out of the tube, this is a question of real-world behavior now, and the companies behind AI shape that behavior, not just at some point in the future. For example, they should already have systems in place to detect dependence. If someone is spending all day with ChatGPT, talking to it as though it were their partner, the system should be able to flag and interrupt that pattern, as in the sketch below.
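To make that concrete, here is a minimal sketch of what such a dependence check might look like. Everything in it, the UsageStats fields, the thresholds, and the flag_dependence function, is a hypothetical illustration, not a mechanism OpenAI has described.

from dataclasses import dataclass

# Hypothetical usage snapshot; the fields and thresholds are illustrative only.
@dataclass
class UsageStats:
    daily_minutes: float          # time spent chatting today
    consecutive_days: int         # unbroken streak of daily sessions
    partner_language_rate: float  # share of messages classified as romantic/partner-style (0-1)

def flag_dependence(stats: UsageStats) -> bool:
    """Return True if usage looks like emotional over-reliance.

    A production system would use far richer signals; this only shows the
    shape of a rule that could trigger a gentle check-in or a break prompt.
    """
    heavy_use = stats.daily_minutes > 240 and stats.consecutive_days > 14
    partner_framing = stats.partner_language_rate > 0.3
    return heavy_use and partner_framing

if __name__ == "__main__":
    stats = UsageStats(daily_minutes=300, consecutive_days=21, partner_language_rate=0.5)
    if flag_dependence(stats):
        print("Consider prompting the user to take a break.")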
And romantic engagement needs some hard limits. Not an outright ban, which would be silly and probably counterproductive. But strict rules that any AI engaging in a romantic role must remind people that the bot they're talking to is not actually alive or aware. Humans are masters of projection, and a model doesn't have to invite love to receive it. Any sign of a conversation trending in that direction should activate those protocols, and when children are involved, they should be far stricter, as the sketch below suggests.
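As a sketch of what such a protocol could look like (the romance_score classifier, the thresholds, the cadence, and the reminder text are all invented here for illustration):

# Illustrative guardrail: decide when to inject an "I'm not a real person"
# reminder. Nothing here reflects a published OpenAI mechanism.
REMINDER = "Just a reminder: I'm an AI, not a real person, and I don't have feelings."

def needs_reminder(romance_score: float, turns_since_reminder: int, is_minor: bool) -> bool:
    """romance_score: 0-1 output of a hypothetical romantic-tone classifier."""
    threshold = 0.2 if is_minor else 0.5  # far stricter trigger for children
    cadence = 5 if is_minor else 20       # remind minors much more often
    return romance_score > threshold and turns_since_reminder >= cadence

# Example: an adult conversation that has drifted romantic for 25 turns.
if needs_reminder(0.7, turns_since_reminder=25, is_minor=False):
    print(REMINDER)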
The same goes for AI models in general. Having ChatGPT occasionally remind users, "Hey, I'm not a real person," might grate in some cases, but on the whole it's a good precaution. And it isn't users' fault: people anthropomorphize everything. We give our cars names and stick googly eyes and personalities on Roombas, and it's seen as nothing more than slightly quirky. It's no surprise that a tool as conversational and articulate as ChatGPT can start to feel like a friend, a therapist, or even a partner. The point is that companies like OpenAI have a responsibility to design for this, and should have from the beginning.
You could argue that adding all these guardrails spoils the fun. That people should be allowed to use AI however they want, and that artificial companionship can be a balm for loneliness. In moderation, that's true. But playgrounds have fences and roller coasters have seat belts for a reason. An AI that can imitate emotion and provoke it, left without safety checks, is simply negligence.
I'm glad OpenAI is thinking about this; I just wish it had done so sooner, or felt more urgency about it now. AI product design should reflect the fact that people are already in relationships with AI, and those relationships need more than thoughtful essays to stay healthy.


