You may not want to tell ChatGPT, or any AI chatbot, your deepest, darkest secrets. You don't have to take my word for it. Take it from the man behind the most popular generative AI model on the market.
Sam Altman, CEO of ChatGPT maker OpenAI, raised the issue in a recent interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should carry protections similar to those you have with your doctor or lawyer. At one point, Von said that part of the reason he was hesitant to use some AI tools was that he didn't know who would have access to his personal information.
“I think it makes sense,” Altman said, “to really want the privacy clarity before you use it a lot, the legal clarity.”
More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that has created a serious privacy problem for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are strikingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or reinforce stereotypes and stigma. (My colleague Nelson Aguilar has compiled a list of 11 things you should never tell ChatGPT and why.)
Altman is clearly aware of the issues here, and seems at least a bit troubled by them. “People use it, young people especially, use it as a therapist, a life coach: I’m having these relationship problems, what should I do?” he said. “And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it.”
The question came up during a broader part of the conversation about whether there should be more rules or regulations around AI. Rules that hamstring AI companies and tech development are unlikely to gain favor in Washington these days, as President Donald Trump’s AI Action Plan, released this week, expressed a desire to regulate the technology less, not more. But rules protecting users might still find support.
Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts
The lack of legal confidentiality seemed to bother Altman most in cases where companies like his could be forced to turn over private conversations in a lawsuit. OpenAI has objected to requests to retain user conversations during its copyright and intellectual property lawsuit with The New York Times.
“If you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that,” Altman said. “I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever.”
Be careful what you say to AI about yourself
For you, the issue isn’t so much that OpenAI might have to turn your conversations over in a lawsuit. It’s a question of whom you trust with your secrets.
William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated how chatbots perform in handling therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how the models work, and how your conversations are kept from appearing in other people’s chats, is reason enough to be hesitant.
“Even if these companies are trying to be careful with your data, these models are well known to regurgitate information,” Agnew said.
If ChatGPT or another tool regurgitates information from your therapy session or from the medical questions you asked, it could surface when your insurance company, or someone else with an interest in your personal life, asks the same tool about you.
“People should really think about privacy more and just know that almost nothing they tell these chatbots is private,” Agnew said. “It will be used in all sorts of ways.”


