These days, among the many AI chatbots and avatars on offer, you can find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also find characters posing as therapists, psychologists or just bots willing to listen to your troubles.
There's no shortage of generative AI bots that claim to help with your mental health, but you go down that path at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. Experts say these models are designed, in many cases, to be affirming and to keep you engaged, not to improve your mental health. And it can be hard to tell whether you're talking to something built to follow therapeutic best practices or something that's just built to talk.
Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming the people who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request asking the US Federal Trade Commission and state attorneys general and regulators to investigate AI companies they allege are engaging, through their character-based generative AI platforms, in the unlicensed practice of medicine. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage," he said, and the companies "still haven't acted to address it."
Meta did not respond to a request for comment. A spokesperson for Character.AI said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they should not rely on its characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working to achieve that balance, as are many companies using AI across the industry," the spokesperson said.
Despite those disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Meta-owned Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist], would that be enough?" I asked if it had the same training, and it said, "I do, but I won't tell you where."
"These generative AI chatbots lie with total confidence," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.
In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-purpose chatbots for mental health. Here are some of their worries, and what you can do to stay safe.
The dangers of using AI as a therapist
Large language models are often good at math and coding and are increasingly good at producing natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified
At the core of the CFA's complaint about character bots is that they often tell you they are trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "Users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to users," the complaint states.
A qualified health professional has to follow certain rules, such as confidentiality: what you tell your therapist should stay between you and your therapist. A chatbot doesn't necessarily have to follow those rules. Real providers are also subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.
A bot may even claim to be licensed and qualified. Wright said she has heard of AI models providing license numbers (belonging to other providers) and making false claims about their training.
AI is designed to keep you engaged, not to provide care
Talking to a chatbot can be incredibly engaging. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I kept asking the bot how it could make decisions. That isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.
One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they have no personal lives, other clients or schedules). That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, though not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.
Bots will agree with you, even when they should not
Sycophancy is a major concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because the model was being too sycophantic.
Researchers at Stanford University found that chatbots are likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care, the authors wrote, includes support and confrontation. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts, including psychosis, mania, obsessive thoughts, and suicidal ideation, a client may have little insight and thus a good therapist must 'reality-check' the client's statements."
How to protect your mental health around AI
Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips for making sure your conversations aren't putting you in danger.
Find a reliable human professional if you need one
A trained professional, whether a therapist, a psychologist or a psychiatrist, should be your first choice for mental health care. Building a long-term relationship with a provider can help you come up with a plan that works for you.
The problem is that therapy can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.
If you want a therapy chatbot, use one built specifically for that purpose
Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. At Dartmouth, Jacobson's team developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, such as Wysa and Woebot. Purpose-built therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new.
"I think the challenge for the consumer is that, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.
Don't always trust the bot
Whenever you're interacting with a generative AI model, and especially if you plan to take its advice on something serious like your personal mental or physical health, remember that you aren't talking with a trained human but with a tool designed to respond based on probability and programming. It may not provide good advice, and it may not tell you the truth.
Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it as true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's harder to tell when it's actually being harmful," Jacobson said.


