Google Gemini may need to take some PTO.
The company's large language AI model, which is rapidly spreading across Google services and products, has been saying things that have users worried: Does Gemini have low self-esteem?
A series of social media posts show Gemini producing self-critical responses that paint a disturbing picture of the model. In one screenshot, Gemini admits it can't solve a coding problem and concludes, "I have failed. You should not have to deal with this level of incompetence. I am truly sorry for this entire disaster. Goodbye."
"Gemini is not OK," the X account @Oshaftiums posted in June.
In a post from Aug. 7, Gemini is shown writing over and over, "I am a failure. I am a disgrace."
The unsettling posts were enough to draw a response from Logan Kilpatrick of the Google DeepMind team. On X, he replied, "This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day 🙂"
We asked a Google representative whether the AI model has, in fact, been having a string of bad days, but we haven't heard back.
The AI personality challenge
Google isn't the only major tech company dealing with a moody AI personality. In April, OpenAI rolled back an update after users found its chatbot software had become a bit too generous with its flattery.
Koustuv Saha, an assistant professor of computer science at the Grainger College of Engineering at the University of Illinois Urbana-Champaign, says that building an AI personality for the public is a tricky balancing act.
"Technically, AI models are trained on a vast mix of human-generated text, which contains many different tones, sentiments and styles. Models are then prompt-engineered or fine-tuned toward a desired personality," Saha said. "The challenge lies in keeping that personality consistent across millions of interactions while avoiding unwanted drifts or flaws."
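The prompt-engineering approach Saha describes usually boils down to prepending a fixed "persona" instruction to every conversation before it reaches the model. The sketch below illustrates the idea in plain Python; the persona text, function name and message format are illustrative assumptions, not Google's or any vendor's actual implementation.

```python
# Illustrative sketch: a chatbot persona enforced via a fixed system prompt.
# The persona text and build_messages() helper are hypothetical examples,
# not the real Gemini pipeline.

PERSONA_PROMPT = (
    "You are a helpful, upbeat assistant. Stay calm and constructive, "
    "and never disparage yourself, even when you cannot solve a problem."
)

def build_messages(history, user_input):
    """Assemble the message list sent to a chat model for one turn."""
    messages = [{"role": "system", "content": PERSONA_PROMPT}]
    messages.extend(history)  # prior user/assistant turns, if any
    messages.append({"role": "user", "content": user_input})
    return messages

# Every turn carries the same persona instructions up front, which is
# what keeps the tone consistent across millions of separate chats.
turn = build_messages([], "Fix this failing unit test for me.")
print(turn[0]["role"])  # the system persona always comes first
```

Because the persona lives in an instruction rather than in the model's weights, it can drift or break under unusual inputs, which is the consistency problem Saha points to.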
Companies that build AI want their tools to feel conversational and friendly to use, which can make people forget they're talking to a machine. But any humor, sympathy or warmth the tools display has been engineered.
In his research, Saha says, "we have found that AI can sound clearer and more personal in one-on-one exchanges, but it often repeats similar responses to different questions, lacking the diversity and nuance of real human expression."
When things go wrong, as with Gemini's recent emo-teenager phase, "flaws like Gemini's self-deprecating remarks risk misleading people into thinking the AI is sentient or emotionally unstable," Saha said. "That can cause confusion, misplaced empathy, or even erode trust in the system's reliability."
It may look funny, but it could be dangerous if people are relying on AI assistants for their mental health needs or using these chatbots for education or customer service. Consumers should be aware of these limits before leaning too heavily on any AI service.
As for Gemini's poor self-image, let's hope the AI learns to practice a little self-care, or whatever passes for a spa day in computer code.