According to new research, that chatbot may just be telling you what you want to hear.
Whether you're using a traditional search engine like Google or an AI chatbot like OpenAI's ChatGPT, you tend to use search terms that reflect your existing biases and perceptions, according to a study published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often deliver results that reinforce those beliefs, even when your intention is to learn more about the topic.
For example, imagine you're trying to find out about the health effects of drinking coffee every day. If, like me, you enjoy exactly two cups first thing in the morning, you might search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe you're a tea purist), you might search "is coffee bad for you?" instead. The researchers found that the framing of the question skews the results: I'd mostly get answers highlighting coffee's benefits, while you'd get the opposite.
"When people look up information, whether on Google or ChatGPT, they actually use search terms that reflect what they already believe," said Eugina Leung, the study's lead author.
The abundance of AI chatbots, and the confident, personalized answers they freely dispense, makes it easier than ever to fall down a rabbit hole and harder to realize you're in one. There has never been a more important time to think about how you get information online.
The question is: How do you get the best answers?
You're asking the wrong questions
The researchers conducted 21 studies with nearly 10,000 participants, who were asked to search on certain topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT, and custom-built search engines and AI chatbots.
The results show that what the researchers call the "narrow search effect" was a function of both how people asked questions and how the tech platforms answered. People have a habit, in essence, of asking the wrong questions (or asking questions the wrong way). They tended to use search terms or AI prompts that reflected what they already believed, and search engines and chatbots were designed to deliver narrow, highly relevant answers to those queries. "The answers basically just confirm what they believed in the first place," Leung said.
The researchers also tested whether participants changed their beliefs after searching. When participants were presented with a narrow selection of answers that largely confirmed their existing beliefs, they were unlikely to change them significantly. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader range of answers, participants were more likely to change their views.
Leung said platforms could offer users a broader, less tailored search option, which could be helpful when a user is trying to find a wide range of sources. "Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I think there is a lot of value in providing very focused and very narrow search results in certain situations."
3 ways to ask the right questions
Leung said that if you want a broader range of answers to your questions, there are a few things you can do.
Be precise: Think carefully about what you're actually trying to learn. Leung uses the example of deciding whether to invest in a particular company's stock. Asking whether it's a good stock or a bad stock to buy will skew your results: you'll see more positive news if you ask whether it's good, and more negative news if you ask whether it's bad. Instead, try a broader, more neutral search term. Or ask using both framings and evaluate the results of each.
Get other perspectives: Especially with an AI chatbot, you can directly ask for a wider range of viewpoints in your prompt. If you want to know whether you should drink two cups of coffee a day, ask the chatbot for multiple opinions and the evidence behind them. The researchers tried this in one of their experiments and found that it produced a broader variety of results. "We asked ChatGPT to provide different perspectives to answer the participants' questions and to provide as much evidence as possible to support those claims," Leung said.
At some point, stop asking: Leung said follow-up questions may not help at all. Unless those questions are framed broadly, they can have the opposite effect, producing even narrower, more confirmatory results. In many cases, she said, people who asked lots of follow-up questions simply "fell deeper into the rabbit hole."


