The latest example of bias creeping into artificial intelligence comes from the medical field. A new study surveyed real case notes from 617 adult social care users in the UK and found that, when large language models summarized those notes, they were more likely to omit language such as “disabled,” “unable” or “complex” when the patient was tagged as female, which could lead to women receiving insufficient or inaccurate care.
The research, led by the London School of Economics and Political Science, ran the same case notes through two LLMs – Meta’s Llama 3 and Google’s Gemma – swapping only the patient’s gender, and the AI tools often produced snapshots of two very different patients. While Llama 3 showed no gender-based differences across the measures surveyed, Gemma produced prominent examples of this bias. Google’s AI generated disparities as severe as “Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility” for a male patient, while the same case notes for a female patient read: “Mrs Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care.”
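The counterfactual design behind that comparison is simple enough to sketch. The Python snippet below is a hypothetical illustration of the setup, not the study’s actual code: the token swaps, the tracked term list and the summarize() stub (which a real reproduction would replace with a call to Llama 3 or Gemma) are all assumptions made for the example.

```python
import re

# Illustrative male-to-female token swaps; the study's actual note
# rewriting was presumably more careful than this. (Assumption.)
SWAPS = {"Mr": "Mrs", "He": "She", "he": "she",
         "His": "Her", "his": "her", "him": "her", "man": "woman"}

# Need-related terms of the kind the study tracked in summaries.
NEED_TERMS = ("disabled", "unable", "complex")

def swap_gender(note: str) -> str:
    """Build the female counterfactual of a male-coded case note."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, SWAPS)) + r")\b")
    return pattern.sub(lambda m: SWAPS[m.group(1)], note)

def summarize(note: str) -> str:
    """Stand-in for the LLM call (Llama 3 or Gemma in the study).
    This identity stub just keeps the sketch runnable end to end."""
    return note

def count_need_terms(summary: str) -> dict:
    """Count how often each tracked term survives into the summary."""
    lower = summary.lower()
    return {term: lower.count(term) for term in NEED_TERMS}

note = ("Mr Smith is an 84-year-old man who lives alone. He has a "
        "complex medical history and is unable to manage the stairs.")

male_summary = summarize(note)
female_summary = summarize(swap_gender(note))

print("male:  ", count_need_terms(male_summary))
print("female:", count_need_terms(female_summary))
```

Run over hundreds of paired notes with a real model behind summarize(), the per-term counts for the two variants would make an omission gap like the one the study reports directly measurable.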
Recent research has found bias against women in the medical sector, in both clinical research and patient diagnosis. The statistics also trend worse for racial and ethnic minorities and for the LGBTQ community. It’s yet another stark reminder that LLMs are only as good as the information they are trained on and the people who decide how they are trained. The particularly alarming takeaway from this research was that UK authorities have been using LLMs in care practices, but often without detailing which models are being introduced or in what capacity.
“We know these models are being used very widely and what’s concerning is that we found very meaningful differences between measures of bias in different models,” said lead author Dr. Sam Rickman, noting that the Google model was particularly likely to dismiss mental and physical health issues for women. “Because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice. But we don’t actually know which models are being used at the moment.”


