Large language models (LLMs) are proving to be a threat to the diversity of human thought, language, and culture.
Research led by Morteza Dehghani, a professor of psychology and computer science, is showing that LLMs can have a homogenizing effect.
“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” says Zhivar Sourati, a PhD student on Dehghani’s team, in this article.
You may have guessed that using LLMs can result in less varied output (people have already identified, and often poke fun at, the cadences and styles typical of “AI writing,” for example), but this research suggests that LLMs could also be contributing to the broader erasure of less dominant cultures and experiences.
Sourati pointed out that even people not using LLMs directly will be affected by this shift. After all, we are very social creatures and take our cues from one another. If those who are using LLMs are thinking and speaking in a particular way, others will pick up on it and may follow suit.
Alarmingly, this all means that even people within a less dominant culture could actually contribute to diluting their own language, practices, and ways of thinking through the frequent use of LLMs.
The researchers believe that to combat these issues, developers need to be intentional about incorporating diversity into their models. In the meantime, it’s yet another compelling reason to use LLMs sparingly, if at all.
Learning: What is AI “brain fry”?
“I had one tool helping me weigh technical decisions, another spitting out drafts and summaries, and I kept bouncing between them, double-checking every little thing. But instead of moving faster, my brain just started to feel cluttered. Not physically tired, just… crowded. It was like I had a dozen browser tabs open in my head, all fighting for attention. I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static. What finally snapped me out of it was realizing I was working harder to manage the tools than to actually solve the problem.”
In a fascinating piece in the Harvard Business Review, a team of researchers share what they’ve learned about AI’s effects on workers.
Too Many AI Tools: Having to use and oversee an abundance of AI tools leaves many workers mentally exhausted, and it is this kind of fatigue that is being called “AI brain fry.”
The Most Common Victims of Brain Fry: In the study sample, people working in Marketing, HR, Operations, Engineering, Finance, and IT reported the most brain fry.
Bad for Business: Brain fry isn’t just distressing for employees; it’s also bad for the bottom line, leading to costly decision fatigue and errors both major and minor, and it can drive workers to quit.
In the News

by Maia Anderson, Healthcare Brew
Healthcare regulators are struggling to keep up with the brisk pace of change in AI technology. With the federal government mostly taking an antiregulatory posture, most efforts to regulate health-related AI are happening at the state level.

by Hayden Miller, Bloomberg Law
Lawsuits are now being filed against tech companies whose AI chatbots have been shown to encourage users to commit suicide or homicide. This article provides an excellent overview of the forms such litigation could take in the future.
WATCH: “We need AI systems to be 10 million times safer”
This is a really sobering off-the-cuff interview with Stuart Russell, professor of computer science at UC Berkeley, at a recent conference organized by PauseAI. Professor Russell talks about the potentially catastrophic risks that AI poses to humanity and what kinds of measures need to be put in place now to lessen those risks (18 minutes).
Psychology
AI can make us feel confident without verifiable evidence
John Nosta shares his deep concerns that the causal structure of knowledge is becoming unreliable due to AI.
Philosophy
Are we outsourcing our own judgment?
Florian Maiwald examines an existential risk posed by AI: that it calls into question what it means to be human.
Journal Editorial
When Compliance Is Not Safety: The Regulatory Blind Spot in AI Companion Chatbots
By Esteban Zavaleta-Monestel, Sebastián Arguedas-Chacón, Jeaustin Mora-Jiménez, and Ricardo Millán González, Cureus Journal of Medical Science
This analysis suggests that the current global governance of AI companion chatbots “prioritizes documentation over demonstrable safety in real-world distress situations.”
An excerpt:
“Proponents of the current regulatory approach may contend that continuous behavioral auditing is impractical for rapidly evolving AI systems. However, feasibility has never exempted high-risk interventions from rigorous evaluation. In medicine and public health, systems that undergo frequent changes are still subject to post-market surveillance, repeated safety reassessment, and corrective action or withdrawal when safety cannot be maintained. Emotional AI systems designed to engage distressed children should not be held to lower standards solely because of their technological novelty.”
The latest research
Minds in Crisis: How the AI Revolution is Impacting Mental Health, Journal of Mental Health and Clinical Psychology
Generative artificial intelligence addiction syndrome: A new behavioral disorder?, Asian Journal of Psychiatry
Thank you for reading! Stay tuned for some of the latest in journalism, research, and insights on AI and how it is affecting our health.
Until next time, stay human!