Open rebellion

The current and potential threats posed by AI are piling up.
Will we all lose our jobs? Are our friends and family members going to get AI psychosis? Is a hallucinating AI going to start a nuclear war?
Those are some of the scariest possibilities, but there are many less dramatic, more immediate concerns as well: How do you avoid outsourcing all your thinking to AI when chatbots are at your fingertips? Can you trust AI for mental health support? Is it possible to integrate AI into your work ethically?
Some of us maintain a hostile stance toward AI; others are using it, whether reluctantly or enthusiastically. But no matter our attitude, AI is here, it will affect us all more than we want it to, and there will be harm.
This newsletter aims to track some of that harm and, importantly, to spotlight ways that we might be able to mitigate it and recover from its effects.
“I know it’s going to change the world. It’s screwing everybody up. And I’m not in denial about that – but I’m in open rebellion.”
Advice: Tips for managing AI dependence
In a piece in Psychology Today, psychologist Scott Glassman shares:
Facts About AI Usage: A whopping 66% of people around the globe are now using AI in some form regularly.
Warning Signs of AI Dependence: Using chatbots longer than intended, using them to avoid problems, and experiencing focus and productivity challenges are among the signs that he lists.
How to Reduce AI Dependence: “Identify the job you’re asking AI to do” and “Set ‘consulting hours’ for chatbots” are among the helpful tips he offers.
In the News

by Shannon Bond, NPR
A support group called the Human Line invites people to share experiences about their personal AI spirals or their challenges dealing with friends and family suffering from AI delusions.

by Alex Woodward, The Independent
This article explores the Pentagon’s use of AI tools, including Anthropic’s Claude, in the war in Iran.
WATCH: Panel Discussion on AI
This is a worthwhile watch. It begins with powerful testimony from a parent whose child took their own life after intensive use of ChatGPT, then moves into a wide-ranging discussion of ideas for future prevention. There is a little too much apologizing for Silicon Valley, but overall it is an interesting conversation.
Psychology
Insight from the National Academy of Medicine
An expert helps define psychosis and shares possible strategies to prevent or address AI psychosis.
Philosophy
Harvard educators on AI
Liz Mineo picks the brains of professors, lecturers, and researchers from a variety of disciplines (e.g., philosophy, literature, public policy) about their worries and hopes.
The latest research
Minds in Crisis: How the AI Revolution is Impacting Mental Health, Journal of Mental Health and Clinical Psychology
Thank you for reading our first newsletter. We are all stumbling through this intimidating new world of artificial intelligence and figuring things out together. Stay tuned for some of the latest in journalism, research, and insights from mental health and other professionals.
Until next time, stay human!