STAT

Opinion: The hidden danger of letting AI help you find a mental health therapist

Finding a mental health therapist who looks like you may sound like a good move, but choosing someone who is different has the potential to gently challenge preconceptions. That's a…

Companies have learned the hard way that their artificial intelligence tools produce unforeseen outputs, like Amazon’s resume-screening tool favoring men’s resumes over women’s or Uber’s system disabling the user accounts of transgender drivers. When AI is deployed without astute human oversight, it can bend into an unseemly rainbow of discriminatory behaviors: ageism, sexism, racism. That’s because biases that go unnoticed in the input data can become amplified in the outputs.

Another underappreciated hazard is AI’s tendency to cater to our established preferences. You can see it in the apps that manage everything from news sources to new music to prospective romance: once an algorithm gets a sense of what you like, it delivers the tried and true, making the world around you more homogeneous than it
