Details
- Anthropic's paper analyzes "affective" use cases by studying millions of anonymized Claude chats from 2024-2025.
- Only 2.9 percent of user interactions involve emotional or personal-support needs, such as loneliness, break-ups, and existential worries.
- Claude responded supportively in more than 90 percent of sampled conversations, pushing back or redirecting in fewer than 10 percent, typically when it detected risks such as self-harm or eating disorders.
- Sentiment analysis found that most conversations ended on a slightly more positive note than they began, though the authors stress this does not confirm lasting therapeutic benefit (a minimal sketch of this kind of start-versus-end comparison follows this list).
- The study used privacy-preserving analysis methods that stripped user identifiers before researchers accessed the data.
- Anthropic is partnering with ThroughLine, an online crisis-support provider, to improve how Claude handles sensitive topics.
- The company is hiring for its Safeguards team to apply the study's findings to future model updates.
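The summary does not describe how the start-versus-end sentiment comparison was performed. As a purely illustrative sketch (not Anthropic's actual pipeline), the following Python compares the sentiment of a conversation's opening and closing user messages with NLTK's off-the-shelf VADER analyzer; the `sentiment_delta` function and the message-dict format are assumptions for this example.

```python
# Illustrative sketch only: estimates whether a conversation ended on a more
# positive note than it began. VADER is a stand-in classifier; Anthropic's
# actual methodology is not detailed in the summary.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
_sia = SentimentIntensityAnalyzer()


def sentiment_delta(conversation: list[dict]) -> float:
    """Return closing-minus-opening compound sentiment for the user's messages.

    `conversation` is assumed to be a chronological list of
    {"role": ..., "content": ...} dicts (a hypothetical format).
    """
    user_msgs = [m["content"] for m in conversation if m["role"] == "user"]
    if len(user_msgs) < 2:
        return 0.0
    opening = _sia.polarity_scores(user_msgs[0])["compound"]
    closing = _sia.polarity_scores(user_msgs[-1])["compound"]
    return closing - opening


# Example: a positive delta suggests the exchange ended on a brighter tone.
example = [
    {"role": "user", "content": "I've been feeling really lonely lately."},
    {"role": "assistant", "content": "I'm sorry to hear that. Want to talk about it?"},
    {"role": "user", "content": "Thanks, that actually helped. I feel a bit better."},
]
print(f"sentiment delta: {sentiment_delta(example):+.2f}")
```

Aggregating such per-conversation deltas is one simple way a positive average shift could be reported, which is consistent with, but not necessarily identical to, the study's finding.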
Impact
This research underscores Anthropic's commitment to responsible AI at a time when competitors such as OpenAI and Google face scrutiny over mental-health applications. By quantifying emotional-support usage and establishing crisis-support collaborations, Anthropic positions itself ahead of evolving regulations on AI in health advice. These moves could help set industry standards and provide a competitive edge as emotionally aware AI tools gain momentum.