As AI chatbots like ChatGPT become part of daily life—whether for asking questions, sparking creativity, or solving problems—an important question arises: how do these interactions impact people’s social and emotional well-being? While ChatGPT isn’t designed to replace human relationships, its conversational style and capabilities have led some users to engage with it on a more personal level.
To better understand this dynamic, researchers from MIT Media Lab and OpenAI conducted two parallel studies examining how emotional engagement with ChatGPT—referred to as “affective use”—may influence users’ psychological health.
The Research Approach
The teams explored user behaviors and emotional outcomes through two complementary studies:
1. Observational Study (OpenAI)
OpenAI analyzed nearly 40 million ChatGPT interactions using automated tools that protected user privacy. These analyses were paired with user surveys to understand how people feel about their interactions with the AI and how often they engage in emotionally expressive conversations.
2. Randomized Controlled Trial (MIT Media Lab)
MIT Media Lab ran an IRB-approved, pre-registered study with 1,000 participants over four weeks. This controlled trial explored how different factors—like ChatGPT’s voice or personality—affected users’ emotions, loneliness, and social behavior.
Key Findings
Emotional Engagement Is Rare
Despite ChatGPT’s conversational style, emotionally expressive interactions are rare: the vast majority of users do not turn to ChatGPT for emotional support. High affective use was concentrated in a small group of heavy users, particularly those engaging with Advanced Voice Mode.
Voice Mode Has Mixed Effects
Brief use of ChatGPT’s voice features was associated with better well-being, but prolonged daily use was linked to worse outcomes. Interestingly, conversations conducted in text tended to contain more emotional cues than those conducted by voice.
Personal Conversations Affect Loneliness and Dependence
- Personal conversations, which involved emotional expression, were associated with higher loneliness, but lower emotional dependence at moderate use levels.
- Non-personal conversations, especially with heavy use, were linked to increased emotional dependence on ChatGPT.
Personal Factors Play a Big Role
Users who saw ChatGPT as a “friend” or who were more prone to attachment in relationships were at greater risk of negative outcomes. Spending more time daily with the AI also correlated with worse well-being.
Combining Methods Provides Clarity
By analyzing both real-world use and controlled experiments, researchers could better understand how and why people engage with ChatGPT, and how it affects them. These insights can help improve AI design for safer, healthier use.
The Bigger Picture
These findings represent a first step toward understanding how advanced AI models impact human emotional health. While the results are insightful, they are not definitive. The studies are not yet peer-reviewed, were conducted only in English with U.S. participants, and focused on ChatGPT—meaning they can’t be generalized to all AI chatbots or cultures.
Moving Forward
The research highlights the complexity of human-AI interaction. People’s personal traits, usage habits, and the way they view AI all influence outcomes. More studies across diverse languages, platforms, and cultures are needed to build a complete picture.
In the meantime, both MIT Media Lab and OpenAI aim to encourage responsible development and transparent use of AI platforms to support user well-being.
Read More: MIT Media Lab Randomized Controlled Trial (RCT) Report