The Risks of the 'Observer Effect' from Being Watched by AI
Imagine confiding in an AI chatbot late at night. You ask about your relationship struggles, a health scare, a workplace conflict, or the anxiety that has been keeping you up. You assume the exchange is private: just you and your computer or phone.
But what if you later learned that your words could become part of the chatbot's training data, used to refine the system, and that fragments of your intimate conversation might resurface in someone else's session with the chatbot? This question sits at the heart of an uncomfortable truth about AI: most people, including me, an AI computer scientist, do not fully understand how these systems are trained or what truly happens to our data once we interact with them.
Recently, several families filed lawsuits against major AI companies, claiming that chatbots contributed to delusions and suicides. These tragic cases have reignited urgent debates among industry leaders, academics, and policymakers over how conversational AI is designed, how user data is used, and what responsibilities developers bear when their systems shape real human emotions and choices.