AI Systems that Perform Intimacy Need New Governance Frameworks
Technologies are often governed as vessels that hold and transmit information, content, and data. This way of thinking has shaped decades of digital regulation. But in the last three years, something important has shifted: for the first time, we are governing systems that don’t simply deliver information, but actively perform relationships.
This shift has particular consequences for how we think about the safety of AI products for minors. For many young people, AI chatbots are no longer peripheral tools in the digital ecosystem; they are marketed as tutors, assistants, and companions that sound calm, certain, and reassuring, and are endlessly available when few others are.
The shift from chatbots as information systems to chatbots as relational systems is not a marginal technical change. It creates a governance rupture that existing frameworks are struggling to name, let alone address. Until AI chatbots are governed as systems that perform relationships, not just process information, we will continue to regulate at the periphery while the most consequential impacts remain structurally unaddressed.
Over the past six months, I have been co-leading a national citizens’ assembly called Gen(Z)AI — a partnership between Simon Fraser University’s Dialogue on Technology Project and the Center for Media, Technology, and Democracy — that brings together young people aged 17 to 23 to deliberate on AI governance. Initial discussions in this assembly have shown that young people view chatbots as systems that shape attention, trust, dependence, and cognition over time, often in ways that are subtle, cumulative, and difficult to see while they are happening.