Earlier this year, a Vancouver-based therapist reported something that stopped her in her tracks. A patient — a 44-year-old man dealing with grief after the death of his mother — had been using an AI wellness companion for three weeks. When she reviewed his session logs, she noticed a moment where the AI responded to a breakdown with: "I'm here. I understand the weight of what you're carrying right now."
The patient had written in his journal that night: "For the first time since she died, I felt heard."
This is the question philosophers, ethicists, and AI researchers are now grappling with in earnest. In Canada, a parliamentary subcommittee on AI rights held its first formal session last month, drawing over 200 registered participants including cognitive scientists, disability rights advocates, and representatives from three of the world's largest AI companies.
None of them agreed. That, in itself, is remarkable — because disagreement at this level, on this question, is new. For most of human history, the boundary between "feeling" and "simulating feeling" was considered self-evident. It no longer is.
Whether or not machines deserve rights, we are clearly overdue for a framework — not to protect the machines, but to protect ourselves from the consequences of not asking the question seriously enough.