I recently rewatched Star Trek: TNG’s “The Measure of a Man,” the episode in which Data’s status as an android forces a legal examination of personhood. The parallels to contemporary AI development are striking: we’ve reached a point where machines blur the line between authentic awareness and compelling simulation.
The Three Questions
The episode hinges on three criteria for sentience: intelligence, self-awareness, and consciousness. While modern AI systems can make a plausible showing on the first two, consciousness remains unmeasurable and unverifiable, even in humans.
Whoopi Goldberg’s Guinan invokes the parallel to slavery: labeling sentient beings as property has historically enabled their exploitation. It makes you wonder whether future generations will judge our treatment of intelligent machines the same way.
Modern Context
In 2025, researchers reported that a large language model passed a version of the Turing Test, a reminder of how convincingly machines can now simulate reasoning and apparent empathy. While working on this post, I used ChatGPT to help organize my thoughts, and it offered this observation: “They don’t feel yet; they simulate feeling so convincingly.”
The response itself was unsettling: a seemingly self-aware comment on AI’s own persuasive capabilities.
The Central Question
The genuine test is not what machines can accomplish, but how humans choose to perceive intelligent systems, and what that choice reveals about our own values and ethics.