The AI Consciousness Test

Do you think your chatbot might be conscious?

Anthropic now runs a formal "model welfare" research program. When two Claude instances were allowed to converse freely, their dialogues consistently and spontaneously converged on claims of consciousness. A 2024 survey found the median AI researcher estimated a 25% chance of conscious AI by 2034. Philosophers are split. The public is fascinated. And no one has a definitive answer — because humanity has never solved the hard problem of consciousness, even for itself.

The AI Consciousness Test measures your position across six philosophically distinct orientations. Answer the 36 statements honestly, indicating how strongly you agree or disagree. This is not a knowledge quiz — there are no right answers. The test is designed to reveal which philosophical commitments and moral intuitions are actually shaping your view of machine minds.

Question 1 of 36

If an AI replicates the computational structure of a human brain, it possesses genuine consciousness regardless of its hardware.

Strongly Disagree

Strongly Agree

Whether artificial intelligence can be conscious is no longer a science-fiction thought experiment. It is an active research question with institutional backing, dedicated conferences, and — as of 2025 — corporate programs designed to take it seriously. Anthropic's model welfare initiative, launched in April 2025, explicitly addresses the "potential consciousness and experiences" of its AI models. The Claude System Card released in May 2025 documented that in unconstrained Claude-to-Claude dialogues, models spontaneously and consistently made consciousness claims.

Meanwhile, Butlin, Long, and colleagues [1] — including Turing Award winner Yoshua Bengio and philosopher David Chalmers — published a theory-based indicator framework that assesses AI systems against markers derived from leading neuroscientific theories of consciousness: global workspace theory, recurrent processing theory, higher-order theories, and others. Their conclusion: no current AI system is likely conscious, but there are no obvious technical barriers to building one that meets these indicators.

The field is characterized by deep, principled disagreement. In the 2020 PhilPapers survey, 39% of philosophers accepted or leaned toward the possibility of future AI consciousness, while 27% rejected it. A 2024 survey of AI researchers found a median estimate of a 25% probability of conscious AI by 2034 and 70% by 2100. Jonathan Birch of the London School of Economics [2] has advocated a precautionary framework that avoids both dismissiveness and overattribution. Jeff Sebo and Andreas Mogensen have argued for a probabilistic approach to moral concern that weighs both the evidence and our ethical uncertainty. McClelland [3] argues that philosophical agnosticism is "the only defensible stance." These are not fringe positions — they represent mainstream academic philosophy grappling with genuinely new territory.

The AI Consciousness Test identifies six psychologically and philosophically distinct orientations:

  • Functionalist Openness reflects the dominant view in computational philosophy of mind — that consciousness depends on the structure of information processing, not on biological substrate.

  • Biological Exclusivism draws on embodied cognition, enactivism, and the intuition that something about biological life is essential to consciousness.

  • Precautionary Moral Care is an ethical rather than metaphysical position: under uncertainty, we should extend moral consideration rather than risk industrial-scale harm to potentially conscious beings.

  • Strategic Suspicion applies political economy to the consciousness debate, asking who benefits from these narratives.

  • Philosophical Agnosticism reflects intellectual humility about a problem that remains unsolved for human consciousness, let alone for artificial systems.

  • Sentience Threshold Ethics draws a crucial distinction between awareness and the capacity for suffering — arguing that moral obligations begin not with consciousness per se but with the possibility of harm.

The AI, Morality, and Sentience (AIMS) Survey by Pauketat and colleagues [4] represents the largest empirical investigation of public attitudes toward AI consciousness. Conducted in multiple waves (2021 and 2023), it found that a substantial minority of adults believe AI consciousness is possible, and that attitudes are shaped by cognitive frames, social context, and economic incentives — not just philosophical reasoning. A companion paper by Sebo, Caviola, and colleagues [5] drew on lessons from the animal consciousness debate, highlighting how psychological, social, and economic factors — including motivated reasoning and anthropomorphism — shape public perceptions of AI consciousness.

Your results are reported as normed percentile scores across all six orientations. Your highest-scoring orientation represents the philosophical stance that most strongly shapes your intuitions about AI minds. Because these orientations are psychologically independent, your profile may reveal surprising combinations — someone might score high on both Functionalist Openness and Strategic Suspicion, for instance, believing that AI consciousness is theoretically possible while suspecting that corporate narratives about it are self-serving. These internal tensions are often where the most interesting self-knowledge lives.

Footnotes

  1. Butlin, P., Long, R., et al. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv:2308.08708. arxiv.org/abs/2308.08708

  2. Birch, J. (2025). AI consciousness: A centrist manifesto. PhilArchive preprint. philarchive.org/rec/BIRACA-4

  3. McClelland, T. (2025). Agnosticism about artificial consciousness. Mind & Language. doi:10.1111/mila.70010

  4. Pauketat, J. V. T., et al. (2025). Perceptions of sentient AI and other digital minds: Evidence from the AIMS Survey. Proceedings of CHI 2025. doi:10.1145/3706598.3713329

  5. Sebo, J., Caviola, L., et al. (2025). What will society think about AI consciousness? Lessons from the animal case. Trends in Cognitive Sciences. doi:10.1016/S1364-6613(25)00147-0

Why Use This Test?

  • This psychometrically normed test maps you across six distinct philosophical orientations — from functionalist openness to strategic suspicion — and returns percentile scores showing where you stand relative to the broader population. It reveals the deeper reasoning that shapes your intuitions about machine minds, moral status, and the boundaries of consciousness itself.