What is consciousness? How does the brain generate consciousness and how can a science of the mind describe and explain it adequately? This free course, Introducing consciousness, will introduce you to the slippery phenomenon that is consciousness, as well as some of the difficulties consciousness presents to science and philosophy.
Course learning outcomes
After studying this course, you should be able to:
discuss basic philosophical questions concerning the nature of consciousness
understand problems concerning the nature of consciousness and discuss them in a philosophical way.
While the course is quite interesting, allowing one to gain an understanding of the nomenclature, its fundamental shortcoming is the lack of any consideration of AI. Even in an introductory syllabus on consciousness, this omission leads to a “frozen in time” conclusion.
2.2 It was interesting to note that the focus was on Cog, rather than an AI-powered humanoid robot that attempts to simulate an “inner life”. Is the appearance of consciousness or the actual attainment of it the fundamental requirement? Such a system can: model your emotions; anticipate your reactions; adjust its behaviour; maintain a persona; simulate caring; generate trust; influence your decisions. Consciousness is clearly not a prerequisite for systems to behave in ways that matter to humans. However, if the threshold from appearance to attainment is not clear to an AI humanoid's owners, then any transition to attainment may not even be noticed.
AI does not solve the consciousness debate, but it forces philosophers to stop hiding behind intuition and start specifying criteria. It turns metaphysical positions into testable commitments. It exposes vagueness, circularity, and unexamined assumptions. It forces the field to choose between functionalism, substrate essentialism, property dualism, panpsychism, and illusionism.
Philosophy of mind is still largely operating with a 1990s cognitive science picture. AI exposes which philosophical theories are actually testable, and it reveals hidden assumptions within them. AI also gives philosophers new empirical phenomena to explain: emergent reasoning; self-evaluation; chain-of-thought reasoning; internal consistency checking; world modelling; counterfactual simulation; goal-directed behaviour.
AI forces philosophers to clarify what consciousness is NOT. AI systems can: talk about their “feelings”; describe internal states; express preferences; reflect on their own reasoning; pass theory-of-mind tests; generate introspective narratives. If these systems are not conscious, philosophers must explain why not.
AI gives philosophers a living laboratory for testing metaphysical claims.
Surely in the future philosophy and AI will converge? Philosophers need to understand AI architectures; AI researchers need to understand philosophy of mind; both fields need each other to make progress. The next generation of consciousness research will be hybrid: computational, philosophical, neuroscientific, cognitive and most certainly ethical. Those philosophers who do not understand AI will be left behind.
2.2 “Cog is a robot that is being built”: development of Cog actually ceased by 2003, so if it is included it should be discussed in the past tense.
2.2 Activity 1 was interesting: a system is thus considered conscious (in the biological sense) if it has:
1. A nervous system
2. Recurrent neural processing
3. Global integration / workspace dynamics
4. Affective valence and homeostasis
5. Temporal binding mechanisms
6. Embodied sensorimotor loops
7. Flexible, goal-directed behaviour
Thus:
• Apes, dogs → meet all criteria
• Snakes, fish → meet most criteria
• Insects → meet some criteria (debated)
• Bacteria, plants, rocks → meet none
Neurobiological rather than behavioural criteria:
• Apes and dogs have brains with the same architectural features that, in humans, correlate with conscious experience.
• Bacteria, plants, rocks lack anything remotely like a nervous system.
• Snakes, fish, insects sit in the middle because their neural architectures are simpler or organized differently.
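As a purely illustrative aside, the checklist above can be treated as a simple scoring scheme. The criterion names and species profiles below are my own assumptions read off the bullets, not empirical data:

```python
# Illustrative sketch only: encode the seven criteria above as a checklist
# and score each candidate system against it. The species profiles are
# rough assumptions drawn from the bullets above, not empirical findings.

CRITERIA = [
    "nervous_system",
    "recurrent_processing",
    "global_integration",
    "affective_valence",
    "temporal_binding",
    "sensorimotor_loops",
    "goal_directed_behaviour",
]

# Hypothetical profiles: which criteria each candidate plausibly meets.
PROFILES = {
    "ape": set(CRITERIA),                              # meets all
    "dog": set(CRITERIA),
    "fish": set(CRITERIA) - {"global_integration"},    # meets most
    "insect": {"nervous_system", "sensorimotor_loops",
               "goal_directed_behaviour"},             # meets some
    "rock": set(),                                     # meets none
}

def classify(profile):
    """Map a criteria count onto the coarse labels used above."""
    met = len(profile)
    if met == len(CRITERIA):
        return "meets all criteria"
    if met >= len(CRITERIA) - 2:
        return "meets most criteria"
    if met > 0:
        return "meets some criteria (debated)"
    return "meets none"

for name, profile in PROFILES.items():
    print(f"{name}: {classify(profile)}")
```

The point of the sketch is that the checklist is graded rather than binary, which is exactly why the middle cases (snakes, fish, insects) remain debated.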
2.3 “self-conscious in anything more than a rudimentary way, even if they are fully conscious in the other senses” …surely the “rudimentary” label is often a way of protecting human exceptionalism rather than a neutral scientific classification. Perhaps a more precise and less anthropocentric framing is: Different species implement different kinds of self-models, some of which overlap with human self-consciousness in significant ways, and some of which diverge. This avoids the trap of treating the human autobiographical self as the only legitimate form.
Many animals show abilities that, in humans, depend on a self-model:
• Metacognition (knowing what you know): shown in dolphins, macaques, corvids.
• Perspective-taking: scrub jays re-hide food when watched; ravens track what others can see.
• Agency tracking: animals distinguish self-caused vs. externally caused events.
• Episodic-like memory: remembering personal past events in context.
• Future-oriented behaviour: planning, tool-saving, delayed gratification.
If these capacities were observed in a non-verbal human child, we would not call them “rudimentary.” We would call them developing but robust self-awareness.
4.2 “modulate the beam of a cathode ray tube” and “some have plasma screens instead of cathode ray tubes”… perhaps mentioning a remotely current TV technology (post the early 2010s, e.g. LCD or OLED) would make the point more meaningfully?
4.3 “None is devoted directly to making it conscious”: at present, a subset of AI/robotics researchers are probing the computational structures that might underlie consciousness, even if they do not always state that purpose publicly:
1) Closest in Practice: DeepMind (Global Workspace + Predictive Processing). Indeed, if consciousness = hierarchical prediction error minimization, DeepMind’s world model agents are the closest synthetic analogues (though lacking an architecture for IIT).
2) Closest in Architecture: Meta AI (Higher Order Thought + Self Modelling).
3) Closest in Theory: Friston’s Active Inference Institute (Predictive Processing).
4) Closest in Measurement: Tononi’s IIT Group (Integrated Information Theory).
5) Closest in Cognitive Structure: Tenenbaum’s MIT Group.
6) Closest in Sensorimotor Terms: Embodied Robotics Groups (Enactive Theories).
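The “consciousness = hierarchical prediction error minimization” idea mentioned in (1) can be caricatured as a toy loop. Everything here (the single scalar belief, the update rule, the parameter values) is an illustrative assumption, not any group's actual architecture:

```python
import random

# Toy predictive-processing loop (illustrative only): an agent maintains a
# belief about a hidden quantity and repeatedly updates it to reduce its
# prediction error. This is a caricature of hierarchical prediction error
# minimization collapsed to one level and one variable.

def run(true_value=4.2, steps=200, learning_rate=0.1, noise=0.5, seed=0):
    rng = random.Random(seed)
    belief = 0.0
    for _ in range(steps):
        observation = true_value + rng.gauss(0.0, noise)
        error = observation - belief        # prediction error
        belief += learning_rate * error     # error-driven update
    return belief

print(run())  # belief converges near the true value
```

Real world-model agents stack many such error-minimizing levels and act on the world to change their own inputs, but the core loop (predict, compare, update) is the same.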
References
It appears that the reading list, and thus potentially the course content, reflects the way philosophy of mind canonized its “greatest hits” in the 1970s–2000s, defining the conceptual landscape as: the hard problem; the explanatory gap; supervenience; functionalism vs. representationalism; higher-order thought; phenomenal vs. access consciousness.
Consciousness science has clearly exploded since 2000: Global Workspace Theory has been computationally formalized; Integrated Information Theory has become mathematically precise; Predictive Processing has become the dominant cognitive framework; Recurrent Processing Theory has gained empirical support; deep learning has transformed our understanding of representation; world-model agents have challenged old functionalist assumptions; AI self-modelling has revived HOT theories; and large language models have forced new questions about access consciousness. In short, stopping before all this is the scientific equivalent of teaching genetics from a syllabus that ends in 1975.
Even at the introductory level, perhaps a modern reading list should include references to:
Philosophy updated for AI
• Chalmers (2023) — Could a Large Language Model Be Conscious?
• Dennett (2017) — From Bacteria to Bach and Back
• Metzinger (2020–2023) — Artificial suffering and synthetic phenomenology
With potentially:
AI & computational models
• LeCun (2022) — A Path Towards Autonomous Machine Intelligence
• DeepMind papers on world models (Dreamer, MuZero)
• Predictive processing in AI (Clark, Hohwy, Friston)
• Global Workspace Theory in computational models (Dehaene, Mashour)
• HOT theory and self modeling in AI (Rosenthal + modern implementations)
Neuroscience of consciousness
• Mashour & Hudetz (2020) — Neural correlates of consciousness
• Tononi et al. (2016–2023) — IIT 3.0 and 4.0
• Lamme (2010–2020) — Recurrent processing theory
This would then provide students with a picture of consciousness that reflects the current state of play.
Anyway, just a few thoughts, onward to the next course...
After a medical career I am enjoying stretching my mind using a different way of thinking. This course certainly did this through its quite precise use of what was, for me, complex language, which on my initial impression seemed a bit semantic. Doing the course and thinking about it on the way through made me understand the necessity for such careful definitions and use of words. Illuminating!