27 comments

  • tsimionescu 32 minutes ago
    I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation, while they easily accept that consciousness is a physical process.

    Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: a form of computation that provably happens as a physical, completely observer-independent process.

    Sure, an entity other than a human might interpret it as something completely different from a door opening when someone is near - but the measurable physical effect would be exactly the same: the same change in momentum and position of the atoms in what we call the door, based on the relative position of some other atoms and the sensor.
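
    To make that concrete, here's roughly the control logic such a door implements (a toy Python sketch; the function name and threshold are invented purely for illustration):

        def door_controller(distance_cm):
            # A sensor reading crosses a threshold and a motor fires.
            # The physics does this whether or not any observer labels
            # it "computation".
            return distance_cm < 150.0  # True -> open the door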

    • Maxatar 11 minutes ago
      >I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation

      The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.

    • twosdai 23 minutes ago
      Really great point. I have wondered that as well.

      Even weirder to me is that when a person does the computation on a board or paper or whatever medium, it's still computation. This time the physical medium doing the work is the human and their brain.

      If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.

  • GMoromisato 32 minutes ago
    I think this is a circular argument. It defines a separation between computation and experience (between the abstraction and the "mapmaker") and then concludes that computation cannot be experience because they are in separate categories.

    There are really only two solutions to the Hard Problem of Consciousness:

    1. Consciousness is an unknown physical something (force/particle/quantum whatever).

    2. Consciousness is an illusion. It is the software telling itself something.

    [Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question: what emerged? Is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]

    The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?

    #2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
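
    To see the point about the float: the very same 32 bits are a large integer or pi depending only on how we choose to read them (a quick Python illustration, using one fixed bit pattern):

        import struct

        bits = 0x40490FDB  # one fixed 32-bit pattern
        as_int = bits  # read as an integer: 1078530011
        as_float = struct.unpack('>f', struct.pack('>I', bits))[0]  # read as IEEE 754: ~3.1415927
        # Nothing in the circuit changes; only our interpretation does.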

    • abeppu 0 minutes ago
      I think #2 risks being incoherent unless you define things very carefully.

      "Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?

      > Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.

      But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the LED case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.

    • brotchie 1 minute ago
      I originally rejected the paper's premise, but I get it now; it certainly made me question my belief that consciousness binds to any arbitrary information processing that's of sufficient complexity.

      The author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).

      In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).

      My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.

      The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI. The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.

    • neosat 16 minutes ago
      Agree with your points on the primary two questions and the circular argument in the original article. However, re: "How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?" - that's an interesting question but not necessarily a fundamental refutation of #1. If you start with #1, "Consciousness is an unknown physical something (force/particle/quantum whatever)", then it has 'perceivable' properties of its own, different from those of its constituent atoms or electrons. A toy example is the 'wetness' of water. If you only look at atoms and molecules with no way to 'experience' water, then it's hard to conceive how water can have such properties (though in the case of water it is tractable).

      Consciousness *may* be something similar. If it is (e.g. the purest form of energy) then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it.

    • energy123 3 minutes ago
      It defines a separation between computation and abstraction and then concludes that computation cannot be experience because the abstraction is a product of our minds rather than an intrinsic property of the system.
    • exitb 2 minutes ago
      > Consciousness is an illusion. It is the software telling itself something.

      An illusion is a misinterpretation, which implies an observer. Who’s the observer then?

    • vsri 21 minutes ago
      I resonate with this. I think some folks will object to the word "illusion" and its connotations, but I think it is resolved with:

      1. Consciousness is a material thing (that we haven't found yet)

      2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus it cannot be "known")

      2 is the weirder proposition of course. It asserts a category of things that can't be conceived, but of course it feels like we are talking about it because we are using words to contain it. But of course, the words have no direct referent. That's the illusion.

      • TimTheTinker 7 minutes ago
        2 is only weirder if you don't already accept non-material reality, i.e. the proposition that there exist real things that are not themselves composed of matter and/or energy.

        That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality.

        The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"?

    • Exoristos 24 minutes ago
      4. It is ἐνέργεια, direct spark, of the God. It can be described but not comprehended, imitated but not replicated.
    • renticulous 13 minutes ago
      With the emergence argument, I have the following retort.

      How can something emerge if it wasn't embedded or hidden within the system already?

    • polotics 7 minutes ago
      There are many possible points, e.g. what happens if you rephrase your solution 2 by swapping the terms?
    • dsign 17 minutes ago
      Hm. It only takes a life of study and a lot of pain to understand that #2 is the thing. But most of us get to experience the latter without experiencing the former, so for most people #1 is the preferred option.

      #1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God.

    • 0xBA5ED 14 minutes ago
      "It defines a separation between computation and experience" Does it? Or does it separate two forms of computation (or two forms of experience)? Isn't it just saying a GPU can't be a brain and a brain can't be a GPU? That the entirety of a thing's experience can't be replicated on a different substrate, only simulated. The substrate does fundamentally dictate the ultimate experience (or lack thereof) of the thing that computes within it.
    • colordrops 17 minutes ago
      What is a "real" thing and not an "illusion" if you go with #2? Is a car a real thing, or just a collection of atoms? Is an atom a real thing? Or a collection of processes? Is it not turtles all the way down? What is "real"?
  • metalcrow 44 minutes ago
    I've attempted desperately to understand this paper after thoroughly reading it and have made 0 progress. Can anyone who does understand it attempt to explain?

    Currently my understanding is that this paper is claiming that "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a mapmaker which is something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that related to consciousness.

    EDIT: the paper seems to be assuming that something simulating a mapmaker, or the process of doing it, can by nature not be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is doing simulation vs actually instantiating it? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts, they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.

    • ReadEvalPost 6 minutes ago
      I've tried to explain this paper to people in similar circumstances and have also struggled!

      In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built computers and alphabetized the world, we can understand them in terms of discrete state transitions.

      I expect my explanations have been unsatisfying, as we can immediately move to seeing metabolism as some alphabetized input/output system that can be placed back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.

    • jstanley 40 minutes ago
      They're defining consciousness ("mapmaker") to exist outside the AI, and then showing that AI can't meet their definition of consciousness.
      • jsdalton 28 minutes ago
        Yes, and it immediately called to mind for me the phrase “the map is not the territory.”

        Put another way: no matter how detailed or "perfect" you make a map, it will never be the territory, i.e. the thing that is mapped.

        Computers and AI are like a map in this regard - just ones and zeros that we have assigned meaning to arbitrarily. No matter how "good" AI gets, it's still just a map of the thing, not the thing itself.

        So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.

        • bee_rider 16 minutes ago
          If you make a big enough map you can fly it over and drop it on the territory I guess. Then does it become the territory?
    • GMoromisato 25 minutes ago
      It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

      Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

      But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.

      • mannykannot 0 minutes ago
        On the other hand, an accurate digital simulation of a mechanical calculator really does calculate. The "a simulation is not the real thing" objection breaks down when the function is information processing, on account of information's substrate independence.
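
        For instance, a bit-level simulation of a ripple-carry adder genuinely adds (a toy Python sketch, purely illustrative):

            def full_adder(a, b, carry):
                # One simulated hardware stage: sum bit and carry-out.
                s = a ^ b ^ carry
                c = (a & b) | (carry & (a ^ b))
                return s, c

            def add(x, y, width=8):
                # Ripple the carry through the simulated circuit.
                carry, out = 0, 0
                for i in range(width):
                    s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                    out |= s << i
                return out

            assert add(2, 3) == 5  # the "simulation" really calculates
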
      • ribosometronome 20 minutes ago
        >A simulation of a hurricane is not a hurricane

        If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds of 75+ mph, the difference could end up being fairly unimportant to those in the simulation's path.

        Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.

      • metalcrow 22 minutes ago
        Yep, that's about what I managed to get out of it as well. If you define AI as a simulation of a mapmaker, it can't be a real mapmaker. But they are never able to prove that it IS only a simulation, instead of an actual mapmaker.
      • CamperBob2 23 minutes ago
        Also, since there's no way to prove that we're not entities in a simulation of something else, the argument runs out of steam in the opposite direction as well.
    • renticulous 16 minutes ago
      Currently our understanding of living systems is that they have to inhabit the body. What if tomorrow we found an alien race that is like a drone operator operating a drone - somewhat like the Na'vi controlling other animals, but wireless? Would we change our definition of consciousness if the brain (command and control centre) and the body (physical execution) were distinct systems? This argument was made by Daniel Dennett.
    • harpiaharpyja 31 minutes ago
      I'm only partway through, but I believe one of the foundational blocks is that computation is fundamentally an interpretation of physical events, not something that can just exist by itself.
  • mannykannot 8 minutes ago
    There's interesting commentary on this paper from Maggie Vale here: https://substack.com/home/post/p-194580145

    One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as being conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."

  • Anon84 26 minutes ago
    I would argue that, before we can begin to address whether or not AI can instantiate consciousness, we should agree on a practical, unequivocal definition of what consciousness is... and I think we're still pretty far from that milestone... Until then, this kind of argument is nothing more than pipe dreams, solipsism, and idle philosophising.
  • chistev 6 minutes ago
    But what is consciousness?

    The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in Biology is - what is consciousness and why did it emerge?

    WHAT IS CONSCIOUSNESS?

    "Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."

    WHY DID CONSCIOUSNESS EMERGE?

    He speculates that consciousness must have been a product of our ancestors having to create a model of the world which they inhabited.

    To be able to think ahead (even if it's just one step into the future), and plan for eventualities must have led to the development of consciousness which gradually improved from its primitive form to the type of consciousness we now have.

    "Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress-if there is a model of the model, why not a model of the model of the model...?"

    The quoted passages are from his book, The Selfish Gene.

    Richard regards consciousness as a really great puzzle.

    https://www.rxjourney.net/extraterrestrial-intelligence-and-...

  • jampekka 20 minutes ago
    If I understand this correctly based on a quick read, it argues that subjective experience arises at the (or in the) "alphabetization" process where continuous physical states (e.g. voltage) are mapped to discrete logical states (roughly like e.g. a bit) or "concepts" (figure 2).
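
    Concretely, I read "alphabetization" as something like a comparator mapping a continuous voltage onto a discrete symbol (a toy Python sketch; the thresholds are invented for illustration):

        def alphabetize(voltage):
            # Map a continuous physical state onto a discrete "letter".
            if voltage > 2.0:
                return 1
            if voltage < 0.8:
                return 0
            return None  # physically real, but outside the alphabet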

    Per this reading, implementing something in an ASIC would make it have (a different) experience, as opposed to a CPU/GPU. Not sure what the case would be for FPGAs.

    It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and e.g. denies experience that isn't discretizable into concepts. Or at least the system producing such concepts seems to be necessary; not sure if some "non-conceptual experiences" could form in the alphabetization process.

    It reads a bit like a more rigorous formulation of Searle's "biological naturalism" thesis, the central idea being that experience cannot be explained at the logical level (e.g. porting the exact same algorithm to a different substrate wouldn't bring the experience along in the process).

  • dang 55 minutes ago
    Related: The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness - https://news.ycombinator.com/item?id=47835950 - April 2026 (52 comments)

    (That one didn't make the frontpage, so we won't treat it as a dupe. - https://news.ycombinator.com/newsfaq.html)

  • jdw64 28 minutes ago
    If I understand the paper correctly, it does not really argue against highly capable general AI. It argues against conflating capability with phenomenology.

    That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.

  • awei 21 minutes ago
    If we agree that consciousness is a physical process that is part of our universe, I think the better and simpler question is whether or not computers can simulate any physical process. Currently quantum processes might still be a frontier, but quantum computers and their hardware should allow us to simulate them.

    If we can simulate any physical process, it then becomes more philosophical in my opinion: whether the simulation is the same as the real thing, even though it behaves exactly the same. It becomes the same kind of question as, for example, whether or not your teleported self is still you after having been dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.

  • dybber 53 minutes ago
    Reminds me of Peter Naur's Turing Award lecture: https://video.ku.dk/video/12592041/turing-laureate-peter-nau...
  • xnx 26 minutes ago
    Reasonable place to mention that Google Deepmind now has a philosopher on staff: https://x.com/dioscuri
  • throwaway713 18 minutes ago
    Bold title for something from DeepMind. I thought a crank submission slipped onto the front page somehow. I guess the next paper will be “Why AI cannot instantiate God”?
  • neom 31 minutes ago
    But a robot doing closed-loop RL in the world is its own mapmaker, no? I feel like you'd need to answer: at what point does a system whose representations are shaped by its own causal history with the world stop counting as a mere simulation?
  • jstanley 48 minutes ago
    This is one of those papers that uses a lot of big words to paper over the fact that it's really a philosophical opinion rather than a logical argument.
    • RobRivera 46 minutes ago
      From my point of view

      The Jedi

      Are not nice

  • noiv 36 minutes ago
    Well, not sure whether humans have a consciousness, but very sure they want one.
  • michaelmrose 15 minutes ago
    I do not feel enlightened for having read this, and I don't feel like the points that are true are useful, or that what appears useful is true.
  • slopinthebag 21 minutes ago
    Pretty crazy how the author's 10+ years of academic research in computational neuroscience + 14 years with DeepMind is not enough to make claims on this topic, but Hacker News commentators know better after quickly skimming the abstract. This was posted barely ~30 minutes ago, and yet commentators are already outright dismissing it based on their own (probably incorrect) interpretation of the title and abstract.
  • dboreham 33 minutes ago
    Any such paper will turn out to be wrong.

    I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545

  • jyounker 40 minutes ago
    Yawn. We have no understanding of what consciousness actually is. Therefore whether a system can or cannot be conscious is something we can't prove or disprove at this point.
    • kelseyfrog 22 minutes ago
      I'd go a step farther than that. Consciousness sits in the same social location as Nous or Chi did for ancient Greek and Chinese societies. We've dressed it up in scientific language but likewise other cultures used an authoritative register to talk about their mental mysteries.

      My point is that this is a category problem. We have a name for a social ontological relation and we're desperately searching for physical evidence for it in order to justify its existence. Why? It's like searching for physical evidence of property ownership, physical evidence for the value of money, or physical evidence of friendship. These things exist in our minds. That's fine. The drive to reify is real, but we can choose not to do it.

    • revetkn 20 minutes ago
      I find papers like this strange for the same reason. Maybe I'm missing something...
  • FrustratedMonky 58 minutes ago
    Doesn't this still presume that we understand our own consciousness, in order to make the comparison?

    Where does our survival instinct come from? And why couldn't AI have one?

    >>>Additional

    Also, reproduction. Humans are basically just Food, Sex, Survival. And consciousness is just a rule set for fulfilling those goals. So if a NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?

    • nzeid 48 minutes ago
      The paper isn't saying "AI can't have one"; it's saying (very approximately) that behavioral mimicry is not the path to one.
      • FrustratedMonky 37 minutes ago
        That is a good point.

        Just wondering: once an 'AI model of some form' is in a physical body - a 'robot' - and is provided with some rules about survival so it doesn't fall into a hole, then after a series of these events, does it matter? Does mimicry become reality, or no longer differentiable?

        Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know that the internal state of the 'real' one is different from the 'mimicked' one?

        • nzeid 13 minutes ago
          The paper isn't concerned specifically with survival. It's saying that you cannot achieve "abstraction" (presumably the structure that underlies critical thinking, creativity, etc.) through sheer mimicry.

          Again, just echoing the paper here. I don't know that I'm doing it justice.

    • yannyu 40 minutes ago
      If AI has a survival instinct, then we should theoretically see evidence of it if we construct the right environment for AI to express it. Animals and cellular organisms demonstrate a survival instinct under the right conditions, so we would have to find equivalent conditions for a hypothetical machine intelligence.

      Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?

      • drxzcl 35 minutes ago
        I can make an AI system with a survival instinct right now. Of course, all that will do is make people tell me “it’s not a proper survival instinct” or move the goal posts and tell me I need yet some other property.

        This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.

        Consciousness is not a concept that can be rendered operational.

        • Ekaros 28 minutes ago
          I can make a state machine that acts like it has a survival instinct. But it certainly isn't something we would consider conscious. So I am not exactly sure how good most tests are.
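
          For instance (a toy Python sketch; the states and threshold are invented for illustration):

              def step(threat_nearby, energy):
                  # A trivial state machine with survival-shaped behaviour:
                  # flee threats, feed when depleted, otherwise wander.
                  if threat_nearby:
                      return "FLEE"
                  if energy < 0.2:
                      return "FEED"
                  return "WANDER"
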
          • drxzcl 24 minutes ago
            But what would we consider conscious?

            My position is that there is no actual, definitive answer to that question, and therefore it makes no sense engaging with the concept.

      • FrustratedMonky 33 minutes ago
        That is the entire plot of 'Ex Machina'.

        There are plenty of people who say AI has already displayed a survival instinct, by threatening users if they talk about shutting it down, or by using a market or blackmail to get funds to source an external machine to run on.

        There are a bunch of articles proclaiming AI is trying to break out. Can't find a real study on it.

        https://www.wsj.com/opinion/ai-is-learning-to-escape-human-c...

    • colordrops 49 minutes ago
      Asking humans to discuss consciousness is like asking Super Mario to discuss screen pixels. We have no freaking idea. Everyone on all sides, physicalists, idealists, and everything in between are all full of it.
  • aaroninsf 40 minutes ago
    Somewhat comically IMO, the abstract very directly and literally denies the titular claim. It states:

    > [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.

    This may well be true—I think it is.

    I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency, an area in which "world models" are getting a lot of attention.

    When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.

    In other words, they will look—and increasingly, sound—a lot like us.

    It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.

    But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.

    An interesting time to be an agent with a phenomenology, is it not?

    • saulpw 11 minutes ago
      How will we know when an AI system has phenomenology (i.e. has "experience", is sentient)? The only reason we presume that other humans have it is because we each personally experience it within ourselves, and it would be arrogance writ large (solipsism) to think that others of the same species do not.

      We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?

      Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?

  • drxzcl 55 minutes ago
    [flagged]
  • ChaitanyaSai 30 minutes ago
    Consciousness is an engineering problem not a philosophical one. How do you get a tiny fraction of the many billion experiences that cohere to create your self to listen to, and decide what sensory data to turn into your next experience?

    The engineering problem is that this decentralised, moment-to-moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget).

    You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective

    https://saigaddam.medium.com/consciousness-is-a-consensus-me...