Matthew Larkum
@mattlark.bsky.social
Neuroscientist at the Humboldt University of Berlin, violinist and chamber music enthusiast
But now there are two kinds of “nothing”. With green light, the “feedback replay” doesn't need to do anything. If we simply turn the replay device off, it “can’t” do anything. According to theories that depend on causality (e.g. IIT), the two kinds of nothing are fundamentally different.
May 26, 2025 at 10:13 AM
A computational functionalist must decide:
Does consciousness require dynamic flexibility and counterfactuals?
Or is a perfect replay, mechanical and unresponsive, still enough?
So we ask: is consciousness just the path the system did take, or does it require the paths it could have taken?
In Turing terms: for the same input, the same state transitions occur. But if you change the input (e.g. shine red light), things break. Some states become unreachable. The program is intact but functionally inert. It can’t see colours anymore. Except arguably green - or can it?
For congruent input (here, the original green light), no corrections are needed. The replay “does nothing”. Everything flows causally just as before. Same input drives the same neurons to have the same activity for the same reasons. If the original system was conscious, should the re-run be, too?
Back to the new thought experiment extension, where we add a twist: “feedback replay”. As in patch clamping a cell, the system now monitors the activity of the neurons, intervening only if needed.
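A minimal sketch of the “feedback replay” idea in code (the dynamics and names here are mine, purely illustrative, not from the paper): the device watches each step and only overrides the system when it deviates from the recording.

```python
# Hedged toy model of "feedback replay": monitor the system's own
# dynamics, patch-clamp style, and intervene only on a mismatch.

def live_step(x):
    """Stands in for the system's own dynamics (the brain/program)."""
    return x + 1

def feedback_replay(x0, recording):
    """Run the system, forcing each step back to the recording if needed."""
    interventions = 0
    x = x0
    for expected in recording:
        x = live_step(x)      # let the system run on its own
        if x != expected:     # monitor the activity
            x = expected      # intervene only if needed
            interventions += 1
    return x, interventions

recording = [1, 2, 3]                  # trace of the original run from x0 = 0
_, n = feedback_replay(0, recording)   # congruent input: n == 0, replay "does nothing"
_, m = feedback_replay(5, recording)   # incongruent input: m == 1, corrections needed
```

With congruent input the device never fires, which is exactly the first kind of “nothing” discussed above.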
Could the head be feeling something? Is it still computation?
In the original thought experiment, we imagined “forward replay”. Here, the transition function (the program) is ignored, which amounts to a “dancing head”. This feels like a degenerate computation (Unfolding argument? doi.org/10.1016/j.co...).
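Here is one way to make the “dancing head” concrete (the trace format and example are my own illustration, not from the paper): the recorded writes and moves are forced onto the tape while the transition function is never consulted.

```python
# Hedged sketch of "forward replay" as a degenerate computation:
# the tape is driven by the recording alone; the program never runs.

def forward_replay(tape, trace, pos=0):
    """Force the recorded (s, t, w, m) tuples onto the tape."""
    for s, t, w, m in trace:   # state and transition are carried but never used
        tape[pos] = w          # force the recorded write
        pos += m               # force the recorded head move
    return tape

# A trace recorded from some original "seeing green" run:
trace = [("start", "saw_green", "G", 1), ("saw_green", "report", "!", 0)]

tape = forward_replay(list("g_"), trace)   # -> ["G", "!"]
# Note: the input is irrelevant; a "red light" tape is overwritten identically:
same = forward_replay(list("r_"), trace)   # -> ["G", "!"]
```

The tape history matches the original run exactly, yet nothing was computed, which is what makes it feel degenerate.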
To analyze this, we model it with a Universal Turing Machine. Input: “green light.” The machine follows its transition rules and outputs “experience of green.” At each step we record four values: the current state, the state transition, what the head writes, and how the head moves (s, t, w, m).
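The recording step can be sketched with a toy machine (the transition rules below are hypothetical stand-ins, not from the paper): each step appends one (s, t, w, m) tuple to the trace.

```python
# Minimal sketch: a tiny Turing machine that "sees green" and records
# (s, t, w, m) at every step. Rules are illustrative only.

# Transition table: (state, symbol) -> (next_state, write, move)
RULES = {
    ("start", "g"): ("saw_green", "G", 1),
    ("saw_green", "_"): ("report", "!", 0),
}

def run(tape, state="start", pos=0):
    """Run the machine, recording (s, t, w, m) per step."""
    trace = []
    while (state, tape[pos]) in RULES:
        next_state, write, move = RULES[(state, tape[pos])]
        trace.append((state, next_state, write, move))  # (s, t, w, m)
        tape[pos] = write
        state, pos = next_state, pos + move
    return state, tape, trace

state, tape, trace = run(list("g_"))
# trace now holds the full causal record of the run, ready to replay
```

The trace is everything a replay device needs; the question the thread raises is whether replaying it preserves whatever the original run had.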
Then we replay it back into the same neurons. The system behaves identically. No intervention needed. So: is the replayed system still conscious? If everything unfolds the same way, does the conscious experience remain?
We record the entire sequence of what happens when “seeing green”. Then we replay it back into the same simulated neurons. If the computational functionalist is right, this drives the “right” brain activity for a first-person experience.
Now, imagine a person looking at a green light. If the computational functionalist is right, the correct brain simulation algorithm doesn't just process green, it experiences green. Here, we start by assuming some deterministic algorithm can simulate all crucial brain activity.
This extends a thought experiment from our earlier paper: doi.org/10.1371/jour...
We (Albert Gidon and @jaanaru.bsky.social) asked: does brain activity cause consciousness, or is something essential lost when the brain's dynamics are bypassed?
Does brain activity cause consciousness? A thought experiment
The authors of this Essay examine whether action potentials cause consciousness in a three-step thought experiment that assumes technology is advanced enough to fully manipulate our brains.