Hessam Akhlaghpour
@thehessam.bsky.social
Check out my blog: https://lifeiscomputation.com and my personal website: https://www.akhlaghpour.info
To add to all the other relevant papers mentioned in the comments: a paper by Gaby Maimon, my current advisor, from his grad school years

www.cell.com/neuron/fullt...
Beyond Poisson: Increased Spike-Time Regularity across Primate Parietal Cortex
January 16, 2025 at 7:01 PM
As far as I’m aware it’s still an unproven conjecture
January 9, 2025 at 5:31 AM
That’s super exciting!! Congrats!!
January 8, 2025 at 8:04 PM
unbounded ≠ infinite
December 8, 2024 at 2:49 AM
Exactly. Same with TMs and memory. At any given moment a TM uses a finite amount of memory. Any computation requires a finite amount of resources to complete. So where is the difference in terms of being realistic?
December 8, 2024 at 2:45 AM
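[A minimal Python sketch of this point (an illustration, not code from the thread): a Turing-machine tape that allocates cells only when the head touches them. At every moment the tape in use is finite, yet no bound is fixed in advance.]

class Tape:
    """A Turing-machine tape that grows only on demand."""
    def __init__(self, blank="_"):
        self.cells = {}    # position -> symbol; only visited cells exist
        self.blank = blank

    def read(self, pos):
        return self.cells.get(pos, self.blank)

    def write(self, pos, symbol):
        self.cells[pos] = symbol  # memory is added during the run, as needed

tape = Tape()
for i in range(5):        # a run that touches 5 cells uses exactly 5 cells
    tape.write(i, "1")
print(len(tape.cells))    # 5 -- finite at every moment, unbounded across runs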
Would you say the same about unbounded time and energy? Because RNNs and finite state machines are unbounded in time and energy. Are they unrealistic idealizations too?
December 8, 2024 at 2:29 AM
Had you said memory expansion in RNNs is expansion in precision, that would be a totally valid response. Unbounded precision is fine, just like unbounded tape length is fine. But the problem with those models is something else - not the unboundedness of memory. Does that make sense? 2/2
December 8, 2024 at 2:11 AM
The only way I can understand “RNN with unbounded memory” is as “RNN with unbounded precision”. I cannot see how an RNN can use an unbounded number of neurons, because the number of neurons is intrinsically bounded as part of the system’s dynamics (just like an FSA is intrinsically memory-bounded). 1/2
December 8, 2024 at 2:11 AM
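[A toy sketch of the 1/2 point (my example; the sizes are arbitrary): an RNN's state is a vector whose length is fixed by the architecture, so feeding it more input never adds neurons; only the precision of the numbers stored in the state could grow.]

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs = 8, 3                   # fixed by the architecture
W = rng.normal(size=(n_neurons, n_neurons))  # recurrent weights
U = rng.normal(size=(n_neurons, n_inputs))   # input weights

def step(h, x):
    return np.tanh(W @ h + U @ x)

h = np.zeros(n_neurons)
for x in rng.normal(size=(10_000, n_inputs)):  # 10,000 inputs or a billion:
    h = step(h, x)
print(h.shape)  # (8,) -- the state never outgrows the architecture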
There is this paper that discusses one part of your question. doi.org/10.1016/j.nl...

But I really don’t know. 100 years ago, when people were proposing that genetic information is stored as atomic arrangements in molecules, they had no idea about readout (polymerases, ribosomes, etc.).
December 8, 2024 at 12:12 AM
Yes, that is still the subject. It sounds like you are claiming that RNNs are universal because you can build an RNN that simulates the physical memory-bounded digital computer (say the one on my desk). You can use the same exact argument to say finite state automata are universal, which is wrong.
December 8, 2024 at 12:08 AM
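[To make the analogy concrete (a hypothetical sketch; the toy update rule stands in for a real computer's step function): a machine with N bits of memory is exactly a finite state automaton over 2**N states, so "it can simulate my desktop" would prove FSAs universal too.]

from itertools import product

N = 3                                     # bits of memory; a desktop just has a huge N
states = list(product([0, 1], repeat=N))  # all 2**N memory configurations
print(len(states))                        # 8 -- a finite state set

def transition(state, bit):
    # Toy deterministic update; any bounded-memory computer defines some such table.
    return tuple(b ^ bit for b in state)

s = (0, 0, 0)
for bit in [1, 0, 1]:
    s = transition(s, bit)                # an FSA walk, nothing more
print(s)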
(Relatedly, Dijkstra once said "Computer science is no more about computers than astronomy is about telescopes.")

RNNs are supposed to be an abstract system of computation. Each RNN computes a certain input-output function. The comparison should be to other systems of computation. 2/2
December 7, 2024 at 10:38 PM
I think the last statement is misleading. It depends on how you define "digital computer". The way you are treating it here is as a device that runs programs, not as an abstract system for computation. In that case it is meaningless to assign a scope of solvable functions to that device.
1/3
December 7, 2024 at 10:38 PM
the state of an RNN cannot be described independently of the number of neurons. In fact, if you assume bounded precision, you can describe the RNN's progression in time as a string of fixed (unchanging) length. This is not the case for, say, combinatory logic or any other universal system. 3/3
December 7, 2024 at 10:10 PM
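[A sketch of the fixed-length-string claim (my example, assuming bounded precision): a bounded-precision RNN's full state serializes to a string whose length never changes, while the state description of a program, say a stack machine, grows as it runs.]

def rnn_state_string(h, digits=4):
    # Bounded precision => a fixed-length description, however long the run.
    return ",".join(f"{v:+.{digits}f}" for v in h)

def stack_state_string(stack):
    # A program's state description grows with the memory it has claimed.
    return ",".join(map(str, stack))

h = [0.1234, -0.5678]          # two "neurons" at four-digit precision
stack = []
for t in range(5):
    stack.append(t)            # memory added *during* the computation
    print(len(rnn_state_string(h)), len(stack_state_string(stack)))
# The first number stays constant; the second keeps growing.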
The description of a program (or the description of its state at any time during its progression) can be specified as an arbitrary-length string. One would need to add memory *during* computation (as part of the rules of the system) in order to execute it properly.

In contrast, 2/3
December 7, 2024 at 10:10 PM
In the same sense that you can "expand" the memory of a finite state automaton by adding more states?

There is a fundamental difference between that and memory expansion in computer programs. The difference is that a program's state can be described independently of the available memory. 1/3
December 7, 2024 at 10:10 PM
How are you imagining memory expansion in RNNs? Through increasing precision (like the 1990s models that lack structural stability)? Or by increasing the number of neurons? If you can clarify this, maybe I’ll be able to convey my point better.
December 7, 2024 at 8:47 PM
To elaborate with an example: If Moore's 1998 conjecture is wrong and physically realizable RNNs are universal, then RNNs would be a useful system for organisms to have and tweak/learn/evolve upon. That doesn't mean a specific universal RNN (one capable of running any TM) would itself be advantageous. 3/3
December 3, 2024 at 4:33 AM
What kinds of problems are within the reach of biological organisms? What kinds of computational problems are solvable through evolution, or learning, or some other biological process? Having a language/system with the scope of *all solvable problems* would be adaptive/advantageous. 2/3
December 3, 2024 at 4:33 AM