Richard Lange
@bonsai-lab.bsky.social
Assistant Professor in CS/CogSci at RIT.
All I can think of is the Calvin and Hobbes strip: "Verbing weirds language"
July 25, 2025 at 7:31 PM
We can falsify hypotheses about particular variables being represented by particular neurons in particular formats. Marginalizing a theory's predictions over all possible formats/implementations sounds, uh... like a good problem for "future work" 😬
July 11, 2025 at 2:33 PM
I'm just trying to come at this from the perspective that mechanisms are hierarchical. I'd explain the behavior of some software on my computer in terms of data structures and algorithms, not in terms of bits and transistors. So head-direction mechanisms don't need to jump straight to synapses
July 10, 2025 at 11:30 AM
TBH so am I since I'm new to the Bechtel/Craver perspective, but let's try out your framework:
- entities = points on the manifold
- activities = movement between points
- organization = a ring

@dlbarack.bsky.social what does the hopfieldian view say? Can manifolds be mechanisms?
July 10, 2025 at 11:21 AM
Rather than jump from behavior to synapses, let's introduce some intermediate level of abstraction. How would you feel about the claim that "ring manifolds" are a mechanistic explanation of head direction, but it's unclear which sub-mechanism (RNN or attractor) underlies the ring?
July 9, 2025 at 3:38 PM
I want to appreciate this diagram more fully, but there's just so much going on! My working memory just isn't up for the task
June 22, 2025 at 12:34 AM
I suppose "creating" might be too strong a word, but I also have no problem with neuroscientists describing the process of "making information more accessible" as a kind of "increase in information" through processing.
June 7, 2025 at 9:37 PM
So, the brain cannot create [Shannon] information but it can create [usable] information about the world. Still agree?

If you replaced "Shannon" with "usable" in neuro papers, would that resolve your grievances with the term "information"?
June 7, 2025 at 6:11 PM
For some definitions of information, sure. But processing is more than just removing irrelevant bits. Reformatting can make information usable or accessible, which increases the "usable information." This is a relevant alternative notion of info for neuro, no?
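To make that concrete, here's a toy Python sketch (my own construction; "usable" is operationalized here as "linearly decodable," which is one common choice): an XOR-style code carries full Shannon information about the label, but a linear readout can't use it until a nonlinear reformatting step exposes it.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy example: the label is a deterministic function of the two features,
# so Shannon information is maximal, yet it is not linearly decodable.
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(1000, 2))      # two binary features
y = (x[:, 0] * x[:, 1] > 0).astype(int)          # XOR-style label

raw_acc = LogisticRegression().fit(x, y).score(x, y)  # ~0.5: info present but unusable
z = x[:, :1] * x[:, 1:]                               # reformat: product of features
new_acc = LogisticRegression().fit(z, y).score(z, y)  # ~1.0: same info, now usable
print(f"linear readout: raw {raw_acc:.2f}, reformatted {new_acc:.2f}")

No bits were added going from x to z; the transform only changed the format.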
June 6, 2025 at 9:42 PM
Character limits are hard. Let's try anyway!

CNN = dense, one feature vec per location
R-CNN = one id per "region" where regions might be overlapping
YOLO = pre-set regions (coarse grid) with possibly 1-2 IDs per region

All do some winner-take-all (WTA) post-processing, but in CV it's called non-maximum suppression
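For anyone curious what that post-processing looks like, here's a minimal numpy sketch of greedy non-maximum suppression (a toy version; real pipelines use optimized ops like torchvision.ops.nms):

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,)
    # Greedy winner-take-all: keep the top-scoring box, suppress overlaps, repeat.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection-over-union of box i with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep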
May 13, 2025 at 10:09 PM
Part of the original vision (ahem) for convnets was that you get a vector representation for each location in the image and can, in theory, read out "dense" information from these feature maps. It's surprisingly hard to find SOTA architectures that do this; variations on YOLO and R-CNN are popular instead
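A minimal PyTorch sketch of that idea (the backbone and shapes are illustrative, not a SOTA recipe): a conv backbone yields one feature vector per spatial location, and a 1x1 conv acts as a per-location linear readout.

import torch
import torchvision

# Drop the average-pool and fc head so the backbone returns a spatial feature map
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet18(weights=None).children())[:-2]
)
readout = torch.nn.Conv2d(512, 10, kernel_size=1)  # per-location linear classifier

img = torch.randn(1, 3, 224, 224)
feats = backbone(img)   # (1, 512, 7, 7): a 512-d feature vector per location
dense = readout(feats)  # (1, 10, 7, 7): a prediction at every location
print(feats.shape, dense.shape)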
May 13, 2025 at 11:58 AM
Just a few weeks ago I got some new theory results for ridge regression on neural-like data (with a power-law covariance spectrum)! Paper with background linked below. It depends not on the *lowest* singular value but on the full SV spectrum and the amount of ridge regularization.
projecteuclid.org
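A toy numpy illustration of the intuition (my own sketch, not the paper's analysis): ridge shrinks each singular direction by s_i^2 / (s_i^2 + λ), so with a power-law spectrum every mode contributes, and no single smallest singular value dominates.

import numpy as np

n, alpha = 100, 1.0
s = np.arange(1, n + 1) ** (-alpha)   # power-law singular-value spectrum
for lam in [1e-4, 1e-2, 1e0]:
    shrink = s**2 / (s**2 + lam)      # per-mode ridge shrinkage factor
    print(f"lambda={lam:.0e}: fraction of modes mostly kept (>0.5): "
          f"{(shrink > 0.5).mean():.2f}")

Sweeping λ shifts which part of the spectrum matters, which is why the full spectrum (not just its minimum) enters the theory.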
February 27, 2025 at 5:53 PM
I'll be happy to try out some different tools. Thanks!
January 3, 2025 at 9:22 PM
It's just such a contrast to the benchmark one-upmanship you see in ML. Both cultures have flaws, but it's hard to justify new & bespoke animal experiments when a model hasn't already been tested on extant data. Curious if others see it differently. Glad that neuro culture is moving towards benchmarks.
January 3, 2025 at 3:14 PM
I often wish I had apps for dictating notes and listening to articles while driving. (Maybe these exist? What do people use?)
December 7, 2024 at 12:28 PM