Sam
@perceptions420.bsky.social
Generating Entropy
The scientist I presented above works in the field. His work has been crucial to making novel predictions about the human brain based on these models. His position:

Uncertainty, and an acceptance thereof, is the epistemically honest position, not certainty.
October 5, 2025 at 11:20 AM
This isn't to say that I don't believe it might be disastrous to replace human cognition, even if it leads to higher efficiency (I can't predict the future).

But my point is that the foundations of this field have always implied that this would be the end result.
From Russell and Norvig.
February 15, 2025 at 5:57 PM
Lmfao. What an analogy.
December 19, 2024 at 10:12 PM
It's actually really ridiculous, because ML researchers think they have a solid understanding of neuro/cognitive science, while neuroscientists consistently say, "Guys, these models are far more similar to how our minds work than you know!"
x.com/niko_kukushk...
December 1, 2024 at 6:09 PM
My focus is currently more on the computational and representational aspects of ANNs, and on how these representations hold across human and ANN neuronal structures. As Professor Sinz told me after someone tried to gaslight me about the findings of his research:
November 27, 2024 at 11:14 PM
Yes lol.
November 27, 2024 at 10:25 PM
Skimmed the paper, and am I missing something? It seems to coincide with what Dare's saying. While I get that the point of the paper was to evaluate whether the use of an LLM could enhance a doctor's diagnostic reasoning abilities, it doesn't seem to me that Dare inferred wrongly here.
November 20, 2024 at 12:02 PM
November 17, 2024 at 10:21 PM