AI Notes
@ai-notes.bsky.social
The value of a person in no way depends on their intelligence.
Reposted by AI Notes
So, first version of an ML anon starter pack: go.bsky.app/VgWL5L. Kept half-anons (like me and Vic). Not all anime pfps, but generally drawn.
November 24, 2024 at 4:55 PM
Reposted by AI Notes
The OpenAI emails are interesting in that they make clear that the goal was to build an AGI and then have 1-5 people control it: www.lesswrong.com/posts/5jjk4C...

That seems...wrong.
OpenAI Email Archives (from Musk v. Altman) — LessWrong
As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockma…
www.lesswrong.com
November 16, 2024 at 8:02 PM
Future AI capabilities are already here—they're just not very evenly distributed.
November 4, 2024 at 12:41 PM
Such a good paper! And at the end there's a great summary of counterarguments and counter-counterarguments.
Happy 30,000 downloads to this little one -- about 18 months on the lingbuzz top downloads, a handful of rebuttal papers on what an absolute moron I am.
lingbuzz.net/lingbuzz/007...
A lot has happened since, but I know where I'd put my money in predicting which approach will figure out language.
Modern language models refute Chomsky’s approach to language - lingbuzz/007180
Modern machine learning has subverted and bypassed the theoretical framework of Chomsky’s generative approach to linguistics, including its core claims to particular insights, principles, structures, ...
lingbuzz.net
October 20, 2024 at 4:23 PM
Reposted by AI Notes
OTD in 1881, Charles Darwin published his last book, on earthworms.

It reflected a long interest in animal minds: “One alternative alone is left, namely, that worms, although standing low in the scale of organization, possess some degree of intelligence.”

🧪 🦋🦫 #HistSTM #philsci #pschsky #cogsci
October 10, 2024 at 2:16 PM
This article from 2014 has nothing to do with neural nets, but if you replaced "SAT-solving algorithms" and "Complexity theorists" with "LLMs" and "linguists" it would read as entirely current. Maybe tech discourse always follows the same grooves. cacm.acm.org/opinion/bool...
Boolean Satisfiability – Communications of the ACM
cacm.acm.org
October 7, 2024 at 10:10 PM
Reposted by AI Notes
Impressive attempt to evaluate how well LLMs can summarize novels. They use 26 recent books (so summaries won't be in the training set), extract 3158 claims from the LLM summaries, and have humans evaluate the claims' accuracy.
FABLES: Evaluating faithfulness and content selection in book-length summarization
While long-context large language models (LLMs) can technically summarize book-length documents (>100K tokens), the length and complexity of the documents have so far prohibited evaluations of input-d...
arxiv.org
October 7, 2024 at 1:28 AM
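For a sense of how such claim-level judgments become a score, here's a minimal sketch of the arithmetic in Python (my own illustration; the book names, labels, and simple averaging are assumptions, not necessarily the paper's exact protocol):

labels = {                                # True = human judged the claim faithful
    "book_1": [True, True, False, True],  # hypothetical labels for one summary
    "book_2": [True, False, True],
}
per_book = {b: sum(v) / len(v) for b, v in labels.items()}
overall = sum(per_book.values()) / len(per_book)
print(per_book)                                # {'book_1': 0.75, 'book_2': 0.666...}
print(f"overall faithfulness: {overall:.0%}")  # 71%

Each summary's faithfulness is the fraction of its claims the annotators could support from the book, averaged across books.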
Reposted by AI Notes
Predictive coding has been one of the rare theories in neuroscience with bold, testable predictions at the circuit level, and it's been under scrutiny for years.

It’s exciting to see recent experiments pushing it to its limits, hopefully leading to new directions.

🧠📈
This paper may be very important:

www.biorxiv.org/content/10.1...

tl;dr: if you repeatedly give an animal a stimulus sequence XXXY, then throw in the occasional XXXX, there are large responses to the Y in XXXY, but not to the final X in XXXX, even though that's statistically "unexpected".

🧠📈 🧪
Stimulus history, not expectation, drives sensory prediction errors in mammalian cortex
Predictive coding (PC) is a popular framework to explain cortical responses. PC states that the brain computes internal models of expected events and responds robustly to unexpected stimuli with predi...
www.biorxiv.org
October 5, 2024 at 1:41 AM
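To make the contrast in the post above concrete, here's a toy Python simulation of the two accounts (my own sketch, not from the paper; the 95/5 trial split, the Laplace smoothing, and the 0.5 adaptation decay are all assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
trials = ["XXXY"] * 95 + ["XXXX"] * 5   # frequent XXXY trials, rare XXXX probes
rng.shuffle(trials)

counts = {"X": 1.0, "Y": 1.0}           # smoothed counts for the 4th token
resp_exp = {"XXXY": [], "XXXX": []}     # expectation account: surprise
resp_adapt = {"XXXY": [], "XXXX": []}   # adaptation account: repetition decay

def n_repeats(seq):
    # count immediate repetitions of the final token just before it
    k = 0
    for prev in reversed(seq[:3]):
        if prev != seq[3]:
            break
        k += 1
    return k

for seq in trials:
    p = counts[seq[3]] / sum(counts.values())
    resp_exp[seq].append(-np.log(p))               # responds to improbable tokens
    counts[seq[3]] += 1.0
    resp_adapt[seq].append(0.5 ** n_repeats(seq))  # responds to unadapted tokens

for name, resp in (("expectation", resp_exp), ("adaptation", resp_adapt)):
    print(f"{name:11s} XXXY: {np.mean(resp['XXXY']):.2f}  "
          f"XXXX: {np.mean(resp['XXXX']):.2f}")

The expectation account predicts the larger response to the rare final X in XXXX; the adaptation account predicts the larger response to the Y in XXXY, which is the pattern the paper reports.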
Trying the new ChatGPT "canvas" feature. It worked really well for a while, but then started to deteriorate in an interesting way: it kept inserting redundant code, and many of its edits "failed" due to "pattern matching." It's definitely a promising start, though.
October 4, 2024 at 1:20 AM
Reposted by AI Notes
New study confirms:
Thinking hard feels unpleasant

The unpleasantness of thinking: A meta-analytic review of the association between mental effort and negative affect. 🏺🧪
psycnet.apa.org/record/2025-...
October 2, 2024 at 4:31 PM
Reposted by AI Notes
Not all visual features are treated equally in brains or ANNs; some are favored by more neurons.

What are the behavioural and learning consequences of these biased representations?

I discuss this question in a new blog post: tinyurl.com/32ys9k8d

(1/4)

#neuroscience 🧠🤖 #VisionScience
snailab.notion.site
October 2, 2024 at 9:07 AM
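One way to see why unequal allocation could matter behaviourally is a toy readout simulation in Python (my own sketch, not from the blog post; the unit counts and noise level are arbitrary): give one feature many noisy units and another feature few, then decode each from the population average.

import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(n_units, n_trials=5000, noise=3.0):
    # Each unit carries the feature's value (+1 or -1) plus independent noise;
    # the decoder takes the sign of the population mean on every trial.
    s = rng.choice([-1.0, 1.0], size=n_trials)
    r = s[:, None] + noise * rng.standard_normal((n_trials, n_units))
    return np.mean(np.sign(r.mean(axis=1)) == s)

print("favoured feature (90 units):", decode_accuracy(90))   # ~0.999
print("neglected feature (10 units):", decode_accuracy(10))  # ~0.85

All else equal, the feature with more units supports a far more reliable readout, which is exactly the kind of behavioural consequence the post asks about.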
Reposted by AI Notes
Mental programming of spatial sequences in working memory in the macaque frontal cortex
doi.org/10.1126/scie...
#neuroscience
Mental programming of spatial sequences in working memory in the macaque frontal cortex
How the brain mentally sorts a series of items in a specific order within working memory (WM) remains largely unknown. We investigated mental sorting using high-throughput electrophysiological recordi...
doi.org
September 30, 2024 at 12:41 AM
The exhausting thing about AI discourse is that so many technical and philosophical debates turn out to be proxy battles between groups of people who just plain dislike each other.
September 28, 2024 at 1:10 PM
Fascinating paper about the effect of scale in animal brains! Could our "uniquely human" ways of thinking simply reflect an increase in information capacity?
New perspective in @natrevpsych.bsky.social: human intelligence is a matter of scale of information processing, not genetic changes to one domain. Implications for AI, evolution, and development. - with @cantlonlab.bsky.social
rdcu.be/dDoBt
September 28, 2024 at 1:03 AM
Sam Altman now says it will take more than a decade ("a few thousand days") to achieve superintelligence. The way to read this, given his audience, is that he's desperately trying to lower expectations.
September 26, 2024 at 12:04 AM
Reposted by AI Notes
How well can we understand an LLM by interpreting its representations? What can we learn by comparing brain and model representations? Our new paper highlights intriguing biases in learned feature representations that make interpreting them more challenging! 1/
May 23, 2024 at 6:58 PM
How can neural nets represent ordered lists? This extremely interesting paper describes a geometrical technique that seems to apply to both real and artificial systems!
September 23, 2024 at 7:49 PM
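I don't know the paper's exact construction, but here's a minimal Python sketch of one geometric scheme in that spirit: bind each item to its own near-orthogonal rank axis, superpose the bindings into a single population vector, and read any rank back out by projection. The dimension and the outer-product binding are my assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
d = 128
items = {name: rng.standard_normal(d) / np.sqrt(d) for name in "ABCDE"}
ranks = [rng.standard_normal(d) / np.sqrt(d) for _ in range(3)]  # axes for ranks 1-3

def encode(seq):
    # One population vector: each item bound to its rank via an outer product.
    return sum(np.outer(items[s], ranks[i]).ravel() for i, s in enumerate(seq))

def decode(state, i):
    # Project back through rank i's axis, then match against the item set.
    est = state.reshape(d, d) @ ranks[i]
    return max(items, key=lambda name: float(items[name] @ est))

state = encode("CAB")
print([decode(state, i) for i in range(3)])   # ['C', 'A', 'B']

The appeal is geometric: each rank occupies its own subspace, so the same item code can be reused at any position in the list with little interference.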
I am going to tear my hair out if I see another take about how "AI researchers think X" where X is marketing copy from a corporate press release.
September 22, 2024 at 8:36 PM
Revising my opinion on this. I was like a chef who felt disoriented hearing emergency-room doctors talk about knives.
It is so disorienting to hear people complaining about LLM "slop," even as I watch ChatGPT produce beautiful clean code and lucidly explain complicated math. Is there more "slop" in some domains? Is it a skill issue? Is the slop we see produced largely by lazy or malicious people?
September 20, 2024 at 10:35 AM
It is so disorienting to hear people complaining about LLM "slop," even as I watch ChatGPT produce beautiful clean code and lucidly explain complicated math. Is there more "slop" in some domains? Is it a skill issue? Is the slop we see produced largely by lazy or malicious people?
September 19, 2024 at 8:55 PM
I once found a quick proof of a certain obscure theorem. It's not really worth publishing, since other proofs are known, but my proof appears nowhere on the internet. I had a dialog with OpenAI's o1-preview about the theorem, and it actually suggested the idea of the "secret" proof to me! Color me impressed.
September 18, 2024 at 11:47 PM
The amount of time o1-preview takes to answer a question feels like a score in a video game, or a measure of how deep the question is. Yes, this is irrational.

But my high score is 53 seconds.
September 17, 2024 at 11:23 AM
Who has the clearest view of AI, and why is it Terence Tao?
www.youtube.com/watch?v=_sTD...
The Potential for AI in Science and Mathematics - Terence Tao
YouTube video by Oxford Mathematics
www.youtube.com
September 16, 2024 at 6:58 PM
Excellent short essay on the harder-than-it-seems world of linear algebra: www.argmin.net/p/linear-doe...
Linear Doesn't Mean Easy
Applied linear algebra is much harder than advertised.
www.argmin.net
September 14, 2024 at 7:50 PM
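A tiny illustration of the essay's theme (my own example, not the author's): even plain least squares is numerically treacherous, because the textbook normal-equations route squares the condition number that an SVD- or QR-based solver works with.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
A = np.vander(t, 10)                  # polynomial basis: mildly ill-conditioned
x_true = rng.standard_normal(10)
b = A @ x_true

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based solver

print(f"cond(A)      = {np.linalg.cond(A):.1e}")
print(f"cond(A^T A)  = {np.linalg.cond(A.T @ A):.1e}")
print(f"normal error = {np.linalg.norm(x_normal - x_true):.1e}")
print(f"lstsq error  = {np.linalg.norm(x_lstsq - x_true):.1e}")

The two recovered solutions should differ in accuracy by several orders of magnitude, reflecting the squared conditioning of A^T A.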
It is still remarkably easy to fool the new OpenAI models by asking them to prove false theorems! Maybe an RLHF issue, but I think there might be a deeper problem with how these systems check their own answers.
September 14, 2024 at 10:29 AM