AI Notes
@ai-notes.bsky.social
The value of a person in no way depends on their intelligence.
I think for most tasks, the bottleneck is reliability, not capability. So even though capability is definitely increasing on some dimensions (for whatever reason, scaling or otherwise, I don't know), most people just don't notice. Very, very few people need the math abilities of o1-preview.
November 16, 2024 at 11:53 PM
To put it another way: some folks in the NLP community would be horrified if they knew what people actually use search engines for!
November 4, 2024 at 12:50 PM
It's a funny analogy, but I think the situation might be subtler than this. People use search engines for all sorts of things, not just information retrieval. For some of these other tasks, isn't it conceivable that AI would be more fit for purpose?
November 4, 2024 at 12:48 PM
People in science and technology are seeing something very different from people in the humanities, but I think that's a temporary phase.
November 4, 2024 at 12:46 PM
Isn't this just a matter of different subdisciplines using the word "model" in different ways? I feel like I'm watching a mathematician complain that fields aren't just a bunch of grass; they have to be commutative.
October 29, 2024 at 1:10 AM
Real-world usage spans a very broad set of tasks. Look at the data yourself if you don't believe me, e.g.:
www.nber.org/papers/w32966
And true generality is definitely an engineering goal—it's the famous G in "AGI." All frontier model companies are public and explicit about this.
October 27, 2024 at 12:31 AM
I don't know of any technology adopted as fast as ChatGPT. Examples that are close (personal computers, the internet) indeed became pervasive and foundational. E.g., see www.stlouisfed.org/on-the-econo...
October 26, 2024 at 7:55 PM
I've met a lot of people who are 100% certain that AI will flop. That's probably who this kind of language is aimed at. I completely agree it would be better if they hedged and said, "There's a decent chance AI will be pervasive, and we want you to help decide how we use it."
October 26, 2024 at 7:49 PM
LLM-based chatbots are built for general use and in practice are used for a wide variety of things. I'm genuinely curious: what leads you to see them as application-specific artifacts? Or is this more of a normative statement, that you wish they'd be built and used in a more targeted way?
October 26, 2024 at 7:14 PM
I think it sets a baseline, but not a ceiling. And LLMs have blown way past my baseline expectations for what I guessed next-token prediction would produce. Isn't it at least a reasonable hypothesis that they may be learning something deep as a byproduct of a superficial training task?
October 26, 2024 at 11:53 AM
LLMs are a technique, not a tool: they're not "meant" for anything. (Is the fast Fourier transform "meant" for audio engineering or detecting nuclear tests? Why not both?) And at this point, the best LLM-based systems are far better than the average person at math. Surely that's worth exploring?
October 26, 2024 at 11:34 AM
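A minimal sketch of the "technique, not a tool" point in the post above: the very same FFT call recovers a dominant frequency whether the samples come from an audio tone or a seismogram. The function name and synthetic signal here are illustrative, not from any particular application.

    import numpy as np

    def dominant_frequency(samples: np.ndarray, sample_rate: float) -> float:
        """Return the strongest frequency component in a real-valued signal."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

    t = np.arange(0, 1.0, 1.0 / 8000.0)           # one second at 8 kHz
    audio_like = np.sin(2 * np.pi * 440.0 * t)    # a 440 Hz "tone"
    print(dominant_frequency(audio_like, 8000.0)) # ~440.0; a seismic trace
                                                  # would go through the same code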
Oh, I see what you're saying! That is interesting, and I don't know of any studies.
October 19, 2024 at 11:01 AM
The belief was that this made it easier to learn to translate the first word, which then made it easier to learn to translate the second, etc. I don't know if they ran careful experiments to show this was the mechanism.
October 19, 2024 at 12:55 AM
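The post above appears to describe the source-reversal trick from early sequence-to-sequence translation (Sutskever et al., 2014); assuming that is the mechanism in question, the whole trick fits in a few lines. The tokens are illustrative.

    # Reversing the source puts its first word adjacent to the first target
    # word the decoder must emit, shortening that initial dependency.
    def reverse_source(tokens: list[str]) -> list[str]:
        return list(reversed(tokens))

    print(reverse_source(["le", "chat", "noir"]))  # ['noir', 'chat', 'le']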
I think there might be more to the story. One of the biggest AI believers I know (1) is a socially adept extrovert and (2) was incredibly skeptical, right up until LLMs became good enough to help him write a certain type of specialized code much faster.
October 17, 2024 at 6:02 PM
I believe you. There seem to be dramatic differences between subdisciplines. In your work it's useless, but in chemistry, it just won a Nobel. As we figure out what universities should do, I find it helpful to take into account how different our various experiences are.
October 17, 2024 at 5:56 PM
I think her analysis of the structural pressures on universities is excellent! But what I'm seeing on the ground is a mix of those pressures with "endogenous" aspects of the technology itself: its enormous utility for certain kinds of work, and its rapid improvement. Those are critical factors, too.
October 17, 2024 at 1:35 PM
Excellent mini-talk! One missing variable is that many profs (in physics, chemistry, CS) are now finding AI extremely useful for their own work. That makes it harder to see as a "cheating device." This seems like a huge factor in the "pivot," and one that may not be equally visible in all disciplines.
October 17, 2024 at 1:29 PM
So is it fair to say your level of belief (or disbelief) would be the same if they'd used the p < 0.05 standard?
October 14, 2024 at 7:37 PM
I suppose the converse question is interesting too: what grand-but-incorrect discoveries would we have made without an understanding of null hypothesis testing?
October 14, 2024 at 2:56 PM
Great essay! You ask, "What are the grand discoveries that we wouldn’t have made without an understanding of null hypothesis testing?" Would the discovery of the Higgs boson count? As I understand it, the transition from "cool theory" to "Nobel prize" hinged on a p-value.
October 14, 2024 at 2:55 PM
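A quick back-of-the-envelope on the p-value the post above alludes to: particle physics' conventional five-sigma discovery threshold, expressed as a one-sided tail probability (assumes scipy is available).

    from scipy.stats import norm

    p_five_sigma = norm.sf(5.0)        # one-sided upper tail beyond 5 sigma
    print(f"p = {p_five_sigma:.2e}")   # p = 2.87e-07, vs. the usual 0.05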
Yep! The argument in your paper makes sense. It was just the nonstandard use of "structural stability" that threw me. (In standard usage, e.g., the identity map on a manifold is *not* structurally stable.) Anyway, it's a great article, whatever terminology you use!
October 13, 2024 at 8:04 PM
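For readers unfamiliar with the standard usage the post above invokes, here is a minimal statement of it (my paraphrase of the textbook definition, not the article's own wording):

    % A diffeomorphism $f \in \mathrm{Diff}^1(M)$ is structurally stable if it
    % has a $C^1$-neighborhood $\mathcal{U}$ such that every $g \in \mathcal{U}$
    % is topologically conjugate to $f$, i.e., $g = h \circ f \circ h^{-1}$ for
    % some homeomorphism $h$ of $M$. The identity map fails this: it fixes every
    % point of $M$, while an arbitrarily small $C^1$-perturbation can have only
    % isolated fixed points, so no conjugacy to $\mathrm{id}_M$ exists.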