burkleypatterson.bsky.social
What you’re describing is called an interpolative database or a stochastic parrot. People suspected that was all they were doing early on. Since then it’s become very clear that’s not the case. Plausible words aren’t enough to solve novel problems, but I see them find the right answers every day
April 26, 2025 at 12:52 AM
It’s also worth pointing out that the environmental impact of *using* them is overblown. All the scary early estimates amortized the initial cost of training. But most models can run on a decent laptop. A person usually burns much more metabolic energy solving a problem than an LLM does
April 24, 2025 at 10:35 AM
My take is that the future is here, and it’s shockingly well distributed. Now nearly everyone has access to an expert in (almost) any topic. With appropriate caution, you can get legal advice, medical advice, or deep dive into any research topic without paywalls or $200 textbooks
April 24, 2025 at 10:31 AM
Experts still have to frequently consult their textbooks/notes/reference material. ChatGPT often saves me hours per day doing just that. I can ask it fuzzy questions peppered with context, and it will respond with the combination of physical principles I need. Then I can verify it quickly
April 24, 2025 at 10:24 AM
What you’re describing sounds true of the earliest generations of LLMs. But I’d encourage you to try using a new one. The accuracy, problem-solving skills, and both breadth and depth of knowledge are pretty extraordinary.

I work in photonics engineering, and get enormous value from ChatGPT daily
April 24, 2025 at 10:19 AM