hcarver.bsky.social
Yep, Richard Feynman said this, and he's at least somewhat influential in parts of the manosphere. E.g. m.youtube.com/watch?v=IaWf...

I'm sorry to report that reality is beyond parody.
#richardfeynman on brushing teeth 🪥 🤯 #nobelprize
January 12, 2025 at 4:45 PM
100% agree about wasted human potential. Automating all forms of toil and letting humans do the interesting bits is a vision I can get behind.

And my views on the state of ~most learning (and what I'm trying to do about it) are well-documented so I'll spare you a repetition of that rant.
January 12, 2025 at 12:07 PM
Is that 80% stat from research someone's actually done? That'd be awesome (and it seems like an eminently doable study for someone to run)
January 11, 2025 at 4:09 PM
Honestly, I was trying to do the opposite. I wanted to be impressed! It got the logic right, but translated it into maths (integers up to 10) incorrectly.

And I thought the ARC benchmarking involved pre-training on related examples, which seems like... cheating?
January 11, 2025 at 4:04 PM
Solving problems of increased complexity with arbitrary values (which you couldn't learn by searching) seems like a big hurdle yet to be overcome.
January 10, 2025 at 5:34 PM
Again, I really appreciate the replies. I tried O4 myself - asked it two (simple) questions, and it got them both wrong.

I think the thing that is hard is the interface between explicit / rule-based reasoning (like doing novel maths) and implicit reasoning (language).
January 10, 2025 at 5:33 PM
Super interesting, thanks for the reply. What do you mean when you say "We've discovered the optimisation function"? And what makes you conclude that it's skewing towards AGI?
January 9, 2025 at 10:49 PM
Hard agree. Similarly, I find myself talking about it to try to push back against the hype.
November 29, 2024 at 11:23 PM
I definitely have a horse in this race (skillerwhale.com)

If by struggle, we mean "understand, try, fail, improve" then that's an excellent way to learn skills. If we mean "try, fail, try, fail, bang head on wall, admit defeat, get peer to do it", then the learning is "I can't" rather than "I can".
November 27, 2024 at 10:34 AM
Is it? The dialogue aspect seems weaker for coding, and seems mainly useful for giving information in the flow of work, rather than providing a skill.
November 26, 2024 at 5:41 PM
It's important to distinguish knowledge, process knowledge, and skills. Google replaces knowledge, IntelliSense implements process knowledge, but coding assistants are trying to emulate a skill. That's a big difference that goes beyond automating boilerplate.
November 26, 2024 at 12:09 PM
What makes you think that? And is it just a problem that's solved by better UX?
November 22, 2024 at 9:59 AM
Or that previous iterations of search are dead. I can imagine a thesis that "the whole web is better with liberal AI sprinkles".

Agree about it being essential infrastructure - and the Chromium project exists! - but I think paid-for solutions that build on it can win the market (cf. Ubuntu, maybe?)
November 21, 2024 at 12:59 PM
I can imagine OpenAI putting in a bid
November 21, 2024 at 11:33 AM
All GenAI? I'd agree with the Transformer model being hard to extend...
November 19, 2024 at 5:24 PM
Thanks for sharing this link. I definitely agree that we're at least one breakthrough away from genuinely world-changing generative AI.
November 19, 2024 at 4:47 PM
My answer is the same though - how do you get good enough to supervise a code-writing AI if you never write code? Or a contract-writing AI if you never write contracts? Or a car-driving AI if you never drive a car?
November 19, 2024 at 12:32 PM
If juniors using AI tools can go straight to "applying" because the tools are so good, how do we make sure the juniors don't skip the "understanding phase" themselves?

How do you get good enough to be a supervisor if you never get to do the thing yourself?
November 18, 2024 at 5:48 PM
Until someone develops a workable form of generative AI that can "understand" the "world", we need people to supervise the stuff they produce.

But that creates a new problem.
November 18, 2024 at 5:43 PM
It's hard to test for understanding. But we can see its absence when an AI generates something that is obviously, to us, daft.

The third hand with 6 fingers in the generated picture, the assertion about a legal precedent that doesn't exist.

Humans only make those mistakes intentionally.
November 18, 2024 at 5:40 PM
Not so for a generative AI.

"Knowing" maps neatly onto information recall. They're great at that!

"Applying" maps pretty well onto generating new content. They can do that too!

But they skip a step that people have to pass through - "Understanding".
November 18, 2024 at 5:37 PM
So you, a presumed human, have to learn and understand some ideas before you can apply them to a problem.

And you have to be able to apply them to a problem before you can, say, evaluate what someone else has done.
November 18, 2024 at 5:35 PM