So long as our approach to intelligence remains vectorizing and compressing all human-identified concepts and their relationships, there will always be more work to do.
March 23, 2025 at 7:02 PM
Curious how "thinking" in LLMs involves spitting out a bunch of extra words, but our best moments of thinking often involve a profound silence right before. They both "work," but hmm curious
March 23, 2025 at 12:22 AM
For private, desktop computing (each of those words chosen carefully), we'll move away from fancy, overcomplicated interfaces back to basic file formats that map to fundamental tools for thought (document, video, table, flowchart, etc.), with AI doing the heavy lifting.
March 22, 2025 at 5:50 PM
Claude 3.7 (and probably the others) is so good at generating "good looking" design with terrible information hierarchy that I wonder how many teams are shipping stuff that unintentionally and subtly misdirects users.
March 22, 2025 at 4:39 PM
In some ways, the internet will be fine in a post-LLM steady state. Some of its best content has been raw data and research put up for free by financially protected academic institutions, which have pushed hard to open up in the last decade.
March 21, 2025 at 2:10 AM
The ideal human data collection UI is screen-recording everything on both sides of a remote work session, with a think-aloud, AI usage allowed. Scaled oversight.
March 20, 2025 at 7:03 PM
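A minimal sketch of what one record from such a session capture might look like. This is a hypothetical schema, not an existing spec; names like screenRecordings, thinkAloudTranscript, and aiInteractions are illustrative assumptions.

```typescript
// Hypothetical shape of one captured remote-work session, for illustration only.
// Field names and structure are assumptions, not from any existing tool.
interface WorkSessionRecord {
  sessionId: string;
  screenRecordings: {
    side: "worker" | "reviewer";   // both sides of the remote session are recorded
    videoUrl: string;              // full screen capture, start to finish
  }[];
  thinkAloudTranscript: {
    timestampMs: number;
    speaker: "worker" | "reviewer";
    text: string;                  // spoken reasoning, transcribed
  }[];
  aiInteractions: {
    timestampMs: number;
    tool: string;                  // which assistant was used
    prompt: string;
    response: string;              // AI usage is allowed, and logged rather than hidden
  }[];
}
```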
I want a biomechanics-vision-language model. I half joke that I should do yoga teacher training so I still have a job. But the physiotherapy part would be cool to encode.
Anyone know of one?
March 18, 2025 at 4:57 PM
Wow, is all of @bsky.app still built in @expo.dev? Kind of amazingly buttery smooth, didn't realize React Native could do that… does it use any standard UI libraries or roll its own?
March 18, 2025 at 3:30 AM
Programming languages (in their most common use) have a thinking-to-writing ratio. JSX is probably the lowest. But if you spend a lot of time in it you might think language models can do anything.
March 18, 2025 at 2:59 AM
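A rough illustration of that ratio, using a made-up component (not from any real codebase): almost every character below is markup and wiring, and the only real decision is which fields to show.

```tsx
import * as React from "react";

// Illustrative only: a typical JSX component where nearly every line is
// structural boilerplate rather than something that required much thought.
type User = { name: string; email: string; avatarUrl: string };

export function UserCard({ user }: { user: User }) {
  return (
    <div className="card">
      <img className="card-avatar" src={user.avatarUrl} alt={user.name} />
      <div className="card-body">
        <h2 className="card-title">{user.name}</h2>
        <p className="card-subtitle">{user.email}</p>
      </div>
    </div>
  );
}
```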