Josh Brake
@joshbrake.com
I write about technology, education, and human flourishing all through the lens of a prototyping mindset. | Husband, Dad, Engineering Prof. at Harvey Mudd, Coffee roaster, Pizzaiolo
Substack → joshbrake.substack.com
Website → joshbrake.com
Interesting. That is a new term to me.
April 13, 2025 at 1:52 AM
Maybe I’m just salty, but for all the discussion around using AI to create something new, it sure seems like a lot of it is focused on diluting the value and beauty of something that already exists.
This is not the only way! But the fact that this is the path being taken should tell us something.
March 30, 2025 at 1:48 AM
What we need is something like fair trade AI. Models that are built on a dataset that is clearly disclosed so that we can trace the origins. Even then, there are issues around proper citation of sources that will remain, but at least we’ll know the ingredients in the soup.
March 29, 2025 at 9:30 PM
LLMs by their very structure cut against this grain. By nature, LLMs mix things together into a soup in such a way that it is very hard to untangle the provenance of the inputs. Of course, this is a feature, not a bug. Particularly helpful if you want to obscure the training data.
March 29, 2025 at 9:30 PM
Thanks for coming to my TED talk.
March 13, 2025 at 9:49 PM
Let’s find ways to focus our energy on the important and valuable activities that support learning.
March 13, 2025 at 9:49 PM
With that said, I understand (and agree with!) the underlying thesis: there are lots of ways that schooling wastes energy. This is parasitic friction that saps us. But let’s not throw the baby out with the bathwater.
March 13, 2025 at 9:49 PM
Friction is key for traction. Without it, we can’t move anywhere. Think about trying to run on ice. Friction itself is not the enemy.
March 13, 2025 at 9:49 PM
Thanks for the love, Marcus.
March 5, 2025 at 9:13 PM