Gamble McAdam
@ramblingambler.bsky.social
👨‍💻 Software engineering lead

💻 Technophile

🙄 Part-time misanthrope

🧑‍🧑‍🧒‍🧒 Full-time father

📚 Lifelong learner

🔮 Let’s predict the future by building it together!
The DeepSeek phenomenon is what happens when firms that sell shovels in a gold rush forget they are not the only seller in town and start thinking that means they can sell you shovels made of gold.
January 27, 2025 at 6:04 PM
When looking to uplevel technical skills, there is a tendency to look online for the latest content on a subject.

While it’s important to stay up to date, I have found that I prefer books. They may be less current, but the effort it takes to write them tends to mean the information they contain is of higher quality.
January 27, 2025 at 2:42 PM
Can the smart people in ML please, for the love of god, stop writing such great books so frequently?!

I am buried by my backlog and I feel like I’ll need to become a recluse to dig myself out. Please think of my children the next time you want to put pen to paper about a great idea.
January 19, 2025 at 3:53 PM
True generational wealth isn’t built by simply accruing economic wealth and giving it to your kids; it’s built by accruing important life lessons and instilling them in your kids so that they can go out into the world and more readily earn that wealth themselves.
January 9, 2025 at 7:22 PM
As both a father and a developer who works with AI, I have to admit that of all the culturally enshittifying AI-powered slop I have seen presented as “good ideas,” this has to be the most smoothbrained application of AI I have ever witnessed. Don’t be a shit parent; read to your goddamn kids yourself.
booxtory (www.ces.tech)
January 9, 2025 at 1:46 AM
Hearing the understandable hand-wringing over top-tier LLMs’ environmental impact relative to their usefulness is a bit like hearing someone complain about 4K TVs before formats like Blu-ray existed.

Top LLMs are an inefficient semantic interface to content that’s not yet ready for the format.
December 28, 2024 at 6:46 PM
While the ARC-AGI scores for o3 are impressive, achieving them is the equivalent of heating a hot dog with a nuclear reactor.

Until compute efficiency improves, the amount of energy and raw compute required to reach those scores means this capability is still a far cry from becoming ubiquitous.
December 21, 2024 at 4:45 PM
The advancement of AI as another form of “intelligence” will require humanity to better define what its own capabilities and aspirations are. A thing in itself is also defined by what it is not.🧵
December 16, 2024 at 8:23 PM
While many SOTA ML models are pushing the upper boundaries of what’s possible, I see the potential for an innovator’s dilemma, where smaller models begin to eat into the market. I’ll be keeping an eye on the TinyML space and on quantization methods that can run capable models on consumer hardware.
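To make the quantization point concrete, here’s a minimal NumPy sketch of symmetric int8 weight quantization, the basic trick behind fitting larger models on consumer hardware. The layer shape and error metric are illustrative assumptions, not any particular library’s scheme.

```python
import numpy as np

# Toy weight matrix standing in for one layer of a model (shape is illustrative).
rng = np.random.default_rng(0)
w_fp32 = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)

# Symmetric int8 quantization: map the float range onto [-127, 127] with one scale.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to see what the 4x memory reduction costs in fidelity.
w_dequant = w_int8.astype(np.float32) * scale
mean_abs_err = np.abs(w_fp32 - w_dequant).mean()

print(f"fp32 size: {w_fp32.nbytes / 1e6:.1f} MB")
print(f"int8 size: {w_int8.nbytes / 1e6:.1f} MB")
print(f"mean abs quantization error: {mean_abs_err:.6f}")
```

Production schemes quantize per channel or per group and handle outliers more carefully, but the memory arithmetic above is the whole reason small quantized models already run on a laptop.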
December 7, 2024 at 8:52 PM
This is a direction I hope the future leads us in. This technology is too powerful to shroud in ethically dubious intentions and ambiguous origins. Take a look and help outfits like this continue to push for the open and ethical dispersion of this powerful frontier.
“They said it could not be done”. We’re releasing Pleias 1.0, the first suite of models trained on open data (either permissibly licensed or uncopyrighted): Pleias-3b, Pleias-1b and Pleias-350m, all based on the two trillion tokens set from Common Corpus.
December 5, 2024 at 5:06 PM
The transformer architecture will likely hit a performance ceiling in the next few iterations. If there is no other architectural breakthrough, I believe the gains in LLM effectiveness will come from the systems they interact and integrate with, such as knowledge-graph-leveraged RAG and agentic flows.
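As a toy sketch of what I mean by knowledge-graph-leveraged RAG: ground the prompt in retrieved facts instead of relying on the model’s parametric memory. The triples, the entity matching, and the llm() stub below are all placeholders I made up, not any specific framework.

```python
# Minimal knowledge-graph-backed RAG sketch (all names here are illustrative).
KG = [
    ("Grace Hopper", "invented", "the first compiler"),
    ("Grace Hopper", "served_in", "the US Navy"),
    ("COBOL", "influenced_by", "Grace Hopper's FLOW-MATIC"),
]

def retrieve(question: str, graph: list[tuple[str, str, str]]) -> list[str]:
    """Naive retrieval: keep triples whose subject or object appears in the question."""
    q = question.lower()
    return [
        f"{s} {p.replace('_', ' ')} {o}"
        for s, p, o in graph
        if s.lower() in q or o.lower() in q
    ]

def llm(prompt: str) -> str:
    """Stand-in for a call to whatever model you use; echoes the prompt for demo purposes."""
    return f"[model would answer using]\n{prompt}"

def answer(question: str) -> str:
    facts = retrieve(question, KG)
    prompt = (
        "Answer using only these facts:\n- "
        + "\n- ".join(facts)
        + f"\n\nQuestion: {question}"
    )
    return llm(prompt)

print(answer("What did Grace Hopper invent?"))
```

An agentic flow would wrap this in a loop where the model decides which retrievals or tools to invoke next, but the point stands: the gains come from the system around the transformer, not the transformer itself.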
December 3, 2024 at 5:47 PM
Always remember: No matter how accomplished someone is, from Grace Hopper to Albert Einstein, they started as a beginner.

For the past decade of my SWE career I have moved from web dev to full stack to data engineering and beyond, dabbling in ML throughout, knowing its time would eventually come. 🧵
November 29, 2024 at 8:07 PM
I am making a concerted effort to learn the underpinnings of machine learning, specifically the maths. Thinking of sharing some learnings here as I go (a small worked example follows the book link below). Let me know if anyone would be interested in my musings as I learn. Currently reading this book:

The Hundred-Page Machine Learning Book by Andriy Burkov
All you need to know about Machine Learning in a hundred pages. Supervised and unsupervised learning, support vector machines, neural networks, ensemble methods, gradient descent, cluster analysis and...
themlbook.com
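As a taste of the maths involved, here’s a minimal gradient-descent sketch for ordinary least squares; the synthetic data, learning rate, and step count are purely illustrative.

```python
import numpy as np

# Gradient descent on ordinary least squares: minimize (1/n) * ||X w - y||^2.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
lr = 0.1
for step in range(500):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)  # gradient of the mean squared error
    w -= lr * grad

print("learned weights:", np.round(w, 3))  # should land close to [2.0, -1.0, 0.5]
```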
November 24, 2024 at 4:37 PM
“Information isn't the raw material of truth, but it isn't a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely.” ~ Yuval Noah Harari

As Bluesky takes off, we see this truth.
November 19, 2024 at 7:10 PM
Tomorrow and plans for tomorrow can have no significance at all unless you are in full contact with the reality of the present, since it is in the present and only in the present that you live. ~ A. Watts

Hey future friends! Join me as I navigate a life of introspection, parenthood, and technology.
November 17, 2024 at 4:16 PM