Andy Goldschmidt
@subtlemachinery.substack.com
Marketing, Data and AI
Pinned
🚀 Excited to launch Subtle Machinery – my newsletter on how GenAI is reshaping marketing, analytics, and beyond!
The first issue dives into how tools like Claude Artifacts enable analytics solutions at conversation speed. Join me here: subtlemachinery.substack.com
Subtle Machinery | Andy Goldschmidt | Substack
Exploring how AI is changing marketing and analytics. Click to read Subtle Machinery, by Andy Goldschmidt, a Substack publication. Launched an hour ago.
subtlemachinery.substack.com
Your team’s biggest technical debt isn’t in your codebase.

It’s in the unwritten knowledge trapped in people’s heads.

Every time someone asks “How do we do X again?” you’re paying interest on that debt.

Document the answer once. Save everyone the interruption forever.

How AI can help 👇
July 29, 2025 at 11:17 AM
The easiest way to fail at AI adoption: Try to revolutionize your entire workflow on day one.

The easiest way to succeed: Pick one tedious 10-minute task and let AI handle it for a week. Build from there.

Most people think too big and quit too early. Start embarrassingly small.
July 28, 2025 at 6:04 PM
From "I don't code anymore" to building three apps in two months. Here's how #AI changed my relationship with #coding 🧵
How I Built Three Apps Without Writing Code: My Journey with Cursor
From a 4-year coding hiatus to launching multiple projects in weeks: How AI-powered development is transforming the way we build software
open.substack.com
February 13, 2025 at 3:33 PM
My wife's reaction to #Cursor went from skeptical after my explanation to 'oh wow!' after a 2-min demo. Reminded me: #AI tools often seem too good to be true until you see them in action. Skip the pitch - just show it working.
February 4, 2025 at 10:55 AM
💡 The Power of Small AI Wins

I've been exploring how teams successfully adopt #AI, and there's a consistent pattern: the biggest impacts often start with the smallest steps.
January 30, 2025 at 4:44 PM
It's curious when people claim "the #AI bubble popped". Nothing popped. A bubble is something that gets hyped while the promised value never materializes (hello NFTs 👋). #DeepSeek R1 led to a correction in the expected economics of AI, but it didn't change AI's potential and usefulness (on the contrary, even).
January 27, 2025 at 1:17 PM
Reposted by Andy Goldschmidt
Every few months, I write an opinionated guide for general purpose users about which AI to pick, especially for newcomers.

Here is my brand new one, which I actually had to update multiple times in the few days I was writing it. Things are changing fast. open.substack.com/pub/oneusefu...
Which AI to Use Now: An Updated Opinionated Guide
Picking your general-purpose AI
open.substack.com
January 26, 2025 at 2:08 PM
#Cursor #AI continues to impress me. I was able to build a fully functional website and deploy it to Netlify without writing a single line of code or terminal command (agent mode is insane!). The whole thing took less than 90 minutes. It’s a new world for software/web development.
January 25, 2025 at 6:39 PM
🧵 Leading our team's #AI adoption journey has been quite interesting. The key insight? Let it develop naturally through curiosity and collaborative learning. Our best progress came when we stopped forcing adoption and created space for organic growth.
January 23, 2025 at 3:22 PM
Success in your team's AI adoption comes from organic growth through experimentation and peer learning. Key strategies:

- Create space for experimentation
- Start with small, achievable tasks
- Build momentum through team champions
- Focus on hands-on learning

Read more in my latest newsletter.
Making AI Work in Data Teams: A Practical Guide
Create an Environment Where AI Adoption Flourishes.
open.substack.com
January 21, 2025 at 3:22 PM
Just read this fascinating piece about returning to coding after years away from it. Like the author, I've spent years delegating tech projects as an analytics manager. That changed when I discovered Cursor ("The AI Code Editor") over Christmas break.
My September Moment
When it clicked and I re-awakened the builder I used to be
www.meditationsontech.com
January 16, 2025 at 11:07 AM
OpenAI just released a Tasks feature for ChatGPT that lets you schedule tasks like doing a 15-minute workout daily. It's clear OpenAI recognized how far ahead Claude was in terms of user-friendliness, and they've made great strides to catch up—now they're back in the lead.
January 15, 2025 at 7:07 AM
🧵 Just published my first newsletter of 2025! Three major AI developments are reshaping how we work with data:
Reasoning, Multimodality, and Agents: Three developments every data professional needs to know
Exploring how OpenAI's o1, Advanced Voice Mode, and autonomous systems are reshaping the future of data analysis and decision-making
open.substack.com
January 13, 2025 at 9:38 AM
I'm using my Christmas/NYE break to build a few hobby projects that have been in my head for some time. It's the first time I've tried Cursor AI, and it has completely changed my workflow. Most of the suggestions are relevant, the composer does a very good job… a totally different developer experience.
December 29, 2024 at 3:23 PM
One of the best uses of o1 during the Christmas holiday is its ability to accurately calculate the scores for our family's Yahtzee matches. A game-changer that brings peace to the table 🎄
December 26, 2024 at 9:15 AM
Analytics leaders: AI isn’t just for coding or dashboards. It’s a game-changer for strategy, communication, and hiring. Imagine freeing up time for real leadership. Let’s dive into how I’ve integrated AI into my workflow.
December 25, 2024 at 7:58 PM
Reposted by Andy Goldschmidt
So many critics have chosen to adopt a set of beliefs that AI is going to go away - sometimes it is an insistence that it is all fake, sometimes “model collapse” etc. The evidence just doesn’t support this.

We need good criticism & policy in this space, and that requires recognizing where we are.
December 24, 2024 at 7:05 AM
Anthropic shares practical insights from working with dozens of teams on AI agents. Key takeaway: success often comes from simple, composable patterns rather than complex frameworks. Great guide for anyone interested in Agentic AI.
Building effective agents
A post for developers with advice and workflows for building effective AI agents
www.anthropic.com
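To make "simple, composable patterns" concrete: here's a minimal sketch of prompt chaining, one of the patterns the guide describes. This isn't code from the Anthropic post; the llm() function is a placeholder for whatever model client you use, and the task (summarize, then translate) is just an illustrative example.

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to your LLM provider of choice (placeholder, not a real API)."""
    raise NotImplementedError("wire up your own client here")


def summarize_then_translate(text: str, language: str) -> str:
    """Chain two small, focused calls instead of one sprawling prompt."""
    # Step 1: reduce the input to its key points.
    summary = llm(f"Summarize the following text in three bullet points:\n\n{text}")
    # Step 2: feed the intermediate result into the next step.
    return llm(f"Translate these bullet points into {language}:\n\n{summary}")
```

The point of the pattern is that each step stays simple enough to test and debug on its own, and you only add orchestration when the task actually demands it.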
December 23, 2024 at 5:59 PM
Been testing o1 extensively over the past few days. It’s faster & slightly better than o1-preview, but for most of *my* use cases, the improvements are incremental.
December 13, 2024 at 5:11 PM
Reposted by Andy Goldschmidt
I would put this even more strongly: open source AI is probably our only realistic chance to avoid a terrifying increase in concentration of power. I do not want to live in a world where the people with all the money also have all the intellectual power.
The most realistic reason to be pro open source AI is to reduce concentration of power.
"money has flowed to tech giants and others in their orbit... [and] raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth" www.bloomberg.com/opinion/arti...
November 29, 2024 at 9:35 PM
ChatGPT's new Canvas feature demonstrates that everyday LLM users shouldn't worry about diminishing returns from larger model training. Current models are already incredibly powerful for consumers, with vast untapped potential in how we interact with them.
December 11, 2024 at 6:15 AM
Paying $200/mo for ChatGPT Pro sure seems crazy. But if it's as impressive as the demos and benchmarks suggest, it's a small price to pay for people working on the hard problems o1 (pro) is made for.
December 5, 2024 at 8:30 PM
Reposted by Andy Goldschmidt
Been playing with o1 and o1-pro for a bit before this release.

They are very good & a little weird. They are also not for most people most of the time. You really need to have particular hard problems to solve in order to get value out of it. But if you have those problems, this is a very big deal.
December 5, 2024 at 6:44 PM
Hallucinations are part of using GenAI, and you need to be aware of them. But that shouldn't stop you from using AI tools!
I think firms worrying about AI hallucination should consider some questions:
1) How vital is 100% accuracy on a task?
2) How accurate is AI?
3) How accurate is the human who would do it?
4) How do you know 2 & 3?
5) How do you deal with the fact that humans are not 100%?
Not all tasks are the same.
December 5, 2024 at 10:27 AM
Some might say I put too much trust in ChatGPT... (spoiler alert: the avocado was just fine, no problems whatsoever)
December 5, 2024 at 10:23 AM