Leo Meyerovich
@lmeyerov.bsky.social
Makes: graphistry.com/get-started / louie.ai / graphtheplanet.com

OSS: pygraphistry, gfql graph lang, apache arrow, GPU dataframes

Before: web FRP, socio-plt, parallel browsers, project domino

Data-intensive investigations with LLMs, GPUs, & graphs
5/5

... But fundamentally, as an AI team, it's hard to get excited by steam engine vendors & their VCs when our day-to-day is about electricity. Our team has been enjoying the fun interfaces, but abstaining: that's not where our work is.
April 27, 2025 at 3:24 AM
4/

The Python-only-era tools are flat, with occasional step improvements when OpenAI releases something 10% better

AI-native teams think in learning loops that compound over users and time

These look similar early on, and building loops is hard, so probably <5% of LLM devs do them today
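A compounding learning loop can be sketched minimally: log user feedback on past answers and recycle the best exchanges as few-shot context, so the same flow improves with usage. All names here are illustrative, not from any real framework.

```python
# Hypothetical sketch of a compounding learning loop: feedback on past
# answers is logged, and the highest-rated exchanges become tomorrow's
# few-shot prompt context. Names are illustrative only.
from collections import defaultdict

class LearningLoop:
    def __init__(self, k=3):
        self.k = k                      # max few-shot examples to reuse
        self.scores = defaultdict(int)  # (question, answer) -> net feedback

    def record_feedback(self, question, answer, thumbs_up):
        self.scores[(question, answer)] += 1 if thumbs_up else -1

    def few_shot_examples(self):
        # Highest-rated past exchanges, filtered to net-positive feedback
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [pair for pair, score in ranked[:self.k] if score > 0]

loop = LearningLoop()
loop.record_feedback("sum 2+2", "4", True)
loop.record_feedback("sum 2+2", "5", False)
loop.record_feedback("capital of France", "Paris", True)
print(loop.few_shot_examples())
```

The point is the shape, not the ranking function: a Python-only wrapper stays flat because nothing it sees today changes what it does tomorrow.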
April 27, 2025 at 3:22 AM
3/

Today's AI-native teams: We think about learning. If an agent is doing some MCP flow today, will it work better tomorrow? And how much better next week? The month after?
April 27, 2025 at 3:21 AM
2/

Before: Python frameworks competed on being thin LLM + RAG API wrappers. That meant minimizing # of lines of code for RAG/chat/CoT demos and maximizing # of connectors. Adding "agents/workflows" is checkboxing a few more patterns that, largely, look the same across them.
April 27, 2025 at 3:20 AM
5/

Curious what others are seeing and thinking about here!

(+, DM if at #RSAC / graph the planet next week!)
April 21, 2025 at 6:36 PM
4/

I’ve been surprisingly OK with the AI messing up

As we build louie.ai and I use it for my own work, I'm thinking a lot more about Vibes Investigating:

- what's working
- differences between software-centric and data-centric vibes flows
- dovetailing with automation as we bring AI to operations
April 21, 2025 at 6:35 PM
3/

Recent examples:

- Pairing on a big lawsuit. Live-editing viz + stats skipped a week of back-and-forth

- Identifying stats for that case. Ex: Median absolute deviation instead of stdev

- Mapping cyber pen test team logs + repurposing as a dashboard. Joins are 💩 to code but easy to describe!
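The MAD example above is the kind of stat swap that matters on messy case data: one bad record inflates stdev but barely moves median absolute deviation. A toy illustration (toy numbers, not the actual case data):

```python
# Why median absolute deviation (MAD) over stdev for messy data:
# a single outlier blows up stdev but barely moves MAD.
import statistics

def mad(xs):
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

clean = [10, 11, 9, 10, 12]
dirty = clean + [1000]            # one bad log entry

print(statistics.stdev(dirty))    # inflated by the outlier
print(mad(dirty))                 # robust: stays near the clean spread
```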
April 21, 2025 at 6:34 PM
2/

In Vibes Investigating, I make requests, see results, adjust, and repeat

No manual DB querying, data wrangling, or plotting API fiddling

At the end, I trash it as I would a Splunk/Google search result, or I share my AI notebook just like a regular Google document or Python notebook

(cont)
April 21, 2025 at 6:27 PM
Following your analogy, it sounds more like either we solve RL fine-tuning so we can do end-to-end DL, or we find a better path, e.g., learning qlora-like RL patches from CoT traces
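A minimal sketch of the data-curation half of that idea: keep only CoT traces whose outcome scored well, and emit them as fine-tuning pairs for a LoRA-style patch. Everything here is hypothetical (field names, threshold); the actual qlora training step is omitted.

```python
# Hypothetical curation of chain-of-thought traces for a LoRA-style patch:
# keep only traces whose final answer earned a high reward, then emit
# (prompt, trace) pairs as the fine-tuning set. Training itself not shown.
def curate_traces(traces, min_reward=0.8):
    """traces: list of dicts with 'prompt', 'cot', 'answer', 'reward' keys."""
    keep = [t for t in traces if t["reward"] >= min_reward]
    # Teach the base model to reproduce high-reward reasoning verbatim
    return [(t["prompt"], t["cot"] + "\n" + t["answer"]) for t in keep]

traces = [
    {"prompt": "p1", "cot": "think...", "answer": "42", "reward": 0.9},
    {"prompt": "p2", "cot": "guess...", "answer": "7", "reward": 0.2},
]
print(curate_traces(traces))  # only the high-reward trace survives
```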
February 5, 2025 at 4:02 AM
O-series models still do not have any fine-tuning documented. I am seeing teams gravitate to manual staging: a reasoner for initial planning, then feeding the rest to gpt4. We are looking at r1 - interesting times!
February 4, 2025 at 9:53 PM
Technically, I don't know if it's even viable via current tools like qlora. I'm guessing CoT will be fine, as base layers can be taught the common case, but truly new reasoning may be harder?
February 4, 2025 at 9:53 PM