OSS: pygraphistry, gfql graph lang, apache arrow, GPU dataframes
Before: web FRP, socio-plt, parallel browsers, project domino
Data-intensive investigations with LLMs, GPUs, & graphs
... But fundamentally, as an AI team, it's hard to get excited by steam engine vendors & their VCs when our day-to-day is about electricity. Our team has been enjoying the fun interfaces, but abstaining: that's not where our work is.
The Python-only-era tools are flat, with occasional step improvements when OpenAI releases something 10% better
AI-native teams think in learning loops that compound over users and time
These look similar early on, and building loops is hard, so probably < 5% of LLM devs do them today
Today's AI-native teams: We think about learning. If an agent is doing some MCP flow today, will it work better tomorrow? And how much better next week? The month after?
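To make "loop" concrete, here's a minimal sketch in plain Python. The names (record_run, exemplars_for, run_agent) and the JSONL store are hypothetical, not louie.ai internals or any particular framework; the point is just that graded traces get saved and fed back in, so the same flow starts tomorrow from what already worked today.

```python
# Hypothetical sketch, not louie.ai internals: a learning loop that compounds
# by saving graded agent traces and replaying the best ones as few-shot context.
import json
from pathlib import Path

TRACE_STORE = Path("good_traces.jsonl")  # grows across users and time


def record_run(task: str, trace: list[dict], score: float, threshold: float = 0.8) -> None:
    """Keep only the traces a grader (human or model) scored well."""
    if score >= threshold:
        with TRACE_STORE.open("a") as f:
            f.write(json.dumps({"task": task, "trace": trace, "score": score}) + "\n")


def exemplars_for(task: str, k: int = 3) -> list[dict]:
    """Pull the best prior traces for a similar task to prepend as few-shot context."""
    if not TRACE_STORE.exists():
        return []
    runs = [json.loads(line) for line in TRACE_STORE.read_text().splitlines()]
    similar = [r for r in runs if task.lower() in r["task"].lower()]  # naive matching
    return sorted(similar, key=lambda r: r["score"], reverse=True)[:k]


# The loop (run_agent is whatever harness drives your MCP flow):
#   context = exemplars_for("map pen test logs to a dashboard")
#   result, trace, score = run_agent(task, context)
#   record_run(task, trace, score)
```

Swap the JSONL file for a vector store and the substring match for real retrieval and it's the same shape; flat vs compounding mostly comes down to whether anything like record_run exists at all.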
Before: Python frameworks competed on being thin LLM + RAG API wrappers. That meant minimizing # of lines of code for RAG/chat/CoT demos and maximizing # of connectors. Adding "agents/workflows" meant checkboxing a few more patterns that largely look the same across them.
I’ve been surprisingly OK with the AI messing up
As we build louie.ai and I use it for my own work, I'm thinking a lot more about Vibes Investigating:
- what's working
- differences between software-centric and data-centric vibes flows
- dovetailing with automation as we bring AI to operations
Recent examples:
- Pairing on a big lawsuit. Live-editing viz + stats skipped a week of back-and-forth
- Identifying stats for that case. Ex: Median absolute deviation instead of stdev (robust to outliers; quick sketch below)
- Mapping cyber pen test team logs + repurposing as a dashboard. Joins are 💩 to code but easy to describe!
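On the MAD point above: a toy comparison (made-up numbers, not the case data) of why median absolute deviation holds steady where stdev gets yanked around by a single outlier.

```python
# Toy numbers, not the case data: MAD barely moves when an outlier shows up,
# while stdev gets dragged by it.
import statistics


def mad(xs: list[float]) -> float:
    """Median absolute deviation: median of |x - median(xs)|."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)


clean = [102, 98, 101, 99, 100]
with_outlier = clean + [5000]

print(statistics.stdev(clean), statistics.stdev(with_outlier))  # ~1.6 vs ~2000
print(mad(clean), mad(with_outlier))                            # 1.0 vs 1.5
```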
In Vibes Investigating, I make requests, see results, adjust, and repeat
No manual DB querying, data wrangling, or plotting API fiddling
At the end, I trash it as I would a Splunk/Google search result, or I share my AI notebook just like a regular Google document or Python notebook
(cont)