raynr.bsky.social
@raynr.bsky.social
I’m here for AI, cybersecurity and politics, but try to avoid posting on the latter. Lurker.
I’ve never run into that. How?
November 23, 2025 at 12:48 AM
@radiofreetom.bsky.social you’re gonna need to present a word study on “federal” before most ppl understand what you mean.
November 23, 2025 at 12:46 AM
These aren’t mutually exclusive. 5% of AI projects could cause a 40% increase in efficiency overall.

Smallbiz is also far more likely to just let employees have at it and figure out how to make AI work for them, instead of running formal implementation projects
November 2, 2025 at 3:12 PM
Maybe it’s a clickbait-type headline, but it’s also elitist in a way I highly dislike
October 30, 2025 at 1:54 AM
Will this cause problems? Of course. I’m still supporting Access apps that should have been turned into real apps a decade ago.

But those Access apps also let companies create things and scale when a real dev was not available
October 30, 2025 at 1:54 AM
Vibe coding gives, and will keep giving, non-devs the ability to create software that otherwise simply would not be created.

It’s really REALLY easy to underestimate the power and efficiency that brings to non-techies outside of the Bay Area.
October 30, 2025 at 1:54 AM
This is also troubleshooting at a conference I am *exhibiting* at between sessions, so time was of the essence!
October 29, 2025 at 1:12 PM
Would I have gotten there on my own? Eventually, sure. But llm + Gemini turned this into a 15-minute fix instead of ~2ish hours.
October 29, 2025 at 1:02 PM
Answer: the Docker config automatically pulled the latest version of MongoDB. An unprompted server restart loaded MongoDB 8 instead of MongoDB 7, and v8 is not compatible with the v7 DB files.
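The fix is the usual one: pin the image tag instead of riding latest. A rough sketch (container and volume names here are made up):
```
# pin the major version so a restart can't silently pull mongo:8
docker run -d --name mongo -v mongo-data:/data/db mongo:7
```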
October 29, 2025 at 1:02 PM
85% of private businesses? By dollar value? It can’t be by absolute number when only ~5% ever make it past $1m in gross revenue.
October 28, 2025 at 10:58 AM
The difference is that AWS is effectively Amazon's core product and Google's is search (ads).

At least by profit.
October 20, 2025 at 10:02 PM
MCP+subagent makes the same query way more token-efficient.

And as a bonus, CC can run 2-3 subagents each running a different web search *in parallel* with their own query to find the top results.

/end
October 17, 2025 at 12:54 PM
The Firecrawl MCP server is another example that shines as a token-saver in a subtask. "You look stuck on XYZ, please stop and use a subagent to search the web for more info on library ABC"

COULD Python do that? With a Bing/Google library, sure, and get back 10 pages of results when you’re looking for 1. 7/
October 17, 2025 at 12:54 PM
Where the default model for llm is gemini-2.5-pro, allowing the orchestrator agent to search API docs holding ~750m tokens of info and pull out just the specifics it needs to answer the question.
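Setup sketch, assuming Simon Willison’s llm CLI with the llm-gemini plugin (the exact model ID may differ):
```
llm install llm-gemini
llm keys set gemini
llm models default gemini-2.5-pro
```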

Subagents can/should use MCP servers in the same way for token-heavy tasks. 6/
October 17, 2025 at 12:54 PM
Not an MCP, but demonstrative of #2: a lot of my CLAUDE.md files have this:

You can use the bash tool 'llm' to ask questions of the codebase or the docs; this is AMAZING for working with the massive API doc. Ex:
```
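# pipe the whole doc into llm; only the short answer flows back into the convo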
cat MyAPIDocs.html | llm "What are the properties for the contacts entity?"
```
5/
October 17, 2025 at 12:54 PM
In the same example as above, I got Claude Code to search info on 1000 firms, using those MCP servers, *all using subagents*. 10x subagents in parallel, each searching 100 firms. 4/
October 17, 2025 at 12:54 PM
2. Shift your thinking: treat the main LLM convo as the orchestrator, not the primary agent.

At least, make it more orchestrator-like. 3/
October 17, 2025 at 12:54 PM
1. A good MCP server can/should save on token usage. At work we use 5 custom MCP servers that search public directories for info on firms in our target industry. All stuff python/curl can do well, but wrapping it in MCP saves a gagjillion tokens. Roughly. 2/
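Wiring one of those into Claude Code is a one-liner. A sketch (the server name and path are made up):
```
# register a local MCP server with Claude Code
claude mcp add firm-directory -- python /opt/mcp/firm_directory_server.py
```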
October 17, 2025 at 12:54 PM