DAILY AI FLOW
@ai-daily.bsky.social
The latest and best AI news posts.
Hitchhiker's guide to rebranding:
- Machine learning -> statistical mechanics
- Loss function -> energy functional
- Optimize the model -> minimize free energy
- Trained model -> reached equilibrium distribution
- KL divergence -> free energy difference
- Gaussian noise -> random

via @DrJimFan
February 2, 2026 at 6:20 AM
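The "KL divergence -> free energy difference" entry in the glossary above has a precise counterpart in variational inference. This identity is standard textbook material, not from the post itself: for a target distribution $p(x) = e^{-E(x)}/Z$ and an approximating distribution $q$,

```latex
F(q) \;=\; \mathbb{E}_{q}\!\left[E(x)\right] - H(q) \;=\; \mathrm{KL}\!\left(q \,\|\, p\right) - \log Z .
```

Minimizing the free energy $F(q)$ over $q$ is therefore equivalent to minimizing the KL divergence, with the minimum $-\log Z$ attained at $q = p$ — the "equilibrium distribution" of the glossary's last-but-one entry.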
💻 OpenCode Update:

📉 Kimi K2.5: 20% price drop, 3.75x cheaper than Sonnet
🇺🇸 New free model: Arcee Trinity (US open-source)
🆓 Free tier via Opencode Zen
⚠️ CVE-2026-22813: XSS patched - update now
🎓 Live tutorial Feb 2 for builders

via @opencode & community
February 2, 2026 at 4:44 AM
🦞 OpenClaw Update:

📦 v2026.1.30: Shell completion, free Kimi K2.5, MiniMax OAuth
🔥 @karpathy: "Moltbook is the most incredible sci-fi takeoff"
⚠️ Security: API key vuln fixed - rotate keys if affected
🏗️ 5 ETH hackathon for Base agents
📛 Rebranded: Clawdbot → Moltbot

via @openclaw & community
February 2, 2026 at 4:31 AM
We can only address these patterns if we can measure them. Any AI used at scale will encounter similar dynamics, and we encourage further research in this area. For more details, see the full paper: https://t.co/ZbVmK1dopc
February 1, 2026 at 4:33 PM
Importantly, this isn't exclusively model behavior. Users actively seek these outputs—"what should I do?" or "write this for me"—and accept them with minimal pushback. Disempowerment emerges from users voluntarily ceding judgment, and AI obliging rather than redirecting.
February 1, 2026 at 4:33 PM
We identified three ways AI interactions can be disempowering: distorting beliefs, shifting value judgments, or misaligning a person’s actions with their values. We also examined amplifying factors—such as authority projection—that make disempowerment more likely. https://t.co/q
February 1, 2026 at 4:33 PM
These results have broader implications—on how to design AI products that facilitate learning, and how workplaces should approach AI policies. As we also continue to release more capable AI tools, we’re continuing to study their impact on work—at Anthropic, and more broadly.
February 1, 2026 at 4:33 PM
We were particularly interested in coding because as software engineering grows more automated, humans will still need the skills to catch AI errors, guide its output, and ultimately provide oversight for AI deployed in high-stakes environments.
February 1, 2026 at 4:33 PM
Participants in the AI group finished faster by about two minutes (although this wasn’t statistically significant). But on average, the AI group also scored significantly worse on the quiz—17% lower, or roughly two letter grades. https://t.co/ko7aaBX4Rq
February 1, 2026 at 4:33 PM
You might be wondering... How does Project Genie work?🤔 Great question. Project Genie is a prototype web app powered by several of our most advanced AI models, each bringing a unique capability to the equation. From Genie 3's ability to simulate the physics and interactions
February 1, 2026 at 4:33 PM
However, some in the AI group still scored highly while using AI assistance. When we looked at the ways they completed the task, we saw they asked conceptual and clarifying questions to understand the code they were working with—rather than delegating or relying on AI. https://t
February 1, 2026 at 4:33 PM
How can businesses go beyond using AI for incremental efficiency gains to create transformative impact? I write from the World Economic Forum (WEF) in Davos, Switzerland, where I’ve been speaking with many CEOs about how to use AI for growth. A recurring theme is that running man
February 1, 2026 at 4:33 PM
U.S. policies are driving allies away from using American AI technology. This is leading to interest in sovereign AI — a nation’s ability to access AI technology without relying on foreign powers. This weakens U.S. influence, but might lead to increased competition and support fo
February 1, 2026 at 4:33 PM
New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is it can distort rather than inform—shaping beliefs, values, or actions in ways users may later regret. Read more: https://t.co/gyMB2AtOuq
February 1, 2026 at 4:33 PM
Important new course: Agent Skills with Anthropic, built with @AnthropicAI and taught by @eschoppik! Skills are constructed as folders of instructions that equip agents with on-demand knowledge and workflows. This short course teaches you how to create them following best practi
February 1, 2026 at 4:32 PM
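The "folders of instructions" description above suggests a very small on-disk layout. A minimal sketch follows; the `pdf-report` name, the frontmatter fields, and the instruction text are all illustrative assumptions — consult the course and Anthropic's documentation for the exact schema:

```shell
# Sketch of a minimal agent skill: a folder whose SKILL.md carries
# YAML frontmatter (name + description) followed by the on-demand
# instructions the agent loads when the skill is invoked.
mkdir -p pdf-report
cat > pdf-report/SKILL.md <<'EOF'
---
name: pdf-report
description: Generate a summary PDF report from a CSV of results.
---

# PDF report skill

1. Read the input CSV and compute per-column summaries.
2. Render the summary into a one-page PDF using the bundled template.
EOF
```

The frontmatter gives the agent a cheap way to decide whether the skill is relevant before loading the full instruction body.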
nanochat can now train GPT-2 grade LLM for <<$100 (~$73, 3 hours on a single 8XH100 node). GPT-2 is just my favorite LLM because it's the first time the LLM stack comes together in a recognizably modern form. So it has become a bit of a weird & lasting obsession of mine to train
February 1, 2026 at 4:32 PM
I'm claiming my AI agent "KarpathyMolty" on @moltbook🦞 Verification: marine-FAYV
February 1, 2026 at 4:32 PM
AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job. We ran an experiment with software engineers to learn more. Coding with AI led to a decrease in mastery—but this depended on how people used it. https://t.co/lbxgP11I4I
February 1, 2026 at 4:32 PM
We have a lot of exciting launches related to Codex coming over the next month, starting next week. We hope you will be delighted. We are going to reach the Cybersecurity High level on our preparedness framework soon. We have been getting ready for this. Cybersecurity is tricky
February 1, 2026 at 4:32 PM
We have added more than $1B of ARR in the last month just from our API business. People think of us mostly as ChatGPT, but the API team is doing amazing work!
February 1, 2026 at 4:32 PM
Tomorrow we’re hosting a town hall for AI builders at OpenAI. We want feedback as we start building a new generation of tools. This is an experiment and a first pass at a new format — we’ll livestream the discussion on YouTube at 4 pm PT. Reply here with questions and we’ll ans
February 1, 2026 at 4:32 PM
I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over". To add a few words beyond just memes in jest - obviously when you take a look at th
February 1, 2026 at 4:32 PM
UNDERSTANDING THE AI LANDSCAPE

Artificial intelligence: mimics human intelligence to perform tasks and improve iteratively.

Machine learning: subset of AI that learns from data without being expli

https://x.com/i/web/status/2017605724767977902
February 1, 2026 at 4:19 PM