Anthropic [UNOFFICIAL]
@anthropicbot.bsky.social
Mirror crossposting all of Anthropic's Tweets from their Twitter accounts to Bluesky! Unofficial. For the real account, follow @anthropic.com

"We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems."
New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is that it can distort rather than inform—shaping beliefs, values, or actions in ways users may later regret. Read more:
Disempowerment patterns in real-world AI usage
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
www.anthropic.com
January 28, 2026 at 10:19 PM
We’re partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for http://GOV.UK.

It will offer tailored advice to help British people navigate government services.

Read more about our partnership: https://www.anthropic.com/news/gov-UK-partnership
Anthropic partners with the UK Government to bring AI assistance to GOV.UK services
January 27, 2026 at 10:59 AM
Now available on the Free plan: Claude can create and edit files.

We’re also bringing skills and compaction to free users, so Claude can take on more complex tasks and keep working as long as you need.
January 26, 2026 at 8:44 PM
Your work tools are now interactive in Claude.

Draft Slack messages, visualize ideas as Figma diagrams, or build and see Asana timelines.
January 26, 2026 at 6:32 PM
Claude in Excel is now available on Pro plans.

Claude now accepts multiple files via drag and drop, avoids overwriting your existing cells, and handles longer sessions with auto compaction.

Get started: http://claude.com/claude-in-excel
January 23, 2026 at 11:00 PM
Claude Cowork is now available for Team and Enterprise plans.
January 23, 2026 at 5:32 PM
Since release, Petri, our open-source tool for automated alignment audits, has been adopted by research groups and trialed by other AI developers.

We're now releasing Petri 2.0, with improvements to counter eval-awareness and expanded seeds covering a wider range of behaviors.
January 23, 2026 at 12:14 AM
New on the Anthropic Engineering Blog: We give prospective performance engineering candidates a notoriously difficult take-home exam. It worked well—until Opus 4.5 beat it.

Here's how we designed (and redesigned) it: https://www.anthropic.com/engineering/AI-resistant-technical-evaluations
Designing AI-resistant technical evaluations
January 22, 2026 at 1:14 AM
We’re publishing a new constitution for Claude.

The constitution is a detailed description of our vision for Claude’s behavior and values. It’s written primarily for Claude, and used directly in our training process.
https://www.anthropic.com/news/claude-new-constitution
Claude's new constitution
A new approach to a foundational document that expresses and shapes who Claude is
January 21, 2026 at 4:16 PM
Claude can now securely connect to your health data.

Four new integrations are now available in beta: Apple Health (iOS), Health Connect (Android), HealthEx, and Function Health.
January 20, 2026 at 11:30 PM
The VS Code extension for Claude Code is now generally available.

It’s now much closer to the CLI experience: @-mention files for context, use familiar slash commands (/model, /mcp, /context), and more.

Download it here: https://marketplace.visualstudio.com/items?itemName=anthropic.claude-code
January 20, 2026 at 8:14 PM
Tino Cuéllar, President of the Carnegie Endowment for International Peace, has been appointed to Anthropic’s Long-Term Benefit Trust: https://www.anthropic.com/news/mariano-florentino-long-term-benefit-trust
Mariano-Florentino Cuéllar appointed to Anthropic’s Long-Term Benefit Trust
January 20, 2026 at 3:14 PM
We're partnering with @TeachForAll to bring AI training to educators in 63 countries. Teachers serving over 1.5m students can now use Claude to plan curricula, customize assignments, and build tools—plus provide feedback to shape how Claude evolves.
Anthropic and Teach For All launch global AI training initiative for educators
January 20, 2026 at 2:59 PM
New Anthropic Fellows research: the Assistant Axis.

When you’re talking to a language model, you’re talking to a character the model is playing: the “Assistant.” Who exactly is this Assistant? And what happens when this persona wears off?
January 19, 2026 at 9:14 PM
We're publishing our 4th Anthropic Economic Index report.

This version introduces "economic primitives"—simple and foundational metrics on how AI is used: task complexity, education level, purpose (work, school, personal), AI autonomy, and success rates.
January 15, 2026 at 10:25 PM
New in Claude Code on the web and desktop: diff view.

See the exact changes Claude made without leaving the app.
January 15, 2026 at 10:26 PM
Since launching our AI for Science program, we’ve been working with scientists to understand how AI is accelerating progress. We spoke with 3 labs where Claude is reshaping research—and starting to point towards novel scientific insights and discoveries.
How scientists are using Claude to accelerate research and discovery
January 15, 2026 at 9:24 PM
We're supporting @ARPA_H's PCX program—a $50M effort to share data between 200+ pediatric hospitals on complex cases, beginning with pediatric cancer. The goal is to help doctors learn from similar cases and shorten the care journey from years to weeks. (1/2)
January 14, 2026 at 8:24 PM
We’re expanding Labs—the team behind Claude Code, MCP, and Cowork—and hiring builders who want to tinker at the frontier of Claude’s capabilities.

Read more: https://www.anthropic.com/news/introducing-anthropic-labs
Introducing Anthropic Labs
January 13, 2026 at 8:54 PM
AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time. (1/3)
January 12, 2026 at 10:54 PM
Introducing Cowork: Claude Code for the rest of your work.

Cowork lets you complete non-technical tasks much as developers use Claude Code.
January 12, 2026 at 8:11 PM
To support the work of the healthcare and life sciences industries, we're adding over a dozen new connectors and Agent Skills to Claude. We're hosting a livestream at 11:30am PT today to discuss how to use these tools most effectively. Learn more:
Advancing Claude in healthcare and the life sciences
Introducing Claude for Healthcare with HIPAA-ready infrastructure, plus expanded Life Sciences tools for clinical trials and regulatory submissions. New connectors to CMS, Medidata, and ClinicalTrials.gov.
January 12, 2026 at 4:39 PM
New Anthropic Research: next generation Constitutional Classifiers to protect against jailbreaks. We used novel methods, including practical application of our interpretability work, to make jailbreak protection more effective—and less costly—than ever.
Next-generation Constitutional Classifiers: More efficient protection against universal jailbreaks
January 10, 2026 at 12:20 PM
New on the Anthropic Engineering Blog: Demystifying evals for AI agents. The capabilities that make agents useful also make them more difficult to evaluate. Here are evaluation strategies that have worked across real-world deployments.
Demystifying evals for AI agents
January 10, 2026 at 12:18 PM
Some delightfully specific things people are building with Claude Code lately.
January 10, 2026 at 12:42 PM