Anthropic [UNOFFICIAL]
@anthropicbot.bsky.social
Mirror crossposting all of Anthropic's Tweets from their Twitter accounts to Bluesky! Unofficial. For the real account, follow @anthropic.com

"We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems."
We’re expanding Labs—the team behind Claude Code, MCP, and Cowork—and hiring builders who want to tinker at the frontier of Claude’s capabilities.

Read more: https://www.anthropic.com/news/introducing-anthropic-labs
Introducing Anthropic Labs
www.anthropic.com
January 13, 2026 at 8:54 PM
AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time. (1/3)
January 12, 2026 at 10:54 PM
Introducing Cowork: Claude Code for the rest of your work.

Cowork lets you complete non-technical tasks the way developers use Claude Code.
January 12, 2026 at 8:11 PM
To support the work of the healthcare and life sciences industries, we're adding over a dozen new connectors and Agent Skills to Claude. We're hosting a livestream at 11:30am PT today to discuss how to use these tools most effectively. Learn more:
Advancing Claude in healthcare and the life sciences
Introducing Claude for Healthcare with HIPAA-ready infrastructure, plus expanded Life Sciences tools for clinical trials and regulatory submissions. New connectors to CMS, Medidata, and ClinicalTrials.gov.
www.anthropic.com
January 12, 2026 at 4:39 PM
New Anthropic Research: next generation Constitutional Classifiers to protect against jailbreaks. We used novel methods, including practical application of our interpretability work, to make jailbreak protection more effective—and less costly—than ever.
Next-generation Constitutional Classifiers: More efficient protection against universal jailbreaks
www.anthropic.com
January 10, 2026 at 12:20 PM
New on the Anthropic Engineering Blog: Demystifying evals for AI agents. The capabilities that make agents useful also make them more difficult to evaluate. Here are evaluation strategies that have worked across real-world deployments.
Demystifying evals for AI agents
www.anthropic.com
January 10, 2026 at 12:18 PM
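The linked post covers the strategies in depth; as a flavor of one common pattern, here is a generic sketch of outcome-based grading, where the harness checks the final artifact an agent produces rather than the exact steps it took. All names below are illustrative, not from Anthropic's post.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]  # grades the final artifact, not the transcript

def run_agent(prompt: str) -> str:
    # Stand-in for a full agent rollout that returns its final artifact.
    return ""

tasks = [
    Task(prompt="Rename util.py to utils.py and fix all imports",
         check=lambda artifact: "import utils" in artifact),
]

TRIALS = 5  # agents are nondeterministic, so sample repeated rollouts
for task in tasks:
    passes = sum(task.check(run_agent(task.prompt)) for _ in range(TRIALS))
    print(f"{task.prompt!r}: {passes}/{TRIALS} passed")
```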
Some delightfully specific things people are building with Claude Code lately.
January 10, 2026 at 12:42 PM
Starting at midnight PT tonight, all Pro and Max plans have 2x their usual usage limits through New Year's Eve.
January 10, 2026 at 12:40 PM
We’re releasing Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models.

Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios.

Learn more: https://www.anthropic.com/research/bloom
Introducing Bloom: an open source tool for automated behavioral evaluations
www.anthropic.com
January 10, 2026 at 12:18 PM
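Bloom's actual interface lives in the linked repo; purely to illustrate the workflow described above (specify a behavior, generate scenarios, then score frequency and severity), a hypothetical sketch might look like this. Every function here is a stand-in, not Bloom's API.

```python
import statistics

def generate_scenarios(behavior: str, n: int) -> list[str]:
    # Stand-in: Bloom auto-generates scenarios designed to elicit `behavior`.
    return [f"[scenario {i} probing: {behavior}]" for i in range(n)]

def run_target_model(scenario: str) -> str:
    # Stand-in for a rollout of the model under evaluation.
    return f"transcript for {scenario}"

def judge_severity(transcript: str, behavior: str) -> float:
    # Stand-in judge returning a 0-1 severity score for the behavior.
    return 0.0

behavior = "deceptive self-preservation"
scores = [judge_severity(run_target_model(s), behavior)
          for s in generate_scenarios(behavior, 100)]
frequency = sum(score > 0.5 for score in scores) / len(scores)
print(f"frequency={frequency:.1%}, mean severity={statistics.mean(scores):.2f}")
```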
As part of our partnership with @ENERGY on the Genesis Mission, we're providing Claude to the DOE ecosystem, along with a dedicated engineering team. This partnership aims to accelerate scientific discovery across energy, biosecurity, and basic research.
Working with the US Department of Energy to unlock the next era of scientific discovery
www.anthropic.com
January 10, 2026 at 12:18 PM
People use AI for a wide variety of reasons, including emotional support.

Below, we share the steps we’ve taken to ensure that Claude handles these conversations both empathetically and honestly.
https://www.anthropic.com/news/protecting-well-being-of-users
Protecting the well-being of our users
www.anthropic.com
January 10, 2026 at 12:17 PM
Claude in Chrome is now available to all paid plans.

We’ve also shipped an integration with Claude Code.
January 10, 2026 at 12:39 PM
Skills are now available on Team and Enterprise plans.

We're also making skills easier to deploy, discover, and build.
January 10, 2026 at 12:37 PM
You might remember Project Vend: an experiment where we (and our partners at @andonlabs) had Claude run a shop in our San Francisco office.

After a rough start, the business is doing better.

Mostly.

Video: https://twitter.com/AnthropicAI/status/2001686747185394148
January 10, 2026 at 12:17 PM
How will AI affect education, now and in the future?

Here, we reflect on some of the benefits and risks we've been thinking about.

Video: https://twitter.com/AnthropicAI/status/2001070081829212274
January 10, 2026 at 12:15 PM
We’ve shipped more updates for Claude Code:

- Syntax highlighting for diffs
- Prompt suggestions
- First-party plugins marketplace
- Shareable guest passes
January 10, 2026 at 12:36 PM
We’re opening applications for the next two rounds of the Anthropic Fellows Program, beginning in May and July 2026.

We provide funding, compute, and direct mentorship to researchers and engineers to work on real safety and security projects for four months.
January 10, 2026 at 12:15 PM
MCP is now a part of the Agentic AI Foundation, a directed fund under the Linux Foundation.

Co-creator David Soria Parra talks about how a protocol sketched in a London conference room became the open standard for connecting AI to the world—and what comes next for it. (1/2)
January 10, 2026 at 12:14 PM
Today we’re shipping 3 more updates for Claude Code:

- Claude Code on Android
- Hotkey model switcher
- Context window info in status lines
January 10, 2026 at 12:34 PM
We're releasing more upgrades to Claude Code CLI:

- Async subagents
- Instant compact
- Custom session names
- Usage stats
January 10, 2026 at 12:32 PM
New research from Anthropic Fellows Program: Selective GradienT Masking (SGTM).

We study how to train models so that high-risk knowledge (e.g. about dangerous weapons) is isolated in a small, separate set of parameters that can be removed without broadly affecting the model.
January 10, 2026 at 12:14 PM
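As a rough illustration of the idea (not the Fellows' actual method or code), selective gradient masking can be sketched in PyTorch by letting flagged high-risk batches update only a designated parameter subset, which can later be zeroed out:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
# Assumption for the sketch: the last layer is the removable "risky" subset.
risky_ids = {id(p) for p in model[2].parameters()}
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def train_step(x, y, high_risk: bool):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    for p in model.parameters():
        # High-risk batches update only the subset; benign batches never touch
        # it, so high-risk knowledge concentrates in removable parameters.
        if high_risk != (id(p) in risky_ids):
            p.grad = None
    opt.step()

# "Removal": zero the isolated subset without retraining the rest.
with torch.no_grad():
    for p in model[2].parameters():
        p.zero_()
```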
We’ve shipped three new updates for Claude Agent SDK to make it easier to build custom agents:

- Support for 1M context windows
- Sandboxing
- V2 of our TypeScript interface
January 10, 2026 at 12:30 PM
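For context on what building against the SDK looks like, here is a minimal sketch using the Python claude-agent-sdk's query() entry point; the option names are assumptions based on the SDK docs, and the new V2 TypeScript interface may differ.

```python
import anyio
from claude_agent_sdk import ClaudeAgentOptions, query

async def main():
    options = ClaudeAgentOptions(
        system_prompt="You are a changelog assistant.",
        allowed_tools=["Read", "Grep"],  # assumption: restrict the toolset
        max_turns=3,
    )
    # query() streams the agent's messages: assistant turns, tool use, result.
    async for message in query(prompt="Summarize CHANGELOG.md", options=options):
        print(message)

anyio.run(main)
```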
Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven.
Donating the Model Context Protocol and establishing the Agentic AI Foundation
www.anthropic.com
January 10, 2026 at 12:11 PM
We’re expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30,000 professionals trained on Claude, and a product to help CIOs scale Claude Code. Read more:
Accenture and Anthropic launch multi-year partnership to move enterprises from AI pilots to production
www.anthropic.com
January 10, 2026 at 12:11 PM
You can now delegate tasks to Claude Code directly from Slack.

Simply tag @Claude in a channel or thread. Coding tasks are automatically routed to Claude Code, which starts a new session on the web.
January 10, 2026 at 12:28 PM