FAR.AI
@far.ai
Frontier alignment research to ensure the safe development and deployment of advanced AI systems.
Why might malicious actors choose less lethal but easier-to-hide weapons? Olivia Shoemaker argues current AI evaluations focus on scientific capabilities, often missing operational factors that determine real-world misuse patterns like evading detection and acquiring resources.👇
October 23, 2025 at 3:31 PM
AI experts have completely opposite views on the technology's future. @hlntnr.bsky.social on the open debates: Will scaling hit a wall? Can AI improve itself? “If we can't agree what's happening with AI, how can we agree what to do about it?” 👇
October 16, 2025 at 3:32 PM
AI agents can find and exploit real-world cybersecurity vulnerabilities today. Daniel Kang saw this major threat vector when ChatGPT launched, while others didn't. His research shows that more capable models are more proficient hackers. The threat is real and getting worse.👇
October 9, 2025 at 3:31 PM
We all agree AI audits are needed, but we can't agree what that means.

@mbogen.bsky.social says policymakers can make sense out of this chaotic landscape if they define what they're trying to accomplish. The challenge isn't just mitigating known risks but identifying ones still emerging.👇
October 2, 2025 at 3:32 PM
"AI that understands biology well enough to cure diseases can design extremely potent bioweapons." @alexbores.nyc, NY Assembly's 1st Democrat with a CS degree, who worked in AI, says state reps answer their own phones. Use that power to change the course of AI safety.👇
September 18, 2025 at 3:31 PM
How do we verify what AI companies are doing? Right now we just trust them. Lennart Heim: Trusting the math is sometimes better than trusting people, but “a good AI system” isn’t a technical property. We need engineers to verify AI policy goals. 👇
September 11, 2025 at 3:31 PM
Industry & government share the same goal: win the AI race. Sara McNaughton: We need synergy between the two, and we can't let perfect be the enemy of good. Each day of policy confusion helps rivals.👇
September 4, 2025 at 3:32 PM
Mark Beall warns that AGI is a 'black swan' that will invalidate our assumptions. He envisions AI designing bioweapons in minutes, evolving cyber weapons of mass destruction, and making autonomous kill decisions. “No army can defeat it, no firewall can contain it.” 👇
September 2, 2025 at 3:32 PM
If everyone had AGI software today, how many copies could you deploy? That's limited by compute. @repbillfoster.bsky.social: Luckily, all chip chokepoints (ASML lithography, Korean device physics, Japanese photoresist) are in the free world. When the singularity hits, compute is what matters. 👇
August 28, 2025 at 3:31 PM