Orpheus Lummis
@orpheuslummis.info
Advancing AI safety through convenings, coordination, software
https://orpheuslummis.info, based in Montréal
Montréal event, Tuesday November 25, 7 PM:

In which Emma Kondrup asks whether AI is truly exceptional, using pessimistsarchive.org to compare today’s AGI/ASI fears with past panics over cars, radio and TV.

RSVP luma.com/nq50jf0u
Pessimists Archive · Luma
EN An activity led by Emma Kondrup. AI has already shown important differences from previous technologies (economically, socially, and politically). This…
luma.com
November 19, 2025 at 2:30 AM
Join us for the Defensive Acceleration Hackathon, to prototype defensive systems that could protect us from AI-enabled threats.

It runs from Friday evening, Nov 21, through Sunday evening.

It is an online event. We have a jam site in Montréal. RSVP: luma.com/gnyqha4a
Defensive Acceleration Hackathon · Luma
Important registration information: To participate in this event, please sign up through Apart Research's event page before registering. What is defensive…
luma.com
November 18, 2025 at 12:38 AM
Montréal event, Thursday November 20, 7 PM:

A hands-on intro to Neuronpedia’s models, sparse autoencoders, and feature exploration using example prompts, ending with a discussion of evidence standards and how to start contributing.

RSVP luma.com/s3umszm7
Neuronpedia 101 · Luma
https://www.neuronpedia.org/ EN A discussion with demo introducing Neuronpedia’s core concepts: models, sparse autoencoders, features, lists, and the anatomy…
luma.com
November 15, 2025 at 6:00 PM
Montréal event, Tuesday November 18, 7 PM:

Co-design a National Citizens’ Assembly on Superintelligence

RSVP luma.com/0b7muzt0
Co-design a National Citizens’ Assembly on Superintelligence · Luma
EN: A brief workshop to co-design a National Citizens’ Assembly on Superintelligence for Canada. We’ll align on the mandate of the project, who should be involved,…
luma.com
November 14, 2025 at 4:13 PM
Reposted by Orpheus Lummis
"When AI met Automated Reasoning"
by Clark Barrett, director of the Stanford Center for Automated Reasoning and co-director of the Stanford Center for AI Safety.

The event took place today as part of the Guaranteed Safe AI Seminars.

The recording is now available: www.youtube.com/watch?v=AxAS...
When AI met AR – Clark Barrett
YouTube video by Horizon Omega
www.youtube.com
November 13, 2025 at 10:00 PM
Reposted by Orpheus Lummis
Extremely excited to launch this report, the second from the World Internet Conference's International AI Governance Programme, which I co-chair with Yi Zeng. It goes further than any similar report I've seen in recommending robust governance interventions 1/4

www.wicinternet.org/pdf/Advancin...
www.wicinternet.org
November 11, 2025 at 2:04 PM
Montréal event, Thursday November 13, 7 PM:

Discussion on Canada's 2025 Budget vs AI risk.

luma.com/3tivj3yf
Canada's 2025 Budget vs AI risk · Luma
EN Canada’s 2025 federal budget tackles AI, innovation, and “responsible” development, and this session asks how that maps to the reduction of AI risk. In 90…
luma.com
November 9, 2025 at 6:25 PM
Reposted by Orpheus Lummis
I’m thrilled to share that I’ve been helping out my brother David who is starting a new org, Evitable.com, focused on informing and organizing the public around societal-scale risks and harms of AI, and countering industry narratives of AI inevitability and acceleration! 1/n
Evitable
Evitable.com
October 29, 2025 at 5:31 PM
Reposted by Orpheus Lummis
They’re here! 🎉 After months of rigorous evaluations, our 2025 Charity Recommendations are out! Learn more about the organizations that can do the most good for animals with additional donations at https://bit.ly/2025-charity-recs 🙌🐥 Together, we’re helping people help more animals. 💙
November 4, 2025 at 6:53 PM
Montréal event on the International AI Safety Report, First Key Update: Capabilities and Risk Implications

Tuesday November 4, 7 PM
RSVP: luma.com/09j4095g
October 30, 2025 at 9:12 PM
Reposted by Orpheus Lummis
This workshop follows one we ran in July, adding optional specialized talks and light moderation in the breakout sessions. To see how that one went, and videos of the talks, see this thread:

www.lesswrong.com/posts/csdn3e...
Summary of our Workshop on Post-AGI Outcomes — LessWrong
Last month we held a workshop on Post-AGI outcomes.  This post is a list of all the talks, with short summaries, as well as my personal takeaways. …
www.lesswrong.com
October 28, 2025 at 10:06 PM
thanks X25519MLKEM768
October 24, 2025 at 8:11 PM
We call for a prohibition on the development of superintelligence, not lifted before there is
- broad scientific consensus that it will be done safely and controllably, and
- strong public buy-in.

superintelligence-statement.org
Statement on Superintelligence
“We call for a prohibition on the development of superintelligence, not lifted before there is (1) broad scientific consensus that it will be done safely and controllably, and (2) strong public bu...
superintelligence-statement.org
October 22, 2025 at 10:11 AM
AI safety coworking spaces:

- LISA (London)
- FAR Labs (Berkeley)
- Constellation (Berkeley)
- SASH (Singapore)
- Mox (SF)
- Trajectory Labs (Toronto)
- Meridian (Cambridge)
- CEEALAR (Blackpool)
- SAISS (Sydney)
- PEAKS (Zurich)
- AISCT (Cape Town)
- Monoid (Moscow)

Shall we start one in Montréal?
October 21, 2025 at 3:56 PM
Reposted by Orpheus Lummis
AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️

(1/10)
October 15, 2025 at 10:49 AM
Tonight the Montréal AI safety meetup is on the Global Call for AI Red Lines: luma.com/vjgi2npr

My slides: docs.google.com/presentation...
Global Call for AI Red Lines · Luma
EN: At UNGA-80 this September, Nobel laureates, former heads of state, AI pioneers like Yoshua Bengio, and leaders from across diplomacy, human rights, and…
luma.com
October 14, 2025 at 9:57 PM
Reposted by Orpheus Lummis
We've updated our website, and our work, purpose, and mission are now expressed more clearly.

www.horizonomega.org
Reducing risks from AI through collaboration, research, and education
www.horizonomega.org
October 7, 2025 at 4:56 PM
Give your input towards Canada's renewed AI strategy.

> Canada is running a national sprint to shape a renewed AI strategy. Tell us where Canada should focus.
> The consultation is open from October 1 to October 31.

ised-isde.canada.ca/site/ised/en...
Help define the next chapter of Canada's AI leadership
Current status: Open from October 1 to October 31, 2025 Canada helped invent modern AI. To stay a leader—and protect our digital sovereignty—we're running a 30-day national sprint to shape a renewed...
ised-isde.canada.ca
October 3, 2025 at 8:03 PM
Demanding that governments establish verifiable prohibitions on the most dangerous AI uses and behaviors (e.g., lethal autonomy, enablement of weapons of mass destruction, self-replicating systems). Seeking international agreement with enforcement mechanisms by the end of 2026.

red-lines.ai
200+ prominent figures endorse Global Call for AI Red Lines
AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children...
red-lines.ai
September 24, 2025 at 12:28 PM
In Montréal and interested in AI safety, governance, and ethics?

You may want to subscribe to the Montréal AI safety, ethics, and governance events calendar.

luma.com/montreal-ai-....
Montréal AI safety, ethics, and governance · Events Calendar
View and subscribe to events from Montréal AI safety, ethics, and governance on Luma. Montréal community of researchers, builders, policymakers, and lively persons interested in advancing AI safety, e...
luma.com
September 23, 2025 at 12:15 PM
Happy equinox! 🌞
September 22, 2025 at 11:11 AM
Next Guaranteed Safe AI Seminar:

Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power – Jobst Heitzig

Thursday, October 9, 1 PM EDT

RSVP: luma.com/susn7zfs
Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power – Jobst Heitzig · Zoom · Luma
Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power Jobst Heitzig – Senior Mathematician AI Safety Designer Power is a key concept in AI…
luma.com
September 18, 2025 at 8:53 AM
Upcoming at the Montréal AI safety, ethics, and governance meetup:

Verifying a toy neural network, by Samuel Gélineau.
Thu Oct 2, 7 PM.

RSVP: luma.com/bc8rlwxr
Verifying a toy neural network · Luma
Samuel Gélineau will present his AI Safety side project, gelisam.com/parity-bot, which demonstrates that it is possible to verify that a neural network…
luma.com
September 17, 2025 at 3:33 PM
Are you in Montréal and interested in AI safety? Join our meetup Tuesday Sep 16, 7 PM. I'll be presenting the paper Towards Guaranteed Safe AI (arxiv.org/abs/2405.06624).

RSVP luma.com/exh4xs42
Towards Guaranteed Safe AI · Luma
Orpheus will present the core ideas from Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems for ~half hour, then we will have…
luma.com
September 10, 2025 at 9:19 PM
Celebrating my first half marathon 🏃!
August 10, 2025 at 5:46 PM