Horizon Events
@horizonevents.info
Non-profit dedicated to advancing AI safety R&D through targeted events and community initiatives. https://horizonevents.info/
Reposted by Horizon Events
This event was postponed to next Thursday, September 11, 1PM EDT. Join the discussion! luma.com/g88qnql1
Towards Safe and Hallucination-Free Coding AIs – GasStationManager · Zoom · Luma
Towards Safe and Hallucination-Free Coding AIs GasStationManager – Independent Researcher Modern LLM-based AIs have exhibited great coding abilities, and have…
September 4, 2025 at 3:25 PM
Reposted by Horizon Events
Guaranteed Safe AI Seminars 2024 review
horizonomega.substack.com/p/guaranteed...

The monthly seminar series grew to 230 subscribers in 2024, hosting 8 technical talks. We had ~490 RSVPs, ~76 watch-hours, and ~900 views of the recordings. Seeking 2025 funding; plans include a bibliography and debates.
Guaranteed Safe AI Seminars 2024 review
Dear Guaranteed Safe AI enjoyers,
December 15, 2024 at 6:16 PM
Using PDDL Planning to Ensure Safety in LLM-based Agents by Agustín Martinez Suñé
Thu January 9, 18:00-19:00 UTC
Join: lu.ma/08gr7mrs
Part of the Guaranteed Safe AI Seminars
Using PDDL Planning to Ensure Safety in LLM-based Agents – Agustín Martinez Suñé · Zoom · Luma
Using PDDL Planning to Ensure Safety in LLM-based Agents Agustín Martinez Suñé – Ph.D. in Computer Science | Postdoctoral Researcher (Starting Soon), OXCAV,…
December 13, 2024 at 3:43 AM
Compact Proofs of Model Performance via Mechanistic Interpretability
by Louis Jaburi
Thu December 12, 18:00-19:00 UTC
Join: lu.ma/g24bvacw

Last Guaranteed Safe AI seminar of the year
Compact Proofs of Model Performance via Mechanistic Interpretability – Louis Jaburi · Zoom · Luma
Compact Proofs of Model Performance via Mechanistic Interpretability Louis Jaburi – Independent researcher Generating proofs about neural network behavior is a…
December 8, 2024 at 3:29 PM
Our goals for 2025:
- Guaranteed Safe AI Seminars
- AI Safety Unconference 2025
- AI Safety Events & Training newsletter
- Monthly Montréal AI safety R&D events
- Grow partnerships

We are looking for donations to support this work. More info:
manifund.org/projects/hor...
Horizon Events 2025
Non-profit facilitating progress in AI safety R&D through events
November 19, 2024 at 12:24 PM
Reposted by Horizon Events
Today in the Guaranteed Safe AI Seminars series:

Bayesian oracles and safety bounds by Yoshua Bengio

Relevant readings:
- yoshuabengio.org/2024/08/29/b...
- arxiv.org/abs/2408.05284

Join: lu.ma/4ylbvs75
Bayesian oracles and safety bounds – Yoshua Bengio · Zoom · Luma
Bayesian oracles and safety bounds Yoshua Bengio – Scientific Director, Mila & Full Professor, U. Montreal Could there be safety advantages to the training of…
November 14, 2024 at 12:37 PM
Bayesian oracles and safety bounds
by Yoshua Bengio, Scientific Director, Mila & Full Professor, U. Montreal
November 14, 18:00-19:00 UTC
Join: lu.ma/4ylbvs75
Part of the Guaranteed Safe AI Seminars
Bayesian oracles and safety bounds – Yoshua Bengio · Zoom · Luma
Bayesian oracles and safety bounds Yoshua Bengio – Scientific Director, Mila & Full Professor, U. Montreal Could there be safety advantages to the training of…
October 11, 2024 at 2:20 PM
Announcing the Guaranteed Safe AI Seminars. This monthly series brings together researchers to discuss and advance the field of GS AI, which aims to produce AI systems equipped with high-assurance quantitative safety guarantees.
horizonomega.substack.com/p/announcing...
Guaranteed Safe AI Seminars
Horizon Events announces the Guaranteed Safe AI Seminars. It is a monthly series bringing together researchers to discuss and advance the field. GS AI aims to produce AI systems equipped with high-ass...
July 17, 2024 at 10:38 PM
Constructability: Designing plain-coded AI systems
by Charbel-Raphaël Ségerie & Épiphanie Gédéon
August 8, 17:00-18:00 UTC
Join: lu.ma/xpf046sa
As part of the Guaranteed Safe AI Seminars
Constructability: Designing plain-coded AI systems – Charbel-Raphaël Ségerie & Épiphanie Gédéon · Zoom · Luma
Constructability: Designing plain-coded AI systems Charbel-Raphaël Ségerie & Épiphanie Gédéon – Executive director at CeSIA & Independent Researcher Current AI…
July 12, 2024 at 6:16 PM
You are invited to the Guaranteed Safe AI Seminars, July 2024 edition.

Proving safety for narrow AI outputs – Evan Miyazono, Atlas Computing

Thursday, July 11, 11:30-12:30 UTC-5
RSVP: lu.ma/2715xmzn
Proving safety for narrow AI outputs – Evan Miyazono · Zoom · Luma
Proving safety for narrow AI outputs Evan Miyazono, Founder of Atlas Computing User demand for new AI capabilities is growing even as risks from foreseeable AI…
June 19, 2024 at 12:56 PM
Introducing Horizon Events: A non-profit consultancy dedicated to advancing AI safety R&D through high-impact events and initiatives.
horizonomega.substack.com/p/introducin...
Introducing Horizon Events
Events consultancy dedicated to advancing research and development in AI safety
June 9, 2024 at 10:38 PM
Next edition of the Provable AI Safety Seminars:
Gaia: Distributed planetary-scale AI safety
By Rafael Kaufmann, Co-founder and CTO, Gaia.
Thursday June 13, 13:00-14:00 Eastern, online.
Join us!
lu.ma/qn8p4wp4
Gaia: Distributed planetary-scale AI safety · Zoom · Luma
Gaia: Distributed planetary-scale AI safety Rafael Kaufmann, Co-founder and CTO, Gaia In the near future, there will be billions of powerful AI agents deployed…
May 10, 2024 at 4:22 PM
You are invited to the 2nd edition of the Provable AI Safety Seminars:

**Provable AI Safety, Steve Omohundro**
May 9th, 13:00-14:00 EDT, online
lu.ma/3fz12am7
April 11, 2024 at 6:57 PM
Announcing the first edition of the Provable AI Safety Seminars.

April 11th, 13:00-14:00 EDT. Monthly, on the 2nd Thursday.

RSVP: lu.ma/provableaisa...

Talks:
- Synthesizing Gatekeepers for Safe Reinforcement Learning (Sefas)
- Verifying Global Properties of Neural Networks (Soletskyi)
March 21, 2024 at 11:26 PM
AI Safety Events Tracker, February 2024 edition.
A newsletter listing upcoming events and open calls related to AI safety.
aisafetyeventstracker.substack.com/p/ai-safety-...
February 12, 2024 at 9:59 AM
AI Safety Events Tracker, December 2023 edition
Listing upcoming events and open calls related to AI safety
aisafetyeventstracker.substack.com/p/ai-safety-...
December 10, 2023 at 7:43 AM
Reposted by Horizon Events
At Devconnect Istanbul tomorrow and interested in AI risk? You are invited to a 2h participatory discussion tackling topics at the intersection of web3 and AI risk. lu.ma/vh9hrgme
Web3 x AI risk · Luma
Web3 x AI Risk, Devconnect 2023 A participatory discussion focused on exploring and critically examining emergent risks at the intersection of AI and Web3 technologies. Join this...
November 15, 2023 at 12:22 PM