Gilles Louppe
glouppe.bsky.social

AI for Science, deep generative models, inverse problems. Professor of AI and deep learning @universitedeliege.bsky.social. Previously @CERN, @nyuniversity. https://glouppe.github.io

Pinned
<proud advisor>
Hot off the arXiv! 🦬 "Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation" 🌍 Appa is our novel 1.5B-parameter probabilistic weather model that unifies reanalysis, filtering, and forecasting in a single framework. A thread 🧵

I have been playing with openclaw for a week on an isolated Linux box. It's been really fun to experiment and study, watching it organize its files, write and run code, modify its internals, or download packages as needed. But for the same reasons, it is also such a security nightmare.

We're back to the wild west, and I don't think we're ready. No sandboxing that actually works, no immune system, just vibes and crossed fingers. Yikes 😬

Now with AI agents running continuously in their loop (like openclaw), things are getting spicy again. Prompt injections can trigger deep changes in their internals, manipulate their goals, or exfiltrate data.

When I was a kid, computer viruses of all kinds were around. Worms, Trojan horses, rootkits... Then things settled down for years. Firewalls got better, antivirus became standard, and honestly? It got boring.

Most users of programming languages (outside core CS) never learn assembly. I think we will soon reach the next abstraction level, where today's programming languages will have become a hidden intermediate layer exposed only to (fewer and fewer) experts.

Yes, it is a security disaster waiting to happen. However, I am thinking about the large majority of high-level users of programming languages that we train. Is this still relevant for them? I am more and more convinced that programming is now too low-level and will become a niche skill.

Seriously, shall we still teach students how to even program?

Reposted by Gilles Louppe

Opus 4.6 one-shot a C compiler over the course of *2 weeks*
Building a C compiler with a team of parallel Claudes
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
www.anthropic.com

Can't wait to continue hacking my openclaw instance with these new superpowers. It's been pretty fun so far :-)
Opus 4.6 is here!

biggest wins on agentic search, HLE & ARC AGI 2

claude.com/blog/opus-4-...


Reposted by Gilles Louppe

It's always frustrating to me how the term AI has evolved. AI was an umbrella term from the 1950s. Then came Machine Learning (a subset of AI), Deep Learning (a subset of ML), and modern generative AI (a subset of DL). Now LLMs are used synonymously with "AI". So I made these graphics of the "AIroboros"

Good times! I fondly remember how I had to hack my way around to arrive at a formulation of the computation that would allow for efficient differentiation across a batch of distinct jets (and therefore distinct computation graphs for each element of the batch) 🤓
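A standard trick for getting one vectorized computation over inputs of different sizes is to pad everything to a common length and mask out the padding. A minimal NumPy sketch of that idea (my own illustration, not the actual batching scheme used in the paper; the "jets" here are just variable-length arrays):

```python
import numpy as np

# Three toy "jets" with different numbers of constituents (variable-length inputs).
jets = [np.array([1.0, 2.0]), np.array([3.0, 1.0, 4.0]), np.array([5.0])]

# Pad to a common length and build a mask, so a single vectorized
# operation can process the whole batch despite the ragged shapes.
max_len = max(len(j) for j in jets)
batch = np.zeros((len(jets), max_len))
mask = np.zeros((len(jets), max_len))
for i, j in enumerate(jets):
    batch[i, : len(j)] = j
    mask[i, : len(j)] = 1.0

# Masked mean per jet: the padded entries contribute nothing.
means = (batch * mask).sum(axis=1) / mask.sum(axis=1)
```

The same pad-and-mask pattern is what lets autodiff frameworks differentiate one fixed computation graph instead of a distinct graph per element.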

Reposted by Gilles Louppe

Whoa… 9 years ago, @glouppe.bsky.social and I teamed up with @kyunghyuncho.bsky.social on what I think was the first paper to make a connection between language modeling and particle physics. It also had a cool tie-in with interpretability
arxiv.org/abs/1702.00748

Reposted by Gilles Louppe

"Cherry picking" in scientific papers.

(Repost of an older post on some other site in 2021).

Reposted by Gilles Louppe

Stars in our galaxy aren’t distributed evenly. Collisions with smaller galaxies or clusters make “stellar streams,” which are long, thin trails of stars. They form as the smaller object stretches while it falls into the Milky Way.

So… why does this stream have a MASSIVE hole in it??

1/7 ⚛️🧪

Reposted by Gilles Louppe

Scaling Laws in Particle Physics Data! This is a result I've been itching to share and it's finally out. One of the big open questions is how much better AI-based methods at particle colliders can still become. 1/4

But do we need to know how to code anymore, at least at the level of abstraction of today's programming languages? (genuine question)
“We found that using AI assistance led to a statistically significant decrease in mastery.”

Props to Anthropic for studying the effects of their creation and reporting results that are probably not what they wished for
www.anthropic.com/research/AI-...
How AI assistance impacts the formation of coding skills
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
www.anthropic.com

Reposted by Gilles Louppe

Our AlphaGenome paper and models are out today! 🧬

I'm really excited to see how people use the model and build on top of it in their biology research🔬

Huge congratulations to the entire team!👏

Read the full paper and access the model:

📄 Paper: goo.gle/4bXlV6y
💻 Code: goo.gle/4k1xrzI

Thea Aarrestad at sites.google.com/unimib.it/gw... You would like it! All the talks proudly feature SBI for GW science :-)
gwfreeride
GWFREERIDE: Carving the AI Gradient in Gravitational-Wave Astronomy Sexten (Italy) - Jan 26-30, 2026
sites.google.com

I am old enough now to be mentioned as a veteran in simulation-based inference 👴(joking, thanks Thea for the highlight!)

Reposted by Gilles Louppe

My visit to #RockyWorlds4 kicked off with a hot take from @nplinnspace.bsky.social's poster: Even in the best-case scenario, we couldn't tell how "Earth-like" an Earth-like planet is 🔥

She assumed TRAPPIST-1 e had Earth-like air and showed that vastly different climates would be indistinguishable.

Reposted by Gilles Louppe

The risk of AI for education is not students cheating in exams, it is people in general cheating themselves into believing they understand things they don’t.

Reposted by Gilles Louppe

Cool new paper from my colleague @jsellenberg.bsky.social @uwmadison.bsky.social Math + collaborators. They used AlphaEvolve to generate interpretable programs that led to the discovery of new mathematical structures in the symmetric group
arxiv.org/abs/2601.01235
Introducing DroPE: Extending Context by Dropping Positional Embeddings

We found embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity.

arxiv.org/abs/2512.12167
pub.sakana.ai/DroPE
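If I read the abstract right, the trick can be caricatured as making the positional rotation an optional step: apply it as a scaffold during training, skip it later. A toy NumPy sketch (my own illustration of the idea, not the paper's method; the `rope`/`project` helpers and single-head shapes are hypothetical):

```python
import numpy as np

def rope(x, positions, base=10000.0):
    # Standard rotary positional embedding on channel pairs (toy, single head).
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    angles = positions[:, None] * freqs[None, :]
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def project(x, use_rope):
    # Treat the positional encoding as a temporary training scaffold:
    # inject positions during training, drop the rotation afterwards.
    positions = np.arange(x.shape[0], dtype=float)
    return rope(x, positions) if use_rope else x

x = np.random.randn(8, 4)        # (sequence length, head dim)
train_q = project(x, use_rope=True)   # positions injected
infer_q = project(x, use_rope=False)  # embeddings dropped
```

With the rotation removed, nothing in the projection depends on absolute position anymore, which is the property that would let context length extend freely.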
Proc B with @sampassmore.bsky.social! We used simulations to explore the innovation strategies of speed climbers 🧗‍♀️ Innovation is higher among slower athletes and lower when the population size is larger, and the overall balance of innovation and copying appears to be suboptimal 🔗 bit.ly/499QjZM
Simulation-based inference with deep learning suggests speed climbers combine innovation and copying to improve performance
Abstract. In the Olympic sport of speed climbing, athletes compete to reach the top of a 15 m wall as quickly as possible. Since the standardization of the
bit.ly

Reposted by Gilles Louppe

We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7

Reposted by Gilles Louppe

A study shows that using AI tools like LLMs lowers cognitive engagement and critical thinking in essay writing, signaling a trade-off between convenience and growth. LLM users had weaker memory recall and ownership of their writing, raising important questions about AI's role in education. https://arxiv.org/abs/2506.08872
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
ArXiv link for Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
arxiv.org