Georg Bökman
bokmangeorg.bsky.social
Geometric deep learning + Computer vision
Reposted by Georg Bökman
A chance to join us as a postdoc in Gothenburg to work on this :) www.chalmers.se/en/about-cha...
I'm excited to open the new year by sharing a new perspective paper.

I give an informal outline of MD and how it can interact with Generative AI. Then, I discuss how far the field has come since seminal contributions such as Boltzmann Generators, and what is still missing.
January 20, 2026 at 10:46 AM
Funny failure mode
January 20, 2026 at 12:13 PM
Reposted by Georg Bökman
Why you should probe more than just the final layer of your Vision Transformer to maximize performance. 🧵👇
January 19, 2026 at 9:44 AM
Reposted by Georg Bökman
Something @eugenevinitsky.bsky.social and I are very curious about... how can we make our client (a version of Bluesky for researchers) more friendly to grad students? What would encourage you all to post more?
Things I miss from our custom client when I'm using Bluesky:
- avatar colors that show whether we're mutuals
- exportable bookmarks with custom folders
- feed of trending papers and articles
- safety alerts when a post goes viral
- researcher profiles with topics, affiliations, featured papers
January 12, 2026 at 10:35 PM
Reposted by Georg Bökman
Another Erdos problem this morning:

(just to respond to a few people-- the system does NOT work by trying every possible answer and then checking. There's not enough matter and energy in the universe to solve theorems by trying every possible combination of symbols or whatever)
January 11, 2026 at 11:54 AM
Power chords are even cooler in 5-part choir harmonization of a twelve tone row (from 2:17 open.spotify.com/track/7uezPJ... )
January 10, 2026 at 8:35 AM
Reviewing for CVPR is sadly very boring.
January 9, 2026 at 10:58 AM
Reposted by Georg Bökman
New blog post (on a shiny new ICML blog!): What's New in #ICML2026 Peer Review

Some highlights:
- Policies to combat thinly sliced contributions
- Cascading desk rejections for peer-review abuse
- Reviewer reciprocity
- New ways to support authors and reviewers

Post: blog.icml.cc/2026/01/08/w...
January 8, 2026 at 5:26 PM
Reposted by Georg Bökman
so what do you think ChatGPT will say when ten million people ask it who they should vote for next year
December 31, 2025 at 1:44 PM
Reposted by Georg Bökman
I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity
December 28, 2025 at 6:23 AM
Reposted by Georg Bökman
every claim that "the incentives" support or deter certain kinds of behavior is also a statement about what kinds of external signals the claimant views as rewards or penalties #linklog
No, it’s not The Incentives—it’s you
There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and…
talyarkoni.org
December 25, 2025 at 12:46 AM
Reposted by Georg Bökman
On the unexplained similarity across networks

In behavior, order and weights, we keep seeing evidence that learning is more consistent than one might think.

A walk through the occurrences, my thoughts, and the open question: why?!
What are your hypotheses, missed papers, and thoughts?
🤖📈🧠 #AI
December 21, 2025 at 10:46 AM
Reposted by Georg Bökman
📢 The second edition of the ✨GRaM workshop✨ is here, this time at #ICLR26.

🌟 Submit your exciting work on geometry-grounded representations.

We welcome submissions in multiple tracks, i.e.
📄 Proceedings
📝 Extended abstract
👩‍🏫 Tutorial/blog post
as well as an exciting challenge!
December 18, 2025 at 5:31 AM
Reposted by Georg Bökman
Worried about AI’s military uses? We are too. We’re organising an ICLR 2026 workshop on AI research and military applications—dual-use risks, transparency, accountability, and ethical/legal governance & policy. Details + paper submissions: see Noa’s post and visit aiforpeaceworkshop.github.io.
December 16, 2025 at 9:44 AM
Reposted by Georg Bökman
An excellent article describing how LLMs fit into the bureaucratization of science: open.substack.com/pub/artifici...
Context Widows
or, of GPUs, LPUs, and Goal Displacement
open.substack.com
December 15, 2025 at 3:59 PM
Reposted by Georg Bökman
You know what season it is! Right. Internship application season.

Niantic Spatial is offering research internships on a multitude of 3D vision topics: relocalization, reconstruction, 3D VLMs... Top tier papers regularly come out of our internships 🚀

nianticspatial.careers.hibob.com/jobs/0fc4871...
December 10, 2025 at 8:38 AM
Reposted by Georg Bökman
#KostasKeynoteLessons: Curious about the "Keynote magic" behind my slides?

I’m releasing the full Keynote source file for my recent Gaussian Splatting lecture, all 10 GIGAbytes of it!

Grab the files in the thread and feel free to remix.

Files: drive.google.com/drive/folder...
Gaussian Splatting
YouTube video by CSProfKGD
youtu.be
December 9, 2025 at 6:52 PM
Reposted by Georg Bökman
Super excited to bring GRaM 2.0 to ICLR 2026 in Brazil!
Call for papers coming soon!

@gram-org.bsky.social @iclr-conf.bsky.social
🌐 Excited to bring GRaM Workshop to ICLR2026. 🇧🇷

🔷 Stay tuned for updates and call for papers!
December 6, 2025 at 5:29 PM
Reposted by Georg Bökman
🧠🔬 Excited to share our #NeurIPS2025 paper: "Convolution Goes Higher-Order"!

We asked: Can shallow networks be as expressive as deep ones? Inspired by biological vision, we introduce higher-order convolutions that capture complex image patterns standard CNNs miss.

🧵👇
December 1, 2025 at 1:24 PM
Reposted by Georg Bökman
🧠 How do neurons encode information? We know HOW MUCH, but what about WHAT information they encode?

Our new work uses diffusion models to decompose neural information down to individual stimuli & features!

🎯Spotlight at #NeurIPS2025 🌟📄

arxiv.org/abs/2505.11309
December 1, 2025 at 1:12 PM
Reposted by Georg Bökman
I'm open to there being a role for blind review, but introducing non-blind review has a lot of upsides that may reduce how much we actually care about blind review.

I think we care about blind review only because our publishing system is poorly designed and needs change in the modern era anyway.
The only argument advanced by proponents of blind peer-review boils down to "less powerful people can't criticize powerful people in public," the same argument people make when advocating for anonymity on social media.
In light of the new OpenReview identity-leak scandal, it's a good time to question our assumptions about why blinding in peer review is helpful in the first place.
November 30, 2025 at 4:13 PM
Reposted by Georg Bökman
We are currently recruiting 2 PhD students to work on AI-driven polymer design and engineering in my team at Chalmers as part of our @erc.europa.eu project POLYGEN. 💻🧪🇸🇪 Please spread the word and share to potentially interested candidates!

Apply by: Dec 28, 2025

www.chalmers.se/en/about-cha...
December 1, 2025 at 8:16 AM
Reposted by Georg Bökman
"Initial Analysis of OpenReview API Security Incident" from OpenReview
November 30, 2025 at 11:15 PM
Ok this is the best counter against non-blind reviews I've seen so far. It would be possible to detect collusion. But who is going to do that?
Regarding fully non-blind reviews.
I am not afraid of putting my name on my review. You are not afraid. So far, so good.
But imagine some number of bad actors who want to exploit the system - it would be super easy for them to operate fully in the open. Collusion rings - you can create a new one every time.
1/
November 30, 2025 at 11:32 AM
Reposted by Georg Bökman
I feel a bit Jurgenish, but let me refer to our post with Amy. So far, one could get all the confidence functions from elsewhere (privilege plays a big role here, though) - arXiv, social media, and LLMs. Except the formal goodies one. 1/
amytabb.com/tips/2020/08...
November 29, 2025 at 10:15 AM