George Kaissis
g-k.ai
Professor for Human-Centred Transformative AI @ Hasso-Plattner-Institut. Previously @ Google DeepMind, Imperial College London, TU Munich. 🇪🇺 🏳️‍🌈

https://www.g-k.ai
Reposted by George Kaissis
Today, let’s commemorate not only Rosalind Franklin but also Raymond Gosling, her @kingscollegelondon.bsky.social postgraduate student who took Photo 51…
November 8, 2025 at 5:52 PM
I am overjoyed to announce that I have joined HPI as a full professor for Human-Centred Transformative AI. I am looking forward to working with my amazing new and current collaborators and would like to deeply thank everyone who has been part of the journey that led me here! @hpi.bsky.social
November 4, 2025 at 6:04 AM
Reposted by George Kaissis
On 1 November, Prof. Dr. med. Georg Kaissis took up his professorship for #DigitalHealth: Human-Centred Transformative AI. His research focus: developing the next generation of multimodal #AI models.
More information on the new professorship: hpi.de/artikel/prof...
November 3, 2025 at 10:37 AM
Reposted by George Kaissis
We celebrated the 5th anniversary of our research chair at @tum.de! 💙🥂

It's been an incredible journey of research and collaboration. Thank you to everyone who has made this possible. We are very much looking forward to the next years to come!

#AIMAnniversary #AIMNews
November 3, 2025 at 12:28 PM
Reposted by George Kaissis
VaultGemma: A Differentially Private Gemma Model

Amer Sinha, Thomas Mesnard, Ryan McKenna, Daogao Liu, Christopher A. Choquette-Choo, Yangsibo Huang, Da Yu, George Kaissis, Zachary Charles, Ruibo Liu, Lynn Chua, Pritish Kamath, Pasin Manurangsi, Steve He, Chiyuan ...
http://arxiv.org/abs/2510.15001
October 20, 2025 at 3:48 AM
It has been a privilege to work with so many amazing colleagues across Google and Google DeepMind to build VaultGemma, an LLM trained from scratch with Differential Privacy. Weights are openly available! Check it out here: research.google/blog/vaultge...
VaultGemma: The world's most capable differentially private LLM
research.google
September 17, 2025 at 5:25 PM
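For readers unfamiliar with how a model like VaultGemma can be trained "with Differential Privacy", the core mechanism in DP training is per-example gradient clipping plus Gaussian noise (the DP-SGD recipe). Below is a minimal, illustrative NumPy sketch of that clip-and-noise aggregation step; it is an assumption-laden toy, not VaultGemma's actual implementation, and the function name `dp_sgd_step` is hypothetical.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm clip_norm, sum, add Gaussian noise scaled to the clipping
    bound, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Noise standard deviation is tied to the sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

The privacy guarantee then follows from accounting over how many such noisy steps are taken during training.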
Reposted by George Kaissis
Due to physical resource constraints, we currently estimate that around 300–400 of the candidate papers recommended for acceptance by the ACs will need to be rejected. We seek the support of our 41 SACs in addressing this distributed optimization problem in a fair and professional manner.
August 28, 2025 at 4:12 PM
Reposted by George Kaissis
NeurIPS has decided to do what ICLR did: As a SAC I received the message 👇 This is wrong! If the review process cannot handle so many papers, the conference needs to split instead of arbitrarily rejecting 400 papers.
August 28, 2025 at 4:12 PM
Reposted by George Kaissis
📣 Researchers in AI security, privacy & fairness: It's time to share your latest work!

The SaTML 2026 submission site is live 👉 hotcrp.satml.org

🗓️ Deadline: Sept 24, 2025

@satml.org
SaTML 2026
hotcrp.satml.org
August 27, 2025 at 6:22 PM
Reposted by George Kaissis
NeurIPS is endorsing EurIPS, an independently-organized meeting which will offer researchers an opportunity to additionally present NeurIPS work in Europe concurrently with NeurIPS.

Read more in our blog post and on the EurIPS website:
blog.neurips.cc/2025/07/16/n...
eurips.cc
eurips.cc
A NeurIPS-endorsed conference in Europe held in Copenhagen, Denmark
eurips.cc
July 16, 2025 at 10:05 PM
Reposted by George Kaissis
New preprint with the most precise mapping to date between differential privacy and common operational notions of privacy risk used in practice:
Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy

Bogdan Kulynych, Juan Felipe Gomez, Georgios Kaissis, Jamie Hayes, Borja Balle, Flavio du Pin Calmon, Jean Louis Raisaro

http://arxiv.org/abs/2507.06969
July 10, 2025 at 10:05 AM
Reposted by George Kaissis
The Hitchhiker's Guide to Efficient, End-to-End, and Tight DP Auditing

Meenatchi Sundaram Muthu Selva Annamalai, Borja Balle, Jamie Hayes, Georgios Kaissis, Emiliano De Cristofaro

http://arxiv.org/abs/2506.16666
June 23, 2025 at 3:48 AM
Reposted by George Kaissis
Responsible medical AI demands patient-level privacy. Our recent #TPDP '25 paper extends Differential Privacy #DP beyond individual data points to protect entire patient profiles. 🧠

📄 Read the paper: tinyurl.com/k9fz456a
🎥 Watch the video: tinyurl.com/2jx528m5
tpdp.journalprivacyconfidentiality.org
June 12, 2025 at 7:55 AM
Reposted by George Kaissis
Going to be at CVPR the next couple of days presenting our paper "A Tale of Two Classes: Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets".

arxiv.org/abs/2503.17024

Always happy to meet anyone working on representation learning or tabular DL and medical data
A Tale of Two Classes: Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets
Supervised contrastive learning (SupCon) has proven to be a powerful alternative to the standard cross-entropy loss for classification of multi-class balanced datasets. However, it struggles to learn ...
arxiv.org
June 11, 2025 at 12:56 AM
Reposted by George Kaissis
Mark your calendars: the IOI 2027 is coming to HPI! 🎉

For the first time since 1992, the International Olympiad in Informatics (IOI) is returning to Germany, and in cooperation with Bundesweite Informatikwettbewerbe (BWINF) we are the proud hosts of the renowned competition.
May 30, 2025 at 10:32 AM
Check out our new pre-print "Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models", joint work with fantastic colleagues from Google (DeepMind) and many other great institutions! Find it here: arxiv.org/abs/2505.18773
Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models
State-of-the-art membership inference attacks (MIAs) typically require training many reference models, making it difficult to scale these attacks to large pre-trained language models (LLMs). As a resu...
arxiv.org
May 27, 2025 at 8:00 AM
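For context on what a membership inference attack does: the textbook baseline simply flags an example as a training-set member when the model's loss on it is unusually low. The sketch below illustrates that baseline only; the attacks in the preprint are far more sophisticated, and this is not their method. The function name `loss_threshold_mia` and the toy loss values are hypothetical.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Flag an example as a training-set member when the model's loss
    on it falls below a threshold: trained models typically fit their
    training data more closely than unseen data."""
    return losses < threshold

# Toy losses: members fit well (low loss), non-members poorly (high loss).
member_losses = np.array([0.10, 0.20, 0.15])
nonmember_losses = np.array([1.20, 0.90, 1.50])
print(loss_threshold_mia(member_losses, threshold=0.5))     # expect all True
print(loss_threshold_mia(nonmember_losses, threshold=0.5))  # expect all False
```

Stronger attacks calibrate this decision per example using reference models, which is exactly what makes them expensive to scale to LLMs.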
Check out our new pre-print "Redirection for Erasing Memory (REM): Towards a universal unlearning method for corrupted data", joint work with excellent colleagues from Google DeepMind, Google Research and the University of Cambridge. Find it here! arxiv.org/abs/2505.17730
Redirection for Erasing Memory (REM): Towards a universal unlearning method for corrupted data
Machine unlearning is studied for a multitude of tasks, but specialization of unlearning methods to particular tasks has made their systematic comparison challenging. To address this issue, we propose...
arxiv.org
May 26, 2025 at 7:17 AM
Reposted by George Kaissis
We are incredibly proud of Prof @danielrueckert.bsky.social for being elected as Fellow of the Royal Society! Well deserved and a testament to your dedication to research 🎉
We are pleased to announce the 90 outstanding researchers from across the world who have been elected to the Fellowship of the Royal Society this year. The group includes trailblazers from AI and electron microscopy to global health and neuroscience. #RSFellows royalsociety.org/news/2025/05...
Exceptional scientists elected as Fellows of the Royal Society | Royal Society
Over 90 outstanding researchers from across the world have this year been elected to the Fellowship of the Royal Society, the UK’s national academy of sciences.
royalsociety.org
May 22, 2025 at 11:08 AM
Reposted by George Kaissis
Now, it's time to up the ante.

We are committed to enshrining scientific freedom in EU law, creating a 7-year ‘super grant’ to attract top researchers, and expanding support for the most promising scientists.

More → europa.eu/!TTbWbJ
May 14, 2025 at 7:02 AM
Reposted by George Kaissis
Data attribution is crucial for debugging models and detecting low quality data (spotting mislabeled samples, bias etc.).
But many methods aren't mathematically sound and don’t scale.

So how could we improve this for large models?
1/n
April 20, 2025 at 8:49 AM
Reposted by George Kaissis
Excited to share: “A Tale of Two Classes: Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets” has been accepted to #CVPR2025! 🎉

Paper: lnkd.in/esKRqF5p
Code: lnkd.in/eZFvDA5Q

(Thread incoming 👇)
April 1, 2025 at 1:37 PM
Reposted by George Kaissis
Would you present your next NeurIPS paper in Europe instead of traveling to San Diego (US) if this was an option? Søren Hauberg (DTU) and I would love to hear the answer through this poll: (1/6)
NeurIPS participation in Europe
We seek to understand if there is interest in being able to attend NeurIPS in Europe, i.e. without travelling to San Diego, US. In the following, assume that it is possible to present accepted papers ...
docs.google.com
March 30, 2025 at 6:04 PM
Congratulations to my revered mentor and dear friend @danielrueckert.bsky.social for this great honour and for the outstanding and enduring achievements that underlie it!
🥁 The moment has come! 🏅 Today we honour 10 outstanding scientists with the 2025 #LeibnizPreis! 👏 For their groundbreaking work and achievements in their fields, they each receive €2.5 million to support their #research. Congratulations to all the laureates!
March 20, 2025 at 5:44 AM
Reposted by George Kaissis
$(\varepsilon, \delta)$ Considered Harmful: Best Practices for Reporting Differential Privacy Guarantees
Juan Felipe Gomez, Bogdan Kulynych, Georgios Kaissis, Jamie Hayes, Borja Balle, Antti Honkela
http://arxiv.org/abs/2503.10945
March 17, 2025 at 3:33 AM
Reposted by George Kaissis
#FUTURE-AI: International Experts Define Guidelines for Trustworthy Healthcare AI 📝

👉Learn more:
t1p.de/2rgfh

👉Check out the interview with Prof. Julia Schnabel & Dr. Georgios Kaissis:
t1p.de/e9g6a

#TrustworthyAI
March 12, 2025 at 2:05 PM