Philipp Leitner
philippleitner.net
@philippleitner.net
Associate Professor @ Chalmers University of Technology

http://icet-lab.eu
For the last couple of weeks I have been trying to vibe-code a relatively complicated research system in the area of Java microbenchmarking in my spare time.

I am slowly reaching the point where the system does something useful, so here are some initial impressions:
November 6, 2025 at 7:22 AM
People talk a lot about echo chambers on here, but I think it's important to remember that you are not entitled to anybody's attention, regardless of how important you or your cause are.
November 5, 2025 at 8:50 AM
Reposted by Philipp Leitner
People are saying that AI will transform the way we teach and learn. It has already transformed the way students cheat and, to my surprise, how they apologize for cheating.
Two professors at the University of Illinois Urbana-Champaign said they grew suspicious after receiving identical apologies from dozens of students they had accused of academic dishonesty. www.nytimes.com/2025/10/29/u...
Their Professors Caught Them Cheating. They Used A.I. to Apologize.
www.nytimes.com
October 30, 2025 at 10:50 AM
Reposted by Philipp Leitner
Fascinating paper by Zhen Zhang & James Evans: arxiv.org/pdf/2509.05591 

Analyzing 2M papers published immediately following the training of five prominent open LLMs, we show that ... the most perplexing are disproportionately represented among the most celebrated ... and also the most discounted.
arxiv.org
October 13, 2025 at 1:10 PM
If you want to see ChatGPT have a stroke in real-time just ask it "Is there a seahorse emoji?".
October 10, 2025 at 6:33 AM
How do software development companies think about LLM policies?

New paper accepted in IEEE Software, Special Issue on AIware in the Foundation Models Era. Congratulations to Ranim Khojah, Mazen Mohamad, Linda Erlenhov, and Francisco Gomes Oliveira Neto.

Preprint: arxiv.org/abs/2510.06718
LLM Company Policies and Policy Implications in Software Organizations
The risks associated with adopting large language model (LLM) chatbots in software organizations highlight the need for clear policies. We examine how 11 companies create these policies and the factor...
arxiv.org
October 9, 2025 at 7:35 AM
Fuck's sake, the @chiefs.bsky.social lost the game via the ugliest touchdown I have ever seen. Heartbreak.
October 7, 2025 at 10:01 AM
"The American military will follow lawful orders and disobey unlawful ones."

Will it? So far the track record of long-standing institutions pushing back isn't great.

www.theatlantic.com/ideas/archiv...
Pete Hegseth Is Living the Dream
A man who retired as a major lectures hundreds of generals about the need to meet his standards.
www.theatlantic.com
October 1, 2025 at 9:09 AM
It's an interesting question what "the fundamentals of programming" are going to be in an AI age. Two months ago I would have agreed that being able to program yourself, without AI, line-by-line, will remain crucial for the foreseeable future. Today, I'm much less sure.
First Day: A New Chapter at the JKU

It's Wednesday. Is this important? It's my first day in a new position. So, perhaps the real question is: what's going to be important to me from now on?

stefan-marr.de/2025/10/firs...
First Day: A New Chapter at the JKU
New job and responsibilities: what's now important to me?
stefan-marr.de
October 1, 2025 at 7:19 AM
New paper accepted by Huaifeng Zhang, Mohannad Alhanahnah, YT, and Ahmed Ali El Din:

BLAFS: A Bloat-Aware Container File System

(accepted at the ACM Symposium on Cloud Computing)

Preprint: arxiv.org/abs/2305.04641
Tool: github.com/negativa-ai/...

Congratulations to Huaifeng and the team!
The Cure is in the Cause: A Filesystem for Container Debloating
Containers have become a standard for deploying applications due to their convenience, but they often suffer from significant software bloat-unused files that inflate image sizes, increase provisionin...
arxiv.org
September 28, 2025 at 3:34 PM
I am emphatically in favor of this new type of "open source ish" license:

If you’re a little guy, do whatever you want with my work.
If you’re a big guy, fuck you pay me.
tante.cc tante @tante.cc · Sep 19
"The Free Software Foundation has been sliding into irrelevance more and more by entirely failing to address its big Creepy Uncle problem. Open-Source has turned into a form of unpaid internship to be hired to make shitty apps that bring more surveillance and ads to our world."
Introducing the Forklift Certified License—Aria’s Barks
It's not following the OSI definition of open-source because i don't give a damn how capital defines its needs.
aria.dog
September 19, 2025 at 1:33 PM
Slides for yesterday's talk at the 2025 WASP Software Engineering cluster meeting:

www.icet-lab.eu/news/2025090...
WASP Software Engineering and Technology Cluster Workshop Talk | Internet Computing and Emerging Technologies lab (ICET-lab)
Welcome to the Internet home of the Internet Computing and Emerging Technologies lab at Chalmers and the University of Gothenburg
www.icet-lab.eu
September 12, 2025 at 8:21 AM
I find this equal parts fascinating and weird.
Today I learned: Liechtenstein's only daily newspaper, the "Vaterland", "translates" its articles into "youth slang" using ChatGPT and publishes them on brudiland.li.
The stated goal: "The language may be casual and contain plenty of anglicisms. News in Nice, basically!" (www.vaterland.li/portale/brud...)
September 9, 2025 at 1:57 PM
I will serve as Awards Co-Chair (together with Catalina M. Lladó) for ICPE'26:

icpe2026.spec.org/organizing-c...
Organizing Committee: ICPE 2026
icpe2026.spec.org
September 8, 2025 at 11:44 AM
Reposted by Philipp Leitner
arstechnica.com/ai/2024/10/h...

This is funny: one way to tell that OpenAI scraped YouTube is that its Whisper transcription is biased toward transcribing inaudible or garbled audio as "drop a comment in the section below" or "please like, subscribe, and share" and such
Hospitals adopt error-prone AI transcription tools despite warnings
OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.
arstechnica.com
January 15, 2025 at 2:43 PM
The reporting on the tragic ChatGPT suicide assistance story triggers me a little bit, but not in the way you might expect.

Teenagers turning to ChatGPT in times of crisis was bound to happen, given that mental healthcare everywhere is expensive, unavailable, and of embarrassingly shitty quality.
August 27, 2025 at 5:44 PM
I share this worry. To be honest it's a surprise that Scholar survived as long as it did.
August 27, 2025 at 8:49 AM
Reposted by Philipp Leitner
Since search is dead, how soon do you think Google Scholar is headed for the Google Graveyard? I'm betting it's soon, and academia is NOT prepared
Google Scholar Is Doomed
Academia built entire careers on a free Google service with zero guarantees. What could go wrong?
hannahshelley.neocities.org
August 13, 2025 at 1:28 AM
Reposted by Philipp Leitner
Btw. about 6 months ago the Anthropic CEO said that by now 50% of all code would be written by LLMs.

How does that prediction relate to the reality we all live in and what does that say about his ability to make predictions about the future?
Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months - Business Insider
"And then in 12 months, we may be in a world where AI is writing essentially all of the code," Anthropic CEO Dario Amodei said.
www.businessinsider.com
August 25, 2025 at 8:35 AM
Many decision processes end up unfair because for the deciding body false positives are disastrous but false negatives are almost irrelevant.

That's how you end up with coding interviews, overreaching visa requirements, and the like.
August 22, 2025 at 9:49 AM
Reposted by Philipp Leitner
Good morning Bluesky !
August 21, 2025 at 6:32 AM
Reposted by Philipp Leitner
Some colleagues and I are exploring #bias in #SoftwareEngineering #SoftwareDevelopment review activities. If you have a few minutes, please fill out our survey: forms.office.com/pages/respon...
Microsoft Forms
forms.office.com
August 18, 2025 at 1:15 PM
If you had outstanding PhD students in computer benchmarking or performance evaluation who graduated between Oct 2023 and Sept 2025: consider nominating them for the SPEC Kaivalya Dixit Distinguished Dissertation Award:

research.spec.org/awards/call-...

(I'm part of the selection group this year)
Call for Nominations | SPEC Research
research.spec.org
August 12, 2025 at 7:32 AM
Am I a fantasy football prodigy or is the NFL draft AI full of shit? Only time will tell.
August 7, 2025 at 12:31 PM
Reposted by Philipp Leitner
#KnowYourPC
@philippleitner.net, an Associate Professor at Chalmers and the University of Gothenburg, reminds us that non-functional properties like performance and energy efficiency will be critical in the AIware era.

💡 He invites you to contribute your ideas to AIware2025 (deadline in 2 days!)
July 23, 2025 at 12:52 PM