Felipe Meneguzzi
@felipe.meneguzzi.eu
AI Researcher and former AAAI Councilor.
Professor of Computer Science at the University of Aberdeen.
Bridges Professor at PUCRS.
Views expressed here are my own.

https://www.meneguzzi.eu/felipe/
This is one of those pieces that will be remembered years from now as an example of "what we got wrong", especially about coders (see Hinton's comments about radiologists...).
December 1, 2025 at 7:54 PM
So, if anyone saw my reviews on OpenReview, disagrees with them, and wants to email me to chat, I'd actually be more than happy to talk through any improvements I suggested that they felt were unclear.
November 30, 2025 at 7:47 PM
Now I'm still getting similar reports (on code they generated with LLMs). These students clearly see technical writing as a performative act, whose relation to truth or evidence is immaterial. This is the danger of the current direction of travel.
November 30, 2025 at 2:56 PM
This was a case where an unintuitive result from a poor heuristic actually led to worse performance than blind search. The assignment was instrumented to produce graphs showing the disparity in performance, yet the report duly noted that the heuristic improved performance.
November 30, 2025 at 2:56 PM
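A minimal sketch of the kind of failure described above, on an invented toy graph (this is not the assignment's actual code; the graph, heuristic values, and function names are all hypothetical): a deliberately misleading heuristic makes greedy best-first search expand more nodes and return a costlier path than plain breadth-first search.

```python
# Minimal sketch: a poor heuristic vs. blind search on a toy graph.
# Everything here (graph, heuristic values, names) is invented for illustration.
from collections import deque
import heapq

# Unit-cost toy graph: the short route is S -> D -> G (cost 2),
# the long detour is S -> A -> B -> C -> E -> G (cost 5).
GRAPH = {
    'S': ['A', 'D'],
    'A': ['B'], 'B': ['C'], 'C': ['E'], 'E': ['G'],
    'D': ['G'],
    'G': [],
}

# A poor heuristic: it wildly overestimates D (on the short route)
# and steadily rewards the long detour.
BAD_H = {'S': 5, 'A': 4, 'B': 3, 'C': 2, 'E': 1, 'D': 10, 'G': 0}

def bfs(start='S', goal='G'):
    """Blind breadth-first search: returns (path, nodes expanded)."""
    frontier, seen, expanded = deque([[start]]), {start}, 0
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        expanded += 1
        if node == goal:
            return path, expanded
        for nxt in GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None, expanded

def greedy(start='S', goal='G'):
    """Greedy best-first search guided only by BAD_H."""
    frontier, closed, expanded = [(BAD_H[start], [start])], set(), 0
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node in closed:
            continue
        closed.add(node)
        expanded += 1
        if node == goal:
            return path, expanded
        for nxt in GRAPH[node]:
            if nxt not in closed:
                heapq.heappush(frontier, (BAD_H[nxt], path + [nxt]))
    return None, expanded

if __name__ == '__main__':
    for name, search in [('BFS (blind)', bfs), ('Greedy (poor heuristic)', greedy)]:
        path, expanded = search()
        print(f'{name}: path={path} cost={len(path) - 1} expanded={expanded}')
```

On this toy instance, blind BFS finds the cost-2 path after 5 expansions, while greedy search follows the heuristic down the detour, expanding 6 nodes and returning the cost-5 path. That is the kind of disparity the instrumented graphs would show; the assignment's actual domain and numbers would of course differ.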
Exactly, and even then, you could read in the reports from students who used LLMs that they clearly did not understand even the implementation they (at the time) copied from the internet (these are cases where I caught traditional misconduct).
November 30, 2025 at 2:56 PM
One thing happening in the UK (and which I feel will soon happen in the US, given funding cuts) is that, in response to the financial pressure on universities to do more with less, academic managers seem to think they can cheaply replace proper teaching with LLMs. This is a fantasy.
November 30, 2025 at 12:18 PM
While I understand the utility of LLMs for various things, I am adamantly against their widespread use in activities that have life-changing consequences for humans (such as education). I do not buy the argument that it is pointless to resist (an argument I get increasingly hammered with).
November 30, 2025 at 12:15 PM
This is happening even in science departments, and specifically, in my case, in a CS department, in an AI module (of all things), even after students were warned that the assignments were designed to be resilient to LLMs (the proper term for what drives these chatbots, since AI is much more than that).
November 30, 2025 at 12:13 PM
It's a good thing that I don't particularly mind if the authors of the papers I handled know who I am.
November 28, 2025 at 11:20 AM
There were mini waves, such as Angolans and Lebanese fleeing the civil wars of the 1980s, but the numbers were tiny compared with the population.
November 27, 2025 at 2:34 PM
And a rather surprising statistic (even to him) is that Japan (known for valuing a homogeneous society) has a larger share of immigrants than Brazil.
November 27, 2025 at 2:34 PM
Yes, a while ago I was talking with a student (of Japanese descent) about how, despite Brazil's reputation as a melting pot, the last significant wave of immigrants to this country was probably his ancestors (his grandparents), and how, since the mid-1950s, the waves of immigrants have been tiny.
November 27, 2025 at 2:31 PM
Reposted by Felipe Meneguzzi
For lots of reasons, I don’t like LLMs and I don’t use them, but I know there are serious people working on ways to meaningfully incorporate them into education and I don’t doubt there are ways to do that productively. It’s probably obvious that “Have the LLM tell you the answer” isn’t one of them.
November 21, 2025 at 12:55 PM
The first ChatGPT was indeed a technological breakthrough (to the extent that the underlying model could scale). Almost everything after that stemmed from OpenAI management drinking their own Kool-Aid. Bullshit-driven innovation does not survive long in the open market.
November 20, 2025 at 11:06 AM