Antonio E. Porreca 🐳
@aeporreca.org
Maître de conférences (lecturer) at https://univ-amu.fr & researcher at https://lis-lab.fr 🐗🇫🇷 • Natural computing, discrete dynamical systems & complexity • Down with generative AI • I like (human) languages • This is a pandemic, y’all 😷 • aeporreca.org
Reposted by Antonio E. Porreca 🐳
Like these problems have been around for a while, but ChatGPT presents a unique temptation for students. I think if it existed 50 years ago we’d see a lot of the same issues. Or in other words, I think ChatGPT is a problem in and of itself, alongside larger structural/institutional issues.
December 10, 2025 at 8:03 PM
Reposted by Antonio E. Porreca 🐳
The "No Generative AI" Pledge:

"I oppose the use of so-called generative AI. I will not willingly interact or transact business with individuals and organizations that promote its use. Where alternatives exist, I will use them. Where they do not, I will support their creation."
November 24, 2025 at 2:52 PM
My point is that, in any practical sense, “automating the process of proving mathematical truths” requires an algorithm that always halts with a yes or no answer, which does not exist. Just “proving truths” (or theorems) and not halting if no proof exists is not enough.
December 10, 2025 at 11:12 AM
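The point that a still-running prover tells you nothing can be sketched with a toy proof enumerator. This is a minimal illustration, not anything from the thread: the `semi_decide` function, the string-based "proof system", and the `max_steps` cutoff are all assumptions made for the example.

```python
from itertools import count, product

def semi_decide(phi, is_valid_proof, alphabet="01", max_steps=None):
    """Enumerate every finite string over `alphabet` as a candidate proof.
    Halts with True iff some candidate proves `phi`; otherwise it runs
    forever. Returning None after `max_steps` gives NO information about
    phi: a proof might still appear one step later, or never."""
    steps = 0
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            if max_steps is not None and steps >= max_steps:
                return None  # no verdict either way
            steps += 1
            if is_valid_proof(phi, "".join(letters)):
                return True

# Toy "proof system" (purely illustrative): a string proves phi
# iff it is exactly phi itself.
check = lambda phi, candidate: candidate == phi
```

Here `semi_decide("101", check)` halts with `True`, but `semi_decide("22", check, max_steps=100)` returns `None`, and no finite cutoff can turn that into a "no": that is exactly why a semi-decision procedure is not a decision procedure.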
But if your prover hasn’t halted yet on a sentence φ, you cannot infer anything about φ. Maybe it will find a proof of φ later, maybe it will never halt because it’s not a theorem. (And I’m simplifying here by assuming that the axioms are true, since otherwise you have provable false statements…)
December 10, 2025 at 8:33 AM
Arithmetical truths (the OP is about truth, not theorems, if we take it at face value) are not *effectively* axiomatisable.
December 10, 2025 at 7:39 AM
Second, establishing whether something is true (and not halting on some sentences when it is not, as is necessary if we don’t admit wrong answers) is not sufficient in practice, since you cannot infer anything from a run of your program that hasn’t halted yet. This is surely the context of the OP.
December 10, 2025 at 7:34 AM
First of all, just to be precise, theoremhood is not the same as truth (something can be true, at least in classical metalogic, even if it is not a theorem; that’s one consequence of incompleteness). Theoremhood is recursively enumerable, as you point out, while (classical) truth is not.
December 10, 2025 at 7:34 AM
And the incompleteness theorems apply to effectively axiomatised theories, which is all about automation.
December 10, 2025 at 7:04 AM
Once again, you are missing the point. Your theorem-proving program will not stop if no proof exists, so you can’t use it to establish theoremhood (let alone truth).
December 10, 2025 at 7:01 AM
For anyone actually interested in this subject: it is indeed about automation, and programs that automatically prove theorems are irrelevant in this context since they do not work on all statements.
December 10, 2025 at 6:37 AM
Reposted by Antonio E. Porreca 🐳
i mean, we have learned (Gödel, Church, Turing, et al) that we cannot—hard *cannot*—automate the process of proving *mathematical* truths.

general knowledge? forget it
December 9, 2025 at 1:01 PM
Reposted by Antonio E. Porreca 🐳
The very concept of “prompt engineering” gives the game away
December 9, 2025 at 8:26 PM
Reposted by Antonio E. Porreca 🐳
All efforts to get good results out of LLMs are more alchemy than “prompt engineering”. It’s prodding with a stick and hoping your hypothesis is true while never being able to prove it
December 9, 2025 at 5:37 PM
Reposted by Antonio E. Porreca 🐳
Tuesday, December 9

QUILLER
From the Occitan quiha, quiller means to place an object up high, to perch it. A verb well known to the local kids (minots), who often quillent the ball into the neighbour’s garden: “You’re the one who quilled it, you go get it!” Fortunately, whatever is quillé can be déquillé.
December 9, 2025 at 5:40 PM
Rare image of Château Gumbear in Marseille.
December 9, 2025 at 12:26 PM
Reposted by Antonio E. Porreca 🐳
there are literally thousands of Machine Learning papers that just find any old public dataset and then do something pointless with it. I particularly blame editors/reviewers for not rejecting these.
December 9, 2025 at 6:57 AM