Raoul de kezel
@raouldekezel.bsky.social
I noticed a year ago that some people here will neither listen, nor learn, nor experiment. I guess digging out good old Thomas Kuhn won't help either ...
June 2, 2025 at 4:54 PM
Nope. I preferred 3.5. It picked up on my (terse) questions better in the context of the chat.
March 10, 2025 at 10:02 AM
Oops, I spoke too soon, that's no longer true…
December 15, 2024 at 6:29 PM
Except that the debt is held by the Belgians: Manneken Pis out, Manneken Pis in.
December 15, 2024 at 6:13 PM
To conquer without peril … 😂
December 15, 2024 at 6:02 PM
As the keyword « co-vide » ("co-empty") quickly suggests 🙄
December 8, 2024 at 12:37 PM
x.com
December 5, 2024 at 8:46 PM
10 to 1 you didn't do it 😁
December 5, 2024 at 12:20 PM
Actually, LLMs do have a short-term memory, the context window, if your point is that some working storage is needed.
December 5, 2024 at 7:06 AM
Do you think your brain knows facts? More accurately than a book? Why? I realize you want to involve consciousness in your definition, and I don't want to, so let's agree to disagree 😉
December 4, 2024 at 3:57 PM
This is too subtle a distinction for me. If I ask an LLM for the Pythagorean theorem and it answers correctly repeatedly, then to me it knows the fact. And that is exactly how we test students! There is no need for consciousness to model the experimental data here.
December 4, 2024 at 3:43 PM
I don't know whether LLMs reason, and I certainly don't know whether they are conscious. I'm pretty sure they know facts (with high reproducibility).
December 4, 2024 at 3:22 PM
Good. I don’t claim that 🙂
December 4, 2024 at 3:06 PM
I'm sorry, but I have zero opinion on the relationship between reasoning and consciousness. I believe that a system can store (representations of) facts without consciousness. At any rate, I don't claim that LLMs reason. I claim that the argument used here to show that they do not seems invalid to me.
December 4, 2024 at 2:53 PM
I make zero claims with respect to consciousness. I claim that saying « a complex system cannot exhibit this or that property because first principles are such and such » is not obvious and has many counterexamples.
December 4, 2024 at 2:38 PM
Actually, you are the one claiming that there is an undefined thing called « consciousness » that exists in the human brain but not in LLMs, so the burden of proof should be yours. Anyway, if anything, you agree that unexpected properties emerge from complex systems such as our brain 🙂
December 4, 2024 at 2:23 PM
I don't know how to define or measure consciousness.
December 4, 2024 at 2:11 PM
« I don't think it can be emphasized enough that large language models were never intended to do math or know facts » They are not good at it, for sure, but please consider that (1) they may be good in other fields, and (2) humans were never intended to do math either.
December 4, 2024 at 2:00 PM