John Travers
jctravs.bsky.social
@jctravs.bsky.social
RAEng Chair in Emerging Technologies and Professor of Physics at Heriot-Watt University, Edinburgh. Ultrafast optics, solitons, hollow fibres.

https://lupo-lab.com
Oh no! It was great to meet you up here in Edinburgh. Such a shame the trains are so often a mess.
November 4, 2025 at 10:47 PM
Quite common, and I prefer it. If your calendar is up to date it saves a lot of emails/messages between people. And you can always move it back in the same way!
October 31, 2025 at 2:41 PM
This is a good point! What frustrates me most is that these models are not actually trained on research papers. Copyright and environmental issues aside, a ChatGPT-like model trained on all scientific literature and books could be a really useful thing.
October 28, 2025 at 2:24 PM
The sort of good quality answers that you posted build confidence, but then suddenly you are faced with something that doesn't quite make sense. Only through deep personal knowledge can you notice it. My fear is that the majority of users will never know. That makes me uneasy.
October 28, 2025 at 2:21 PM
No, I think it is a perfectly good answer. But I agree with @jacopobertolotti.com that it is Wikipedia level. That is not a problem, and the fact that you get a specific direct answer is really useful. My issue is for the more subtle edge cases, where genuinely deep understanding is needed.
October 28, 2025 at 2:21 PM
I don't have an explicit example off the top of my head, but so far (even with GPT-5 thinking) I have found many subtle errors when discussing physics. I'll try to remember to post one when it next occurs.
October 27, 2025 at 4:44 PM
That is the main problem. When humans don't know something or start to BS, it is obvious. When chatbots don't "know" something it is often indistinguishable from when they are correct, unless you already know the answer. So we must use them as ignorant, but great, language manipulation tools.
October 27, 2025 at 4:44 PM
It astonishes me how often the answers are wrong (even with pro-level tools). And when wrong, it is often in subtle and hard-to-notice ways. When constrained appropriately, it can be very useful. But it is exceedingly scary how many people take the authoritative-sounding and articulate answers as fact. (2/2)
October 27, 2025 at 4:27 PM
I disagree. I've been trying to make better use of this technology (partly motivated by you!) and made significantly more progress with the perspective that these things do not think at all and just complete maximum-likelihood sentences. It is then a great tool for text or code wrangling. (1/2)
October 27, 2025 at 4:27 PM
Reposted by John Travers
"Matlab is the Visual Basic 6 of technical computing." #HeardAtJuliaCon #JuliaCon #JuliaLang
October 3, 2025 at 11:04 AM
I'm curious what your use cases are (within academia), especially as you seem quite optimistic about AI use. I mostly try out some AI tools every few months, have a brief wave of optimism, but then give up and go back to my old ways after finding the output not as good as I thought.
September 17, 2025 at 10:35 AM
Reposted by John Travers
If you burn down a forest, you don't miss out on lumber for just that season. You have to replant all the trees, nurture them, and wait for them to grow.

The science and research budget cuts happening now are wanton, senseless arson. Recovery, if it ever happens, will take generations.
June 3, 2025 at 9:02 PM
Reposted by John Travers
Even accepting the premise that AI produces useful writing (which no one should), using AI in education is like using a forklift at the gym. The weights do not actually need to be moved from place to place. That is not the work. The work is what happens within you.
April 15, 2025 at 2:56 AM