Bertram Højer
@brtrm.bsky.social
CS PhD Student @ IT University of Copenhagen |
NLP, AI reasoning and seeming intelligence

https://bertramhojer.github.io |
nlpnorth.bsky.social
I wrote a little piece about a pet peeve of mine: people claiming that LLMs "lie". Writing is great for getting your thoughts in order, and it feels as if there's a bit more at stake when writing for a potential audience.

Might do more of these in the future.

substack.com/home/post/p-...
Language Models Don't Lie
Discussions and coverage of AI systems such as chatbots based on LLMs use increasingly anthropomorphic language, such as claiming that LLMs lie. But LLMs cannot lie, and here's why.
open.substack.com
August 18, 2025 at 2:02 PM
Reposted by Bertram Højer
📣 Next week we will be in Vienna for @aclmeeting.bsky.social to present a couple of works from our lab!

Find more about each of them below 🧵👇

#NLP #NLProc #ACL2025NLP @itu.dk @aicentre.dk
July 22, 2025 at 2:43 PM
Reposted by Bertram Højer
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
June 19, 2025 at 11:21 AM
If you don't think I provided enough interesting findings in my last post, @annarogers.bsky.social has you covered in her latest post on our paper! ✨
🤯 We use the term 'intelligence' a lot, but wth do we mean?

We got 303 survey responses from researchers. The most agreed-on criteria are generalization, adaptability & reasoning.

ACL Findings preprint: arxiv.org/abs/2505.20959
with @brtrm.bsky.social @terne.bsky.social @heinrichst.bsky.social /1
June 2, 2025 at 9:35 AM
Reposted by Bertram Højer
📢 The Copenhagen NLP Symposium on June 20th!

- Invited talks by @loubnabnl.hf.co (HF) @mziizm.bsky.social (Cohere) @najoung.bsky.social (BU) @kylelo.bsky.social (AI2) Yohei Oseki (UTokyo)
- Exciting posters by other participants

Register to attend and/or present your poster at cphnlp.github.io /1
Copenhagen NLP Symposium 2025
symposium website
cphnlp.github.io
May 26, 2025 at 1:08 PM
Our survey paper "Research Community Perspectives on 'Intelligence' and Large Language Models" has been accepted to the ACL Findings 2025 - and I'll be in Vienna to present the work in July!

arxiv.org/abs/2505.20959
Research Community Perspectives on "Intelligence" and Large Language Models
Despite the widespread use of "artificial intelligence" (AI) framing in Natural Language Processing (NLP) research, it is not clear what researchers mean by "intelligence". To that end, we present...
arxiv.org
May 30, 2025 at 8:53 AM
If you’re at ICLR, swing by poster #246 on Saturday from 10:00 to 12:30 to hear more about our work on modulating the reasoning performance of LLMs!

#ICLR2025
ICLR is coming up and I thought I'd use the chance to advertise our paper: "Improving 'Reasoning' Performance in Large Language Models via Representation Engineering" ✨

Also happens to be my first publication as a PhD Student at @itu.dk
Improving Reasoning Performance in Large Language Models via...
Recent advancements in large language models (LLMs) have resulted in increasingly anthropomorphic language concerning the ability of LLMs to reason. Whether "reasoning" in LLMs should be...
openreview.net
April 24, 2025 at 1:34 PM
Reposted by Bertram Højer
The problem with most machine-based random number generators is that they’re not TRULY random, so if you need genuine randomness it is sometimes necessary to link your code to an external random process like a physical noise source or the current rate of US tariffs on a given country.
April 9, 2025 at 7:15 PM
ICLR is coming up and I thought I'd use the chance to advertise our paper: "Improving 'Reasoning' Performance in Large Language Models via Representation Engineering" ✨

Also happens to be my first publication as a PhD Student at @itu.dk
Improving Reasoning Performance in Large Language Models via...
Recent advancements in large language models (LLMs) have resulted in increasingly anthropomorphic language concerning the ability of LLMs to reason. Whether "reasoning" in LLMs should be...
openreview.net
March 31, 2025 at 2:27 PM
Reposted by Bertram Højer
it's amazing how ChatGPT knows everything about subjects I know nothing about, but is wrong like 40% of the time on things I'm an expert on. not going to think about this any further
March 8, 2025 at 12:13 AM
It’s a bit disturbing to hear Ezra Klein, someone I admire a lot, stating that “… virtually everyone working in this area (AI) are saying that [AGI] is coming”. In my view this is a gross misrepresentation of the actual sentiment in the field.
March 6, 2025 at 4:13 PM
Very harsh writing by Edward Zitron - but he voices concerns I have myself.

Developing helpful 'AI' systems could provide value, but the way current commercial 'AI' systems are being hyped is not very helpful and quite likely detrimental.

www.wheresyoured.at/longcon/
The Generative AI Con
It's been just over two years and two months since ChatGPT launched, and in that time we've seen Large Language Models (LLMs) blossom from a novel concept into one of the most craven cons of the 21st ...
www.wheresyoured.at
February 19, 2025 at 10:23 AM
Reposted by Bertram Højer
Not to mention that rather than being well-established observations, (1) is difficult if not impossible to assess without a proper definition of intelligence and (3) seems to be complete blather.
Three Observations
Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.  Systems that start to point to AGI* are coming into view, and so we think it’s important to...
blog.samaltman.com
February 10, 2025 at 6:21 AM
Reposted by Bertram Højer
Modern-Day Oracles or Bullshit Machines?

Jevin West (@jevinwest.bsky.social) and I have spent the last eight months developing the course on large language models (LLMs) that we think every college freshman needs to take.

thebullshitmachines.com
INTRODUCTION
thebullshitmachines.com
February 4, 2025 at 4:12 PM
The "Perspectives on Intelligence" survey is now closed! Thank you to the 200+ researchers who participated. Currently analyzing the data and writing up the findings - stay tuned for the paper!

Project in collaboration with @terne.bsky.social, @annarogers.bsky.social & @heinrichst.bsky.social!
Perspectives on Intelligence: Community Survey
Research survey exploring how NLP/ML/CogSci researchers define and use the concept of intelligence.
bertramhojer.github.io
January 17, 2025 at 2:07 PM
Do researchers in AI related fields believe that state-of-the-art language models are intelligent? And how do we even define intelligence?

If you haven't yet responded, consider taking part in our survey. We'd love to hear your take!

Details and link in the original post 👇
What do YOU mean by "intelligence", and does ChatGPT fit your definition?
We collected the major criteria used in CogSci and other fields, and designed a survey to find out!

Access link: www.survey-xact.dk/collect
Code: 4S7V-SN4M-S536
Time: 5-10 mins
Perspectives on Intelligence: Community Survey
Research survey exploring how NLP/ML/CogSci researchers define and use the concept of intelligence.
bertramhojer.github.io
December 16, 2024 at 1:26 PM
Very cool paper on the internal dynamics of reasoning in LMs. The approach (Chain of Continuous Thought) lets models reason in continuous latent space rather than being constrained to generating specific tokens.

arxiv.org/abs/2412.06769
Training Large Language Models to Reason in a Continuous Latent Space
Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. Ho...
arxiv.org
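The core idea can be sketched in a few lines: instead of collapsing each reasoning step to a discrete token (argmax over the vocabulary, then re-embedding), the last hidden state is fed straight back in as the next input. This is a toy NumPy illustration only, not the paper's actual architecture; the dimensions, weights, and function names are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab_size = 8, 16
W = rng.normal(size=(d, d))               # stand-in for one transformer step
embed = rng.normal(size=(vocab_size, d))  # toy token embedding table

def hidden_step(h):
    """One toy 'reasoning' step in hidden space."""
    return np.tanh(W @ h)

def discrete_step(h):
    """Standard CoT: collapse the state to a token, then re-embed it."""
    logits = embed @ hidden_step(h)  # project onto the vocabulary
    token = int(np.argmax(logits))   # information is lost at this bottleneck
    return embed[token]

def continuous_step(h):
    """Coconut-style: feed the hidden state back in directly, no token."""
    return hidden_step(h)

h0 = rng.normal(size=d)
print(discrete_step(h0).shape, continuous_step(h0).shape)  # both (8,)
```

The contrast is the point: `discrete_step` must pick one of 16 embedding rows at every step, while `continuous_step` keeps the full d-dimensional state, which is the constraint the paper's approach removes.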
December 11, 2024 at 1:48 PM
What do YOU mean by "intelligence", and does ChatGPT fit your definition?
We collected the major criteria used in CogSci and other fields, and designed a survey to find out!

Access link: www.survey-xact.dk/collect
Code: 4S7V-SN4M-S536
Time: 5-10 mins
Perspectives on Intelligence: Community Survey
Research survey exploring how NLP/ML/CogSci researchers define and use the concept of intelligence.
bertramhojer.github.io
December 4, 2024 at 7:48 AM