viktorkewenig.bsky.social
@viktorkewenig.bsky.social
Thank you Gabriella!
July 22, 2025 at 9:05 AM
But see for yourself! concreteness.eu

All the thanks to my collaborators @gabriellavigliocco.bsky.social and @thelablab.bsky.social and big shout-out to the excellent Stanford NLU online course (this project started as my final submission).
Concreteness Rating
concreteness.eu
July 8, 2025 at 10:05 AM
Another bonus: these ratings can be generated "in-context", because transformer word embeddings change depending on the surrounding semantic information. (Think "I go to the bank to get money" vs. "I walk on the sandbank" -> previous methods were not able to differentiate between these.)
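The in-context point can be illustrated with a toy sketch (pure Python, not the paper's actual pipeline, and the vectors below are made-up values): give each word a tiny static vector, then build a crude "contextual" vector for a target word by averaging it with its context words. Even this toy pushes "bank" into different regions depending on its sentence; real transformer embeddings do the same thing far more powerfully.

```python
from math import sqrt

# Toy static word vectors (hypothetical values) on two axes:
# [finance-ness, nature-ness]. "bank" is ambiguous in isolation.
static = {
    "bank":  [0.5, 0.5],
    "money": [1.0, 0.0],
    "get":   [0.6, 0.2],
    "walk":  [0.1, 0.6],
    "sand":  [0.0, 1.0],
}

def contextual(word, context):
    """Crude 'in-context' vector: average the target word with its context words."""
    vecs = [static[word]] + [static[w] for w in context if w in static]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Same surface form, two different contexts:
bank_money = contextual("bank", ["get", "money"])   # financial sense
bank_sand  = contextual("bank", ["walk", "sand"])   # riverbank sense

# A static embedding would give identical vectors for both uses;
# the context-averaged vectors diverge.
print(bank_money, bank_sand)
print(cosine(bank_money, bank_sand))
```

A static (context-free) embedding would score the two uses as identical (cosine similarity 1.0), which is exactly why earlier concreteness methods could not tell them apart.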
July 8, 2025 at 10:05 AM
The result surpasses the state of the art and even the reliability of human raters (at least in English). Interestingly, we were able to extend the ratings to other languages with a simple translation step (suggesting that there may be something universal about these ratings).
July 8, 2025 at 10:05 AM
Why a multimodal transformer? We know that visual information is important for accurately rating concrete words. And because emotional information is important for accurately rating abstract words, we fine-tuned our model on a dataset of emotional image descriptions.
July 8, 2025 at 10:05 AM
Our multimodal transformer tool for automating word-concreteness ratings is published today

πŸ“ C-ratings are used in research across Cognitive Science
πŸ’Ά They take time and money to collect
βš™οΈ Automation solves this + we get in-context ratings for free!

www.nature.com/articles/s44...
A multimodal transformer-based tool for automatic generation of concreteness ratings across languages - Communications Psychology
This resource presents a tool to use multimodal transformers to generate reliable, context-sensitive concreteness ratings for single words and multi-word expressions across languages.
www.nature.com
July 8, 2025 at 10:05 AM
@emollick.bsky.social you might be interested in this paper of ours! It was a very nice collab across different orgs.
Effects of LLM Use and Note-Taking On Reading Comprehension and Memory: A Randomised Experiment in Secondary Schools
The rapid uptake of Generative AI, particularly large language models (LLMs), by students raises urgent questions about their effects on learning. We compared t
papers.ssrn.com
May 21, 2025 at 9:56 AM
I should say I mean this in a positive way for you, big fan of your work.
March 8, 2025 at 11:01 AM
I think the lesson here is that most likely what you are projecting onto someone is more telling of who you are than who that person is.
March 8, 2025 at 10:59 AM
Reposted
Sam Gershman writes beautifully about how theory-free neuroscience prevents the field from reaching its promise. Beautiful and true. Most folks do not test hypotheses. Running an NHST does not a hypothesis make. www.thetransmitter.org/theoretical-...
Breaking the barrier between theorists and experimentalists
Many neuroscience students are steeped in an experiment-first style of thinking. Let’s not forget how theory can guide experiments.
www.thetransmitter.org
February 24, 2025 at 6:13 PM
Most critiques of LLMs (e.g. @garymarcus.bsky.social) are ultimately about not wanting to accept that very dumb processes can lead to "intelligence". I think this paper offers a great perspective on how this could happen in both minds and machines. www.sciencedirect.com/science/arti...
Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks
Evolution is a blind fitting process by which organisms become adapted to their environment. Does the brain use similar brute-force fitting processes …
www.sciencedirect.com
February 16, 2025 at 5:54 PM
also that’s such an arrogant statement.
February 16, 2025 at 12:34 AM
I guess attention is all you need after all
February 16, 2025 at 12:30 AM
…but true
February 15, 2025 at 11:38 PM
I wish a smart mind like yours would not focus on the negative side for likes, but instead try to think about what we can do with the technology we have.
February 15, 2025 at 11:33 PM
that’s personal and not to the point.
February 15, 2025 at 11:30 PM
what are those other approaches and what impact have they had? I 100% appreciate the skepticism, but it just works. No matter how many failures you cite, it's absolutely incredible that we have a model as capable as today's.
February 15, 2025 at 11:29 PM
why always so negative, Gary?
February 15, 2025 at 8:46 PM
Reposted
I wish playing dead were an option when responding to reviewers.
February 3, 2025 at 8:35 PM
Reposted
The neurobiology of language does not operate in a vat. An important perspective from the @thelablab.bsky.social and colleagues: "Language is widely distributed throughout the brain" www.nature.com/articles/s41...
Language is widely distributed throughout the brain - Nature Reviews Neuroscience
Nature Reviews Neuroscience - Language is widely distributed throughout the brain
www.nature.com
January 7, 2025 at 3:06 PM
Reposted
Context is everything: How context influences the way the brain processes concepts.
Context is everything
How the brain processes concepts is influenced by contextual information, such as what a person is seeing, suggests new study.
buff.ly
December 8, 2024 at 9:21 AM
@hugospiers.bsky.social yes it really was a great review process, can only recommend.
December 5, 2024 at 9:41 PM
thanks for your support πŸ’ͺ
December 5, 2024 at 9:39 PM
The Version of Record (VOR) of this paper is out today in @elife.bsky.social.
Naturalistic encoding of concepts in the brain

@viktorkewenig.bsky.social shows that, while concepts generally encode habitual experiences, the underlying neurobiological organisation is not fixed but depends dynamically on available contextual information. πŸ‘πŸΎπŸ

elifesciences.org/articles/91522
December 5, 2024 at 9:39 PM