Neil Cohn
@neilcohn.bsky.social
Comics creating cognitive (neuro)scientist at Tilburg University studying language, brains, comics, emoji & multimodality (he/him). 😮‍💨🫠🫥🥹🫨

www.visuallanguagelab.com
One of our local shopping centers is a converted textile factory, so they made a comic to describe the history of the building and its conversion!
November 9, 2025 at 3:18 PM
More praise for Speaking with Pictures, my upcoming graphic novel on language, cognition, and visual communication. I'm counting down the days until this finally gets out and I can't wait...👀 visuallanguagelab.com/sip
November 7, 2025 at 2:08 PM
She additionally led the annotation of 300+ comics from around the world, and @cogirmak.bsky.social's analysis of them uncovered various abstract patterns in how motion events are encoded in different motion cues www.degruyterbrill.com/document/doi...
November 3, 2025 at 10:55 AM
She also examined comics directly for how they use motion cues. @cogirmak.bsky.social first used a corpus of 85 comics and showed that the depiction of motion events varies based on the structures of the languages spoken by those authors www.degruyterbrill.com/document/doi...
November 3, 2025 at 10:55 AM
In experiments that compared how people perceive the speed implied by these different motion cues, @cogirmak.bsky.social found that background lines and suppletion lines seem faster than normal motion lines, which mostly indicate direction, not speed journalofcognition.org/articles/10....
November 3, 2025 at 10:55 AM
In a review paper for Cognitive Science, @cogirmak.bsky.social showed that studies overall suggest that motion lines are not based on perception or metaphors, but are encoded as part of a visual lexicon that requires exposure and familiarity onlinelibrary.wiley.com/doi/full/10....
November 3, 2025 at 10:55 AM
Congrats to @cogirmak.bsky.social whose dissertation is now printed and ready to be defended in a few weeks! She researched the visual depiction of motion events as part of our broader TINTIN Project, so here’s a little thread of her work… www.visuallanguagelab.com/tintin
November 3, 2025 at 10:55 AM
My upcoming graphic novel, Speaking with Pictures, on cognition, language, and visual communication is out in only 3 more months... www.visuallanguagelab.com/sip
October 27, 2025 at 10:44 AM
We call these "backfixing motion lines", and work by @cogirmak.bsky.social has shown that people judge them to convey faster speeds than normal motion lines that trail behind a moving object journalofcognition.org/articles/10....
October 14, 2025 at 3:42 PM
A fun find: Hokusai used background motion lines about 200 years ago in this page from one of his books of picture stories. This is the oldest example of these that I know of, and interestingly manga tend to use these more than other types of comics (See corpus in: www.visuallanguagelab.com/poc)
October 14, 2025 at 3:42 PM
My upcoming graphic novel Speaking with Pictures is now officially at the printer and I'm counting down to when it's released in February! www.visuallanguagelab.com/sip
October 9, 2025 at 2:31 PM
Thanks to @bodowinter.bsky.social for the wonderful endorsement of my upcoming graphic novel about language, cognition, and visual communication! visuallanguagelab.com/sip
September 28, 2025 at 9:57 AM
Note that this works the same as the impossible trident "illusion"—each half is formed just fine, but when put together they make a discontinuous object. In both cases you can cover up one part and the other looks fine, but altogether it's discontinuous
September 23, 2025 at 11:16 AM
Another reminder that graphics are decomposable… I assume this was created by AI because the top implies the girl walks forward but the legs and feet suggest walking away. Each part is well-formed but the whole is odd
September 23, 2025 at 11:04 AM
I’ve heard people doubt that graphics can be broken down into “minimal units” but here’s Hokusai clearly showing how to build pictures out of basic parts from over 200 years ago (from the Hokusai Manga exhibit at the Creative Museum in Tokyo)
September 20, 2025 at 9:47 AM
New paper alert! Work by @cogirmak.bsky.social explores the motion events in 300+ comics from around the world, revealing subtle underlying features of different types of motion cues: "Whoosh! visual depictions of direction, speed, and temporality" www.degruyterbrill.com/document/doi...
September 19, 2025 at 1:34 PM
I got the proofs today for my upcoming graphic novel about language, cognition, and visual communication and I cannot contain myself with how excited I am for this book to finally come out after working on it for 7 years 😱 visuallanguagelab.com/sip
August 21, 2025 at 11:17 AM
You can even get a slight discount on pre-ordering my upcoming graphic novel, Speaking with Pictures, about language, drawing, comics, and visual communication, which comes out in February www.bloomsbury.com/uk/speaking-...
August 18, 2025 at 12:28 PM
My publisher @bloomsburyling.bsky.social is currently having a back-to-school sale, so all my books are now 30% off, including my most recent ones!

Patterns of Comics: www.bloomsbury.com/uk/patterns-...
Multimodal Language Faculty: www.bloomsbury.com/uk/multimoda...
August 18, 2025 at 12:28 PM
Today is our last day of funding for the TINTIN Project about the structure of global comics, which has been an amazing journey. Tomorrow we start the PICTREE Project, back to studying the neurocognition of comics!

TINTIN: www.visuallanguagelab.com/tintin

PICTREE: www.visuallanguagelab.com/pictree
July 31, 2025 at 2:58 PM
This is a terrible take, starting with the assumption that models based on *text* accurately reflect Language, and that Language itself is an amodal, arbitrary, symbolic system, which it's not. Language is inherently multimodal and polysemiotic, and text is not natural language production
July 26, 2025 at 7:22 PM
I’ve long speculated about how processing of comics might vary because of cultural reasons or fluency in patterns of specific visual languages (most recently in my book The Patterns of Comics), so it’s nice to see other work exploring these issues visuallanguagelab.com/poc
July 23, 2025 at 12:10 PM
This new paper uses eye-tracking to show that Japanese and American readers’ eye-movements vary when reading US comics and manga due to both differences in cross-cultural attention and familiarity with their different comics onlinelibrary.wiley.com/doi/10.1111/...
July 23, 2025 at 12:10 PM