Adrian Chan
@gravity7.bsky.social
Bridging IxD, UX, & Gen AI design & theory. Ex Deloitte Digital CX. Stanford '88 IR. Edinburgh, Berlin, SF. Philosophy, Psych, Sociology, Film, Cycling, Guitar, Photog. LinkedIn: adrianchan. Web: gravity7.com. Insta, X, Medium: @gravity7
"coherent discourse organisation. This is achieved by either pointing backward to previously discussed material or forward to upcoming propositions" ...wouldn't the lack of cataphoric reference be explained by the Transformer architecture, which doesn't anticipate arguments made later?
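The architectural point can be made concrete: in a decoder-only Transformer, a causal mask zeroes out attention to future positions, so no token can "point forward." A minimal numpy sketch (my own illustration, not from the quoted paper):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Boolean mask: position i may attend only to positions j <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Attention scores at masked (future) positions are set to -inf before
# softmax, so they receive zero attention weight.
mask = causal_mask(4)
scores = np.zeros((4, 4))
scores[~mask] = -np.inf
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Row 0 attends only to itself; row 3 attends to all four positions.
assert weights[0, 1] == 0.0
assert np.allclose(weights[3], 0.25)
```

So at generation time the model never conditions on later propositions, consistent with the observed scarcity of cataphora.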
May 3, 2025 at 5:46 PM
This missive from Dario is worth the read (and mech interp on #AI is truly fascinating). Among features/concepts found in #LLMs: "genres of music that express discontent."

I'm reminded of Borges's Chinese Encyclopedia of Animals

www.darioamodei.com/post/the-urg...
April 25, 2025 at 6:50 PM
Yes ... and this visualization of the diffusion model's "thinking" is perhaps much more honest than the verbalization or inner dialog we see with exposed o1, o3, etc. reasoning traces.

arxiv.org/abs/2502.09992
February 27, 2025 at 3:16 PM
Compare this view of an LLM diffusion model generating its response to the "reasoning" we see in conventional LLMs. This view illustrates the degree to which seeing an AI "think" step by step sustains an illusion that it's actually thinking. Really it's just choosing its words carefully. #LLM #AI
February 20, 2025 at 3:01 PM
During research into Big Five personality traits, LLMs spontaneously started generating emojis. So the mech interp detectives went after it, and found that training on informal (conversational) data likely resulted in neurons that activate for emojis.
#LLM #ML #AI

arxiv.org/abs/2409.102...
January 28, 2025 at 7:32 PM
"when an LLM explains a concept, can it answer related questions derived from that explanation...?"

"gap highlights fundamental limitations in the internal knowledge representation and reasoning abilities of current LLMs"

#ML #LLM #AI

www.alphaxiv.org/abs/2501.11721
January 27, 2025 at 6:07 PM
"Spurious Forgetting" - fine-tuning is not necessarily the cause of catastrophic forgetting; task misalignment is.
#AI #ML #LLM #AIAlignment

www.alphaxiv.org/abs/2501.13453
January 27, 2025 at 5:34 PM
Polycrisis
January 9, 2025 at 9:15 PM
Using #LLMs to translate user preferences from user data & reviews. Demonstrates the difficulty of capturing user prefs from text vs structured data, sentiment, etc. Paper shows progress, but a fundamental question remains: reviews seek social status, so motives are polluted. #UX #AI #ML
www.alphaxiv.org/abs/2412.08604
January 7, 2025 at 6:34 PM
Cold, quiet, and wet out. Today might be the day for Satantango. #filmsky
December 22, 2024 at 6:22 PM
User experience, as usual, is going to shape the success of integrating AI into so many of these applications... Whilst it's convenient to Ask AI about a PDF within Adobe Acrobat, copying and pasting passages into ChatGPT is faster (though responses lack document context). This is a fail.
December 20, 2024 at 4:18 PM
Had one of those Eureka moments using a Claude text prompt to build and post a web page to GitHub using Claude Computer Use. Required some tinkering. AI Hacklets will soon be shared like Pinterest boards. I could see this being the social sharing feature of Gen AI. #AI #GenAI #Claude #Sonnet
December 16, 2024 at 7:42 PM
Watched 2001: A Space Odyssey again last night.

This is what Hal could do. Are we there yet? Getting close?
December 14, 2024 at 9:43 PM
Trust in AI - a huge concept for AI design. This definition from '22 doesn't account for hallucinations (facticity), deceptions (fakes), automations/delegations (comprehension?), and other LLM/GenAI user trust issues.

Is there a more recent survey?
#UX #ML #AI #AIEthics

arxiv.org/abs/2205.00189
December 13, 2024 at 6:55 PM
Great book. How many of our product and company ideas should Stephenson, Gibson, others get credit for?!

Though funnily I think of this from Westworld as more of how we use gen AI
December 6, 2024 at 3:02 PM
Known knowns and known unknowns! Neel Nanda et al. find entity recognition within LLMs to be a factor in chat refusals & hallucinations.
I can't determine whether this would have implications for user prompting. Would this suggest an SEO-type approach to prompting?

#AI #artificialintelligence #ML #LLM
December 4, 2024 at 2:25 PM
This was interesting - it could be a technique for generating unexpected use cases, scenarios, outcomes, etc. (in other fields)
December 3, 2024 at 4:57 PM
UXers, designers... What to make of these AI-generated conversational personas, intended to give creators more insight into their audiences by conversing with agents about their content?
arxiv.org/abs/2408.109...
#ux #AI #design
November 20, 2024 at 10:50 PM