Adrian Chan
@gravity7.bsky.social
Bridging IxD, UX, & Gen AI design & theory. Ex Deloitte Digital CX. Stanford '88 IR. Edinburgh, Berlin, SF. Philosophy, Psych, Sociology, Film, Cycling, Guitar, Photog. LinkedIn: adrianchan. Web: gravity7.com. Insta, X, medium: @gravity7
Yes: people will still need a phone, and a lot of AI products, services, and UIs will need a screen. And a touchable one at that.
June 3, 2025 at 1:46 AM
They mostly test whether they can steer positive/negative responses. But given Shakespeare was also a test case, it would be interesting to extract style vectors from any number of authors and then compare generations (rough sketch below). (Is this approach used in those "historical avatars"? No idea.)
May 14, 2025 at 2:43 PM
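A minimal sketch of the style-vector idea above, assuming the simplest activation-steering recipe: a mean activation difference between an author's prose and neutral prose, added back at one layer during generation. The model, layer, scale, and tiny corpora are all placeholders, not anything from the thread.

```python
# Rough sketch: derive a "style vector" for an author as the mean difference
# in hidden activations between the author's prose and neutral prose, then
# add it back into the residual stream during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in; any causal LM exposing hidden states works
LAYER = 6        # assumed steering layer, to be tuned empirically
SCALE = 4.0      # assumed steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def mean_hidden(texts):
    """Mean hidden state at block LAYER, averaged over tokens and texts.
    hidden_states[0] is the embedding output, so block LAYER is index LAYER+1."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        vecs.append(out.hidden_states[LAYER + 1].mean(dim=1))  # (1, d_model)
    return torch.cat(vecs).mean(dim=0)

# Hypothetical corpora: a few passages by the target author vs. plain prose.
author_texts = ["To be, or not to be, that is the question ..."]
neutral_texts = ["The meeting is scheduled for Tuesday at noon."]
style_vec = mean_hidden(author_texts) - mean_hidden(neutral_texts)

# Forward hook: add the style vector to block LAYER's output while generating.
def steer(module, inputs, output):
    return (output[0] + SCALE * style_vec.to(output[0].dtype),) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("The weather today is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=True,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

Generations from several authors' vectors could then be compared side by side, e.g. by steering the same prompt with each author's vector in turn.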
But design will need to focus on tweaking model interactions so that they track conversational content and turns over time. For example, with bi-directional prompting, models prompt users to keep conversations on track (a toy sketch follows below).

This seems a rich opportunity for interaction design #UX #IxD #LLMs #AI
May 14, 2025 at 1:38 PM
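A toy sketch of what bi-directional prompting could look like in code: after each user turn, the model is asked whether the dialog has drifted from a stated goal, and if so it prompts the user back on track. The `llm` callable, the drift-check prompt, and the stub are hypothetical placeholders, not a real API.

```python
# Toy sketch of "bi-directional prompting": after each user turn the model
# judges whether the dialog has drifted from the session goal; if so, it
# issues a re-orienting prompt back to the user instead of a normal reply.
from typing import Callable

DRIFT_CHECK = ("Session goal: {goal}\nLast user message: {msg}\n"
               "Answer ON_TOPIC or OFF_TOPIC, then one sentence of rationale.")

def converse(llm: Callable[[str], str], goal: str, user_msgs: list[str]) -> None:
    history: list[str] = []
    for msg in user_msgs:
        history.append(f"user: {msg}")
        verdict = llm(DRIFT_CHECK.format(goal=goal, msg=msg))
        if verdict.startswith("OFF_TOPIC"):
            # Model-to-user prompt: steer the conversation back on track.
            nudge = llm(f"Politely redirect the user toward the goal: {goal}")
            history.append(f"assistant (redirect): {nudge}")
        else:
            history.append("assistant: " + llm("\n".join(history) + "\nassistant:"))
    print("\n".join(history))

# Stub LLM so the sketch runs standalone; swap in any real model client.
def stub_llm(prompt: str) -> str:
    if prompt.startswith("Session goal"):
        return "OFF_TOPIC: brings up vacations." if "vacation" in prompt else "ON_TOPIC."
    return "(model output)"

converse(stub_llm, "debug the login flow",
         ["The login 500s on submit.", "Unrelated: any vacation spots you like?"])
```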
to sustain dialog. Social interaction, face to face or online, is already vulnerable to misunderstandings and failures, and we make use of countless signals, gestures, etc. with which to rescue our interactions.

A communication-first approach to LLMs for conversation makes sense, as talk is not writing.
May 14, 2025 at 1:38 PM
"when LLMs take a wrong turn in a conversation, they get lost and do not recover."

Interaction design is going to be necessary to scaffold LLMs for talk, be it voice, single-user chat, or multi-user (e.g. social media).

It's one thing to read/summarize written documents, quite another ...
May 14, 2025 at 1:38 PM
Perhaps one could fine-tune on Lewis Carroll, then feed the model philosophical paradoxes and see whether it produces more imaginative generations (a rough sketch of this follows below).
May 12, 2025 at 5:21 PM
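For what it's worth, one way the experiment could be run: a LoRA fine-tune of a small causal LM on a Carroll corpus, then sampling from paradox prompts. The corpus file, model, and hyperparameters are assumptions, not a tested recipe.

```python
# Sketch: adapt a small causal LM on a Lewis Carroll corpus with LoRA, then
# sample completions for paradoxical prompts and inspect the generations.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "gpt2"
tok = AutoTokenizer.from_pretrained(MODEL)
tok.pad_token = tok.eos_token
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(MODEL),
    LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
               target_modules=["c_attn"]))

# Hypothetical corpus file: plain-text Carroll (Alice, Snark, Sylvie & Bruno).
ds = load_dataset("text", data_files={"train": "carroll.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=256),
            batched=True, remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments("carroll-lora", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()

# Probe with a philosophical paradox; compare against the base model's output.
ids = tok("This sentence is false. What follows from it?", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=60, do_sample=True, temperature=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```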
I think because this isn't making the model trip synesthetically, but is simply giving it juxtapositions. So what is studied is the response to these paradoxical, conceptually incompatible prompts, not a measure of any latent conceptual activations or features (a probe sketch of the latter is below).
May 12, 2025 at 5:21 PM
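By contrast, here is roughly what measuring latent conceptual activations would look like: a linear probe fit on hidden states to detect a concept, read off the residual stream rather than the output text. The toy labels, layer, and model are assumptions.

```python
# Sketch: fit a linear probe on hidden states to detect a "paradox" concept,
# reading the latent representation rather than the generated text.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL, LAYER = "gpt2", 6  # assumed model and probe layer
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, output_hidden_states=True).eval()

def rep(text: str) -> torch.Tensor:
    """Token-averaged hidden state at LAYER for one text."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        return model(**ids).hidden_states[LAYER].mean(dim=1).squeeze(0)

# Tiny hypothetical dataset: texts that do / don't invoke the concept.
pos = ["This statement is false.",
       "The barber shaves everyone who does not shave himself."]
neg = ["The train departs at nine.",
       "Water boils at 100 degrees Celsius at sea level."]
X = torch.stack([rep(t) for t in pos + neg]).numpy()
y = [1] * len(pos) + [0] * len(neg)

probe = LogisticRegression(max_iter=1000).fit(X, y)
test = rep("I always lie.").numpy().reshape(1, -1)
print(probe.predict(test))  # the probe's read of the latent, not the output
```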
Yes, and the label applied says as much about the person as it does about the model. In the world of creatives, the most-used term now is "slop," derived perhaps from enshittification, the latter capturing corporate malice where the "slop" is AI-generated byproduct unfit for human consumption...
May 10, 2025 at 5:08 PM
The thread started with your second post, so yes, I missed the initial post. Never mind.
May 10, 2025 at 4:53 PM
Assuming alignment using synthetic data is undesirable, one route is to complement global alignment (alignment to some "universally" preferred human values) with local, contextualized alignment via feedback from the individual user's use. Tune the LLM's behavior to user preferences.
May 10, 2025 at 4:43 PM
Customized LLMs would use the feedback obtained from individual user interactions and align to those preferences (a minimal sketch below).
May 10, 2025 at 4:34 PM
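One cheap shape this could take, as a sketch: log thumbs up/down as preference pairs per user, then periodically distill them into user-specific instructions prepended to every prompt. The pair format and distillation prompt are assumptions; the same pairs would also feed a DPO-style local fine-tune if you went that far.

```python
# Sketch of "local alignment": accumulate per-user feedback as preference
# pairs, then distill them into user-specific instructions prepended to each
# prompt. Global alignment is untouched; only this user's layer changes.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UserProfile:
    # (prompt, chosen, rejected) triples from thumbs up/down on alternatives;
    # also exactly the input a DPO-style local fine-tune would consume.
    pairs: list[tuple[str, str, str]] = field(default_factory=list)
    instructions: str = ""

def record_feedback(p: UserProfile, prompt: str, chosen: str, rejected: str):
    p.pairs.append((prompt, chosen, rejected))

def refresh_instructions(p: UserProfile, llm: Callable[[str], str]) -> None:
    """Ask the model to summarize the style this user's choices imply."""
    examples = "\n".join(f"PREFERRED: {c}\nREJECTED: {r}"
                         for _, c, r in p.pairs[-20:])
    p.instructions = llm("Summarize, as style rules, what this user prefers:\n"
                         + examples)

def answer(p: UserProfile, llm: Callable[[str], str], prompt: str) -> str:
    return llm(f"{p.instructions}\n\nuser: {prompt}\nassistant:")

# Stub model so the sketch runs standalone; swap in a real client.
stub = lambda prompt: "(model output for: " + prompt[:40] + "...)"
profile = UserProfile()
record_feedback(profile, "Explain DNS.", "Short, bulleted.", "Long essay.")
refresh_instructions(profile, stub)
print(answer(profile, stub, "Explain TLS."))
```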
Staying power of ceasefires becoming a proxy for multilateral resilience amid baseline rivalries?
May 10, 2025 at 4:33 PM
I think this will be one accelerant for individualized, personally customized AI, e.g. personal assistants. The verifiers can use the user's preferences and tune to those rather than apply globally aligned behavioral rules (sketch below).
May 10, 2025 at 4:29 PM
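A crude sketch of a user-tuned verifier: sample n candidates, score each against weights learned from the user's past feedback, and return the best. The keyword scorer is a deliberately naive stand-in for a learned, per-user reward model.

```python
# Sketch: best-of-n sampling where the verifier scores candidates against
# per-user preference weights instead of one global behavioral rulebook.
import random
from typing import Callable

# Assumed per-user weights, learned elsewhere from thumbs feedback.
USER_WEIGHTS = {"concise": 1.0, "bulleted": 0.5, "formal": -0.5}

def user_score(response: str) -> float:
    """Naive stand-in for a learned per-user reward model."""
    s = 0.0
    if len(response) < 400:
        s += USER_WEIGHTS["concise"]
    if "\n- " in response:
        s += USER_WEIGHTS["bulleted"]
    if response.startswith("Dear"):
        s += USER_WEIGHTS["formal"]
    return s

def best_of_n(generate: Callable[[str], str], score: Callable[[str], float],
              prompt: str, n: int = 4) -> str:
    return max((generate(prompt) for _ in range(n)), key=score)

# Stub generator so the sketch runs standalone; swap in a real model call.
def stub_generate(prompt: str) -> str:
    return random.choice([
        "Short answer.\n- point one\n- point two",
        "Dear user, here follows a long and formal treatment of the topic. " * 5,
    ])

print(best_of_n(stub_generate, user_score, "Summarize the report."))
```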
It's also a problem of use cases and user adoption. Though it may turn out that Transformer-based AI does indeed fail to meet expectations.

There's a lot of misunderstanding and anthropomorphizing of AI's reasoning, for example, and that might not turn out well.
May 10, 2025 at 4:27 PM
Coincidentally, many startups of that time set up in loft & warehouse spaces with exposed concrete & steel beams... I like this analogy, especially for Social Interaction Design / Social UX, where "social architecture" is exposed for users to take up in norms, behaviors, and expectations for how to engage.
May 10, 2025 at 4:24 PM