Brian Grellmann
@briangrellmann.bsky.social
💼 UX Research & Accessibility Lead in Finance
🏫 Industry Advisory Board HCI at 2 Universities
✍️ Posting summaries & reflections of my reading list
🐶 Rescue dog dad
This new case study shows:

⭐ strategies for building & exploring personal knowledge bases

⭐ how retrieval shapes the way people create & maintain notes

⭐ where AI could support knowledge work in the future
September 25, 2025 at 7:29 AM
Someone please run this study!
September 22, 2025 at 12:54 PM
💭 A relevant paper to our discussions in HCI curriculum development: how do we encourage critical thinking, understanding, and enquiry around AI to meet workforce skills requirements, while upholding academic integrity and enforcing against misuse?

arxiv.org/pdf/2506.22231
July 9, 2025 at 6:38 PM
The paper suggests four recommendations, summarised here:

1. Redesign assessments to emphasise process and originality

2. Enhance AI literacy for staff and students

3. Implement multi-layered enforcement and detection

4. Develop clear and detailed AI usage guidelines
July 9, 2025 at 6:38 PM
🍎 There are pedagogical concerns, like the erosion of academic integrity and the risk of misinformation. If AI is used as a shortcut rather than a learning aid, unfettered use could reduce understanding and the ability to think critically.

So what can universities do?
July 9, 2025 at 6:38 PM
✅ AI can provide great benefit across the academic spectrum: writing research grants, increasing research productivity, and transforming teaching and learning.

⛔️ It also presents risks: misuse is prevalent in student work, and forensic AI detection has limitations.
July 9, 2025 at 6:38 PM
In short, the paper combines worker sentiment and expert views to show that AI agents are most valuable when humans and machines collaborate, not when AI operates alone.

Responsible AI should:
✅ Center human agency
✅ Align AI design with worker preferences
✅ Recognise where human strengths truly shine
July 6, 2025 at 7:28 AM
The authors suggest key human skills are shifting with AI adoption:

The demand for information-processing skills is shrinking.

Meanwhile, interpersonal and organisational skills are found in tasks that demand high human agency.

Could this have implications for training, hiring, and designing with AI in mind?
July 6, 2025 at 7:28 AM
The authors highlight 4 core insights; here are 2 of them:

2️⃣ There are mismatches between what AI can do and what workers want it to do

4️⃣ There’s a broader skills shift underway: from information-processing to interpersonal competence
July 6, 2025 at 7:28 AM
They introduce the Human Agency Scale: a shared language for human-AI task relationships (a rough code sketch follows the list)

H1: AI handles the task entirely on its own

H2: AI needs minimal human input

H3: Equal human-agent partnership

H4: AI needs substantial human input

H5: AI can’t function without continuous human involvement
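
As a rough, hypothetical sketch (my own illustration, not something from the paper), the scale could be encoded like this when auditing tasks; the TaskAudit shape and mismatch check are assumptions:

```typescript
// Hypothetical encoding of the Human Agency Scale (HAS).
// Level comments paraphrase H1–H5 above; everything else is illustrative.
enum HumanAgency {
  H1 = 1, // AI handles the task entirely on its own
  H2,     // AI needs minimal human input
  H3,     // Equal human-agent partnership
  H4,     // AI needs substantial human input
  H5,     // AI can't function without continuous human involvement
}

// Assumed shape for a task audit pairing worker preference with expert feasibility.
interface TaskAudit {
  task: string;
  workerPreferred: HumanAgency; // how much agency workers want to keep
  expertFeasible: HumanAgency;  // how little human input experts think is needed
}

// Flags one direction of the capability/preference mismatch (insight 2️⃣):
// AI could run with less human input than workers actually want.
const hasMismatch = (t: TaskAudit): boolean =>
  t.expertFeasible < t.workerPreferred;

const example: TaskAudit = {
  task: "Drafting meeting summaries", // invented example task
  workerPreferred: HumanAgency.H3,
  expertFeasible: HumanAgency.H2,
};
console.log(hasMismatch(example)); // true → flag for human review
```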
July 6, 2025 at 7:28 AM
Paper here: arxiv.org/pdf/2503.002...

Side note: I especially appreciated the researcher’s reflection on doing a solo-authored paper—and how it deepened her appreciation for working collaboratively with co-authors and her team.
May 4, 2025 at 8:33 PM
In short: Speculative tech in pop culture is a rich resource for rethinking how we design for real human needs in HCI.

Do I wish I could eat a konjac jelly and instantly understand every language instead of using an app? 100% yes.
May 4, 2025 at 8:33 PM
The takeaway: Human needs haven’t changed much over the decades—but the technologies used to meet them have. While AI, AR, and VR echo some of Doraemon’s inventions, his tools are more seamlessly embedded in everyday life, moving beyond screen-based, modern UI paradigms.
May 4, 2025 at 8:33 PM
For the unfamiliar: Doraemon is a robot cat from the 22nd century who travels back in time to help the hapless Nobita, armed with a seemingly endless supply of intuitive, problem-solving gadgets.
May 4, 2025 at 8:33 PM
An important chapter for anyone designing AI-enabled systems, drawing links between established AI design principles and how users form mental models.

Worksheets: pair.withgoogle.com/worksheet/me...

pair.withgoogle.com/guidebook/ch...
April 20, 2025 at 6:15 AM
➃ Account for user expectations of human-like interaction.

Communicate the nature and limits of the AI to set realistic user expectations and avoid unintended deception.

Try to strike a balance: cue the right interaction while limiting mismatched expectations and failures.
April 20, 2025 at 6:15 AM
➂ Plan for co-learning.

Implicit and explicit feedback improve AI and change the UX over time.

When the AI fails the first time, users will be disappointed, so provide a UX that fails gracefully and doesn't rely solely on AI (see the sketch below).

Remind and reinforce mental models, especially when user needs or journeys change.
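
A minimal sketch of that graceful-failure pattern in TypeScript; the function names and the rule-based fallback are my own illustration, not from the guidebook:

```typescript
// Graceful degradation: try the AI suggestion, but never let the core flow
// depend on it. All names here are hypothetical illustrations.
async function getSuggestion(
  fetchAiSuggestion: () => Promise<string>, // e.g. a model call that may fail
  ruleBasedDefault: string                  // non-AI fallback the UX can rely on
): Promise<string> {
  try {
    return await fetchAiSuggestion();
  } catch {
    // The journey still completes without AI.
    console.warn("AI unavailable, falling back to rule-based default");
    return ruleBasedDefault;
  }
}
```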
April 20, 2025 at 6:15 AM
➁ Onboard in stages.

Onboarding starts before users' first interaction and continues indefinitely.

- again, set the right expectation
- explain the benefit, not the technology
- use relevant and actionable 'inboarding' messages
- allow for tinkering and experimentation
April 20, 2025 at 6:15 AM
➀ Set expectations for adaptation.

One of the biggest opportunities for creating effective mental models of AI products is to identify and build on existing models, while teaching users the dynamic relationship between their input and product output.
April 20, 2025 at 6:15 AM