Brian Grellmann
@briangrellmann.bsky.social
💼 UX Research & Accessibility Lead in Finance
🏫 Industry Advisory Board HCI at 2 Universities
✍️ Posting summaries & reflections of my reading list
🐶 Rescue dog dad
If you're looking for an invite to Comet (the AI-powered browser that acts as a personal assistant) with Pro included, then you're in luck: pplx.ai/briangrell35...
Try Comet with Pro included
For a limited time, get access to Comet with a month of free Perplexity Pro
pplx.ai
October 21, 2025 at 4:06 PM
No way! Some researchers at IBM in Brazil have looked into exactly what I’ve been trying to figure out myself… how researchers use Obsidian as a “second brain” to manage knowledge 🧠📝

arxiv.org/pdf/2509.20187
September 25, 2025 at 7:29 AM
Wore a suit in central London for the first time and was offered so much cocaine.

Does 👔+🌆 = 💊?

My hypothesis: People in suits are more likely to be approached with illicit drugs than those in casual wear, as suits may signal disposable income, social capital, or lower perceived risk to dealers.
September 22, 2025 at 12:54 PM
📃 In Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education, Beale asks how universities should respond to the advent of LLMs.

There are clear benefits alongside risks of misuse. Which policies strike the right balance for the use of AI?
July 9, 2025 at 6:38 PM
📘 What do workers actually want AI agents to do?

A new paper from Stanford titled The Future of Work with AI Agents proposes a principled, survey-based framework to evaluate this, shifting the focus from technical capability to human desire and agency.

🧵
Paper: arxiv.org/pdf/2506.06576
July 6, 2025 at 7:28 AM
📘 In Doraemon’s Gadget Lab, Tram Tran explores the speculative tech of the beloved Japanese manga Doraemon through an HCI lens—categorising 379 gadgets by user needs, comparing them to today’s technologies, and asking how they might inspire future interaction design paradigms.
May 4, 2025 at 8:33 PM
The 🧠 Mental Models chapter of the 🌐 People + AI Guidebook explains that AI-enabled systems change over time, yet users' mental models may not keep pace with what a product can actually do.

Mismatched mental models lead to unmet expectations, frustration, and product abandonment.

4 key considerations 👇
April 20, 2025 at 6:15 AM
📘 In Guidelines for Human-AI Interaction, Amershi et al. present design principles for AI systems. Synthesised from 20+ years of research, the 18 guidelines provide a robust cross-domain foundation for creating human-AI interactions that are intuitive, trustworthy, and adaptive to real-world use.
April 15, 2025 at 12:44 PM