🏫 Industry Advisory Board member for HCI at 2 universities
✍️ Posting summaries & reflections on my reading list
🐶 Rescue dog dad
⭐ strategies for building & exploring personal knowledge bases
⭐ how retrieval shapes the way people create & maintain notes
⭐ where AI could support knowledge work in the future
arxiv.org/pdf/2506.22231
1. Redesign assessments to emphasise process and originality
2. Enhance AI literacy for staff and students
3. Implement multi-layered enforcement and detection
4. Develop clear and detailed AI usage guidelines
—
So what can universities do?
⛔️ It also presents risks: misuse is prevalent in student work, and forensic AI detection has clear limitations.
Responsible AI should:
✅ Centre human agency
✅ Align AI design with worker preferences
✅ Recognise where human strengths truly shine
The demand for information-processing skills is shrinking, while interpersonal and organisational skills are concentrated in tasks that demand high human agency.
Could this have implications for training, hiring, and designing with AI in mind?
2️⃣ There are mismatches between what AI can do and what workers want it to do
4️⃣ There’s a broader skills shift underway: from information-processing to interpersonal competence
H1: AI handles the task entirely on its own
H2: AI needs minimal human input
H3: Equal human-agent partnership
H4: AI needs substantial human input
H5: AI can’t function without continuous human involvement
Side note: I especially appreciated the researcher’s reflection on doing a solo-authored paper—and how it deepened her appreciation for working collaboratively with co-authors and her team.
Do I wish I could eat a konjac jelly and instantly understand every language instead of using an app? 100% yes.
Worksheets: pair.withgoogle.com/worksheet/me...
pair.withgoogle.com/guidebook/ch...
Communicate the nature and limits of the AI to set realistic user expectations and avoid unintended deception.
Try to find the balance between cueing the right interactions and limiting mismatched expectations or failures.
Implicit and explicit feedback improve AI and change the UX over time.
When the AI fails the first time, users will be disappointed, so provide a UX that fails gracefully and doesn't rely on AI.
Remind and reinforce mental models, especially when user needs or journeys change.
Onboarding starts before users' first interaction and continues indefinitely.
- again, set the right expectation
- explain the benefit, not the technology
- use relevant and actionable 'inboarding' messages
- allow for tinkering and experimentation
One of the biggest opportunities for creating effective mental models of AI products is to identify and build on existing models, while teaching users the dynamic relationship between their input and product output.