Anna Queiroz
@annacqueiroz.bsky.social
Reposted by Anna Queiroz
How do groups behave in VR?

Monique Santoso coded 9,000 speech acts to develop the Virtual Reality Interaction Dynamics Scheme, a taxonomy of 10 speech acts (e.g., disagreements, context-dependent commentary). Prior speech acts and current nonverbal behavior predict group action.

vhil.stanford.edu/publications...
May 7, 2025 at 11:13 PM
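
The post describes predicting a group's next action from prior speech acts plus current nonverbal behavior. A minimal Python sketch of that framing, with invented speech-act labels, nonverbal features, and outcome classes (none of these names come from the paper):

# Hypothetical sketch, not the paper's code: classify the group's next
# action from a short history of coded speech acts plus current
# nonverbal measurements. All labels and features are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

SPEECH_ACTS = ["disagreement", "context_commentary", "question", "proposal"]

def featurize(prior_acts, nonverbal):
    """Bag-of-speech-acts over a recent history window, concatenated
    with current nonverbal measurements (e.g., distance, gaze)."""
    counts = np.array([prior_acts.count(a) for a in SPEECH_ACTS], dtype=float)
    return np.concatenate([counts, nonverbal])

# Toy training rows: (recent speech acts, nonverbal vector) -> next group action.
X = np.stack([
    featurize(["proposal", "question"], np.array([0.8, 0.1])),
    featurize(["disagreement", "disagreement"], np.array([1.5, 0.4])),
])
y = ["act_together", "keep_discussing"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([featurize(["proposal"], np.array([0.7, 0.2]))]))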
Reposted by Anna Queiroz
Years of work went into the large-scale study that honed the cognitive-load algorithm for HP Omnicept, measuring pupillometry, saccades, gaze direction, and heart-rate variability: 738 participants, 4 continents, ages 19-61, tasks varying in cognitive load. Now published!

vhil.stanford.edu/publications...
April 16, 2025 at 9:00 PM
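
The post describes fusing pupillometry, saccades, gaze direction, and heart-rate variability into a single cognitive-load estimate. A toy Python sketch of that kind of signal fusion; the feature names, weights, and logistic squashing are assumptions for illustration, not the Omnicept algorithm:

# Illustrative only: combine z-scored eye-tracking and cardiac features
# into a 0-1 load score. Weights are made up for the sketch.
import numpy as np

def cognitive_load(pupil_diameter_z, saccade_rate_z, gaze_entropy_z, hrv_rmssd_z):
    """Higher pupil dilation, saccade rate, and gaze entropy are commonly
    associated with higher load; HRV typically drops under load."""
    raw = (0.4 * pupil_diameter_z
           + 0.2 * saccade_rate_z
           + 0.2 * gaze_entropy_z
           - 0.2 * hrv_rmssd_z)          # note the sign flip for HRV
    return 1.0 / (1.0 + np.exp(-raw))    # squash to [0, 1]

print(cognitive_load(1.2, 0.5, 0.3, -0.8))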
Reposted by Anna Queiroz
New #cscw paper out with my amazing collaborators! In it, we analyzed which features most saliently predict turn-taking behaviors in VR 🗣️📢
Who speaks next?

@PortiaWang.bsky.social analyzed a VR dataset of 77 sessions and 1,660 minutes of group meetings over 4 weeks. Verbal & nonverbal history captured at the millisecond level predicted turn-taking at nearly 30% over chance. To appear @acm-cscw.bsky.social.

vhil.stanford.edu/publications...
April 28, 2025 at 9:25 PM
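
The post frames "who speaks next" as prediction over millisecond-level verbal and nonverbal history. A self-contained Python sketch of that framing on synthetic data; the sampling rate, window size, features, and labels are all assumptions, not the paper's pipeline:

# Hypothetical sketch: windowed talk-time and gaze-motion features,
# then an off-the-shelf classifier for the next-speaker label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
T, P = 10_000, 4                        # 10 s at 1 kHz, 4 participants (toy)
speaking = rng.integers(0, 2, (T, P))   # 1 = participant speaking at that ms
gaze_yaw = rng.normal(size=(T, P))      # per-participant gaze yaw

def window_features(t, win=2000):
    """Summarize the `win` ms of history ending at time t."""
    w = slice(max(0, t - win), t)
    return np.concatenate([
        speaking[w].mean(axis=0),                            # talk-time share
        gaze_yaw[w].mean(axis=0),                            # mean gaze yaw
        np.abs(np.diff(gaze_yaw[w], axis=0)).mean(axis=0),   # gaze-motion energy
    ])

# Toy labels: index of whoever speaks next after each sampled moment.
times = np.arange(2000, T, 500)
X = np.stack([window_features(t) for t in times])
y = rng.integers(0, P, len(times))
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
# With P equally likely speakers, chance is 1/P, so "nearly 30% over
# chance" means accuracy around 1/P + 0.30.
print(clf.score(X, y))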
Reposted by Anna Queiroz
Had a great time guest lecturing for Prof. Anna Queiroz’s Meaningful Connections class at UMiami! I shared my research on the embodied and psychological implications of avatar representation in social VR and then we hopped into VRChat together, where students experienced avatar embodiment firsthand.
March 5, 2025 at 2:57 AM