Danqing Shi
@danqingshi.bsky.social
Human-Computer Interaction, Human-AI Interaction, Visualization
University of Cambridge
https://sdq.github.io
Thrilled to share our #UIST2025 research! We investigate how the decomposition principle can improve human feedback for LLM alignment. In a 160-participant study, our tool DxHF increases feedback accuracy by 4.7%.
👉 sdq.github.io/DxHF

With Furui, Tino, @oulasvirta.bsky.social, and @elassady.bsky.social
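To make the idea concrete, here is a minimal sketch of decomposition-based feedback: instead of judging two long responses holistically, each one is split into short claims that are judged one at a time and then aggregated. The splitter and aggregation rule below are illustrative assumptions, not the actual DxHF implementation.

```python
# Minimal sketch of decomposition-based preference feedback.
# The sentence splitter and sum-of-claims aggregation are assumptions,
# not DxHF's actual method.

def split_into_claims(response: str) -> list[str]:
    """Stand-in decomposer: one claim per sentence."""
    return [s.strip() for s in response.split(".") if s.strip()]

def preference(response_a: str, response_b: str, judge) -> str:
    """Aggregate per-claim judgments (1 = valid, 0 = invalid) into a choice."""
    score_a = sum(judge(c) for c in split_into_claims(response_a))
    score_b = sum(judge(c) for c in split_into_claims(response_b))
    return "A" if score_a >= score_b else "B"

# judge() stands in for a human rater checking one short claim at a time.
print(preference("Paris is in France. It is the capital.",
                 "Paris is in Spain. It is the capital.",
                 judge=lambda claim: int("Spain" not in claim)))
```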
September 17, 2025 at 7:32 PM
7/ Check out the project page for more details:
🔗 typoist.github.io
👨‍💻 Danqing Shi @danqingshi.bsky.social, Yujun Zhu, Francisco Erivaldo Fernandes Junior, Shumin Zhai, and Antti Oulasvirta @oulasvirta.bsky.social
February 27, 2025 at 7:01 AM
6/ Typoist marks a notable divergence from the data-driven approaches popular today: by explicitly modeling the causes of errors, instead of just “parroting” statistically plausible typographical errors, it takes a glass-box rather than a black-box approach.
February 27, 2025 at 7:01 AM
4/ We built a visualization-based exploration tool on top of the model to help practitioners and researchers simulate error behaviors. It lets users fine-tune the model manually.
February 27, 2025 at 7:01 AM
3/ How does Typoist work?

Typoist extends the computational rationality framework ( crtypist.github.io ) for touchscreen typing. It simulates eye & finger movements and predicts how users detect & correct errors.
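A toy version of that loop, with all rates invented for illustration (not Typoist's fitted parameters): noisy keystrokes, occasional proofreading glances, and backspace-based correction.

```python
import random

# Toy supervisory typing loop in the spirit described above.
# SLIP_RATE and PROOFREAD_RATE are invented, not Typoist's parameters.

SLIP_RATE = 0.05      # chance a keystroke lands on the wrong key
PROOFREAD_RATE = 0.3  # chance the eyes check the text field after a keystroke

def noisy_press(ch: str) -> str:
    """Motor execution: usually the intended key, sometimes a random one."""
    if random.random() < SLIP_RATE:
        return random.choice("abcdefghijklmnopqrstuvwxyz ")
    return ch

def type_sentence(intended: str) -> str:
    typed: list[str] = []
    i = 0
    while i < len(intended):
        typed.append(noisy_press(intended[i]))
        i += 1
        if random.random() < PROOFREAD_RATE:
            # Gaze shifts to the text field; trailing errors are found and fixed.
            while typed != list(intended[:len(typed)]):
                typed.pop()     # backspace
                i = len(typed)  # resume typing from the corrected position
    return "".join(typed)

print(type_sentence("hello world"))
```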
February 27, 2025 at 7:01 AM
2/ Typing errors are more than just “fat finger” errors. They come in three main forms:

🔹Slips - motor execution deviates from the intended outcome;
🔹Lapses - memory failures;
🔹Mistakes - incorrect or partial knowledge.

Typoist captures them all!
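As a toy illustration of how the three causes produce different surface errors (the keyboard layout, misspelling table, and sampling below are assumptions, not Typoist's internals):

```python
import random

# Toy generators for the three error causes described above.

def slip(word: str) -> str:
    """Motor slip: a finger lands on a neighboring key."""
    i = random.randrange(len(word))
    neighbors = {"a": "sq", "e": "wr", "o": "ip"}  # tiny stand-in key layout
    wrong = random.choice(neighbors.get(word[i], "x"))
    return word[:i] + wrong + word[i + 1:]

def lapse(word: str) -> str:
    """Memory lapse: a character is forgotten entirely."""
    i = random.randrange(len(word))
    return word[:i] + word[i + 1:]

def mistake(word: str) -> str:
    """Knowledge mistake: a plausible but wrong spelling."""
    misspellings = {"receive": "recieve", "separate": "seperate"}
    return misspellings.get(word, word)

for cause in (slip, lapse, mistake):
    print(cause.__name__, cause("receive"))
```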
February 27, 2025 at 7:01 AM
1/ Why do people make so many errors in touchscreen typing, and how do they fix them?

Our #CHI2025 paper introduces Typoist, a computational model that simulates human typing errors arising from perception, motor control, and memory. 📄 arxiv.org/abs/2502.03560
February 27, 2025 at 7:01 AM
7/ Check out the project page for more details:
🔗 chart-reading.github.io
👨‍💻 Danqing Shi @danqingshi.bsky.social, Yao Wang, Yunpeng Bai, Andreas Bulling, and Antti Oulasvirta @oulasvirta.bsky.social
February 24, 2025 at 11:53 AM
5/ We tested Chartist against real human eye-tracking data. It outperformed existing models (UMSS, DeepGaze III, and a VQA model) at simulating task-driven gaze movement on visualizations.
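For readers who want to try something similar: one generic way to score a simulated scanpath against human gaze is a sequence distance over fixations. The sketch below uses dynamic time warping, which is only an illustrative choice and may differ from the metrics used in the paper.

```python
import math

# Generic scanpath comparison: dynamic time warping over fixation
# coordinates. Illustrative only; the paper's evaluation may differ.

def dtw(path_a, path_b):
    """DTW distance between two fixation sequences of (x, y) points."""
    n, m = len(path_a), len(path_b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(path_a[i - 1], path_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

human = [(120, 80), (300, 90), (310, 220)]
model = [(115, 85), (305, 95), (290, 210)]
print(dtw(human, model))  # lower = closer to the human scanpath
```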
February 24, 2025 at 11:53 AM
3/ How does Chartist work?

Chartist uses a hierarchical gaze control model with:
🔹A cognitive controller (powered by LLMs) that reasons about the task-solving process;
🔹An oculomotor controller (trained via reinforcement learning) that simulates detailed gaze movements.
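A minimal sketch of that two-level loop, with invented function names and a lookup table standing in for the trained components (none of this is Chartist's actual interface):

```python
# Hypothetical two-level gaze control loop; names and interfaces are
# assumptions for illustration, not Chartist's actual code.

def cognitive_controller(task: str, elements: list[str]) -> list[str]:
    """Stand-in for the LLM planner: decides which chart elements to inspect."""
    # For "find the maximum bar": check the axis first, then scan the bars.
    return ["y-axis", *elements]

def oculomotor_controller(target: str, positions: dict) -> tuple:
    """Stand-in for the RL policy: turns a target into a fixation point."""
    # A trained policy would add saccade noise, undershoot, and corrections.
    return positions[target]

positions = {"y-axis": (40, 150), "bar-2019": (120, 180), "bar-2020": (200, 120)}
plan = cognitive_controller("find the maximum bar", ["bar-2019", "bar-2020"])
scanpath = [oculomotor_controller(t, positions) for t in plan]
print(scanpath)  # simulated fixation sequence
```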
February 24, 2025 at 11:53 AM
2/ Given a chart + a task,
🧐 Want to find a specific value?
🔍 Need to filter relevant data points?
📈 Looking for extreme values?
Chartist predicts human-like eye movements, simulating how people move their gaze to complete these tasks.
February 24, 2025 at 11:53 AM
1/ How do people read charts when they have a specific task in mind? Their gaze isn’t random!
Our #CHI2025 paper introduces Chartist, the first model designed to simulate these task-driven eye movements. 📄 arxiv.org/abs/2502.03575
February 24, 2025 at 11:53 AM