Aditya Ponnada
adityaponnada.bsky.social
Sr. Researcher, MongoDB. ex-Spotify Research.

🔬 in HCI + Personalization + Experience Sampling.

With great power comes great difficulty in polynomial factorization!

Website: https://adityaponnada.github.io//
+ This work was funded by the NIH. It shows how funding agencies not only contribute to public health but also enable new tools and methods for data collection and AI development.
July 6, 2025 at 1:33 PM
Congratulations to all the co-authors, @shirlen3.bsky.social , Jixin Li, @genevievedunton.bsky.social , Wei-Lin Wang, Don Hedeker, and Stephen Intille. The details of the TIME study can be found here: timestudydocumentation.github.io/docs/build/h... (Reach out for questions or more info 😀)
TIME study documentation
timestudydocumentation.github.io
July 6, 2025 at 1:25 PM
This is one of the most intense EMA data collection studies leveraging smartphones and smartwatches. The μEMA method can be leveraged reliably for large-scale personalized data collection, just-in-time adaptive interventions, and human-in-the-loop ML that requires human feedback.
July 6, 2025 at 1:25 PM
Finally, if you are wondering whether μEMA and EMA collect similar data: we compared the user-level variability captured by μEMA and EMA across 11 affect-based constructs and found a moderate-to-strong positive correlation between the two methods' variability across constructs.
July 6, 2025 at 1:25 PM
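The variability comparison above can be sketched in a few lines. This is a minimal illustration with simulated toy data, not the study's actual analysis: the user counts, the choice of per-user standard deviation as the variability measure, and the rank-based (Spearman-style) correlation are all my assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one construct, 20 simulated users (the real study had 11
# constructs and N = 177). Each user has a latent within-person variability
# that both methods should recover.
n_users = 20
true_sd = rng.uniform(0.5, 2.0, n_users)
uema = [rng.normal(0, sd, 200) for sd in true_sd]  # many brief micro-prompts
ema = [rng.normal(0, sd, 50) for sd in true_sd]    # fewer, longer surveys

# User-level variability = standard deviation of each user's responses.
sd_uema = np.array([x.std(ddof=1) for x in uema])
sd_ema = np.array([x.std(ddof=1) for x in ema])

def spearman(a, b):
    """Spearman rho via Pearson correlation of ranks (assumes no ties)."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(sd_uema, sd_ema)
print(rho)  # strongly positive: both methods track the same latent variability
```

Because both arms sample the same latent per-user variability, the rank correlation between the two sets of SDs comes out strongly positive, mirroring the direction of the study's finding.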
Third, when we measured user burden at the end of 12 months of data collection (among those who completed the study), μEMA was still perceived as less burdensome, even though this group likely carries some survivor bias.
July 6, 2025 at 1:25 PM
Second, we observed that regardless of the users' engagement with the data collection study (e.g., those who completed vs. withdrew vs. were unenrolled), μEMA was consistently perceived as less burdensome, despite a much higher interruption rate longitudinally.
July 6, 2025 at 1:25 PM
This means users who withdrew or were unenrolled because of poor engagement with EMA were twice as likely to answer μEMA surveys in real-world settings. We also suspect some survivor bias among those users who successfully completed 12 months of data collection.
July 6, 2025 at 1:25 PM
First, we found that μEMA response rates were highest among those users who were unenrolled by research staff or voluntarily withdrew from data collection because of EMA burden. This response rate difference was negligible among those who completed 12 months of data collection.
July 6, 2025 at 1:25 PM
As a result, we modeled ~1.3 million μEMA surveys and 14.9K EMA surveys collected across N = 177 participants, resulting in ~50K data collection days.
July 6, 2025 at 1:25 PM
But for μEMA, each interruption presented only one micro-question with a yes/no type answer that can be responded to with a quick micro-interaction (taking hardly 2 seconds). In EMA, users answered long surveys with multiple back-to-back questions.
July 6, 2025 at 1:25 PM
We used data collected in the TIME study, where users responded to surveys using μEMA and EMA in a burst-based longitudinal experiment. The μEMA method collected data on a smartwatch 4 times/hr for ~270 days. The EMA method collected data once/hr for ~90 days.
July 6, 2025 at 1:25 PM
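A back-of-envelope sketch of the interruption load per participant implied by the schedule above. The ~16 waking hours/day figure is my assumption, not from the thread; actual prompt counts depend on the study's prompting windows.

```python
# Assumed waking window during which prompts are delivered.
waking_hours = 16

# μEMA: 4 micro-prompts/hr on the smartwatch for ~270 days.
uema_prompts = 4 * waking_hours * 270
# EMA: 1 full survey/hr on the smartphone for ~90 days.
ema_prompts = 1 * waking_hours * 90

print(uema_prompts)                 # 17280 micro-surveys per participant
print(ema_prompts)                  # 1440 full surveys per participant
print(uema_prompts / ema_prompts)   # 12.0x more interruptions under μEMA
```

Even under this rough assumption, μEMA interrupts an order of magnitude more often, which is what makes the "less burdensome despite more interruptions" finding notable.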
Most of my research involved personalizing experience sampling methods. We proposed a novel method of using smartwatch micro-interactions to collect self-report data at scale. One paper showed how smartwatches may be less prone to non-response biases in longitudinal studies. dl.acm.org/doi/abs/10.1...
Contextual Biases in Microinteraction Ecological Momentary Assessment (μEMA) Non-response | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Ecological momentary assessment (EMA) is used to gather in-situ self-report on behaviors using mobile devices. Microinteraction EMA (μEMA), is a type of EMA where each survey is only one single questi...
dl.acm.org
December 5, 2024 at 5:24 AM
Also, given that the data was collected during the platform's early adoption, will it change the way people use it?
November 27, 2024 at 3:01 PM
I’m finding this starter pack directory useful for rebuilding my network. blueskydirectory.com
The Ultimate Directory of tools and applications for Bluesky
A curated collection of all things relating to the Blue Sky social media platform.
blueskydirectory.com
November 27, 2024 at 4:34 AM
I’m treating the starter pack labels as a broad intent search query, so posts from the child accounts that are not relevant can be automatically hidden from my timeline. Just thinking aloud.
November 27, 2024 at 4:13 AM
Yeah, that would be nice. Right now those tabs are manual. But also, say I follow an “HCI researchers” starter pack: I’m following folks with the expectation that it’ll be about relevant HCI research. In a perfect world, I’d be happy not to see non-HCI posts from those accounts.
November 27, 2024 at 4:13 AM
Also wondering how these EMA surveys are delivered. On a mobile phone, numeric rating scales tend to sit below the fold, biasing responses. A fully labeled 5-point scale has usually worked better for us, though of course you compromise on sensitivity.
November 26, 2024 at 12:48 AM
Hey Chris! Fellow Quant UXR here! Can you add me to the pack? 🙏
November 25, 2024 at 11:10 PM