Anna Langener
@annalangener.bsky.social
Postdoc @Dartmouth | Researching how to optimize the use of passive data (e.g., from smartphones & smartwatches) to predict mental health outcomes.
7/ Want to learn more? Join me for a workshop at the SIPS online conference, where I’ll dive deeper into these topics! ✨ #SIPS2025
March 5, 2025 at 5:31 PM
6/ To address these pitfalls, we present recommendations for aligning validation and evaluation strategies with the intended use case, and we provide a tool to help researchers check whether their strategies and goals are misaligned: annalangener.shinyapps.io/Justintime/
Just in Time or Just a Guess? Validating Prediction Models Based on Longitudinal Data
March 5, 2025 at 5:31 PM
5/ Third, selecting appropriate baseline models is key. Some models may look effective (e.g., AUC = 0.77) but actually underperform compared to simple baselines (e.g., AUC = 0.96). ⚖️
March 5, 2025 at 5:31 PM
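A minimal sketch of the kind of baseline 5/ warns about (hypothetical data, not the paper's simulation): predict that the next outcome simply equals the previous one. When outcomes are strongly autocorrelated, this naive rule is already hard to beat.

```python
# "Last observation carried forward" baseline: predict y(t) = y(t-1).
# A hypothetical, strongly autocorrelated binary outcome series:
series = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]

preds = series[:-1]   # baseline prediction for time t is the outcome at t-1
truth = series[1:]

accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
print(f"baseline accuracy: {accuracy:.2f}")  # -> baseline accuracy: 0.82
```

A candidate model's AUC or accuracy only means something relative to this kind of trivial benchmark.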
4/ Second, ensuring adequate variability in the outcome variable is crucial. If outcomes are stable, frequent predictions may offer little practical benefit for JITAIs.
March 5, 2025 at 5:31 PM
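One quick diagnostic for the pitfall in 4/ (a sketch with made-up data): count how often the outcome actually changes state within each person. A person who never switches offers nothing for moment-to-moment prediction, however often you predict.

```python
# Hypothetical binary outcome series for two people:
outcomes = {
    "stable":   [0, 0, 0, 0, 0, 0],  # never changes state
    "variable": [0, 1, 0, 0, 1, 1],  # switches state over time
}

# Count state changes between consecutive time points for each person
switch_counts = {
    person: sum(a != b for a, b in zip(ys, ys[1:]))
    for person, ys in outcomes.items()
}
print(switch_counts)  # {'stable': 0, 'variable': 3}
```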
3/ Centering predictor variables within individuals can improve within-person accuracy but may reduce overall performance.
March 5, 2025 at 5:31 PM
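A minimal illustration of the person-mean centering mentioned in 3/ (hypothetical numbers): subtracting each person's own mean removes stable between-person differences, so the predictor carries only within-person deviations.

```python
# Hypothetical daily predictor scores for two people
data = {
    "p1": [4, 5, 6, 5],
    "p2": [1, 2, 1, 2],
}

# Person-mean centering: subtract each person's own mean from their raw scores
centered = {}
for person, xs in data.items():
    mean = sum(xs) / len(xs)
    centered[person] = [round(x - mean, 2) for x in xs]

print(centered["p1"])  # [-1.0, 0.0, 1.0, 0.0]
```

After centering, p1 and p2 are on the same scale: a model can no longer exploit the fact that p1's raw scores are always higher than p2's.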
2/ First, models may perform well overall (AUC = 0.77), yet their ability to predict within-person change can be much lower (AUC = 0.56, SD = 0.11). For JITAIs, such a model cannot identify the right moments to deliver an intervention; it only discriminates between people.
March 5, 2025 at 5:31 PM
1/ Many researchers are focused on building prediction models for JITAIs. But a major challenge is the mismatch between model development, evaluation, and application. We use simulations to illustrate three pitfalls.
March 5, 2025 at 5:31 PM