Björn Siepe
@bsiepe.bsky.social
PhD Student in Psychological Methods (University of Marburg)
Interested in time series, simulation studies & open science
https://bsiepe.github.io
Makes sense, thx! I'll make a note to link docs & metadata more clearly
For now, the best way to obtain the time series relevant to you is probably to download all datasets (still manageable) and filter them yourself. In the long term, with more data, we will work to enable more advanced filtering
October 27, 2025 at 7:15 PM
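For readers who want to try the filter-it-yourself route, here is a minimal R sketch. It assumes all datasets were already downloaded as CSV files into a local folder; the folder name and the variable `mood_valence` are hypothetical placeholders, not the actual openESM schema.

```r
# Minimal sketch, assuming all openESM datasets were downloaded as CSVs
# into a local folder. "openesm_data" and "mood_valence" are placeholders.
files <- list.files("openesm_data", pattern = "\\.csv$", full.names = TRUE)

# Read every dataset, keep only those that measure the variable of interest
datasets <- lapply(files, read.csv)
relevant <- Filter(function(d) "mood_valence" %in% names(d), datasets)

length(relevant)  # number of datasets containing the variable
```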
Amazing, that's great to hear! Feel free to let me know if you or hack participants have any feedback
October 27, 2025 at 3:16 PM
Thank you!
Yes, that refers to the maximum (see here: openesmdata.org/docs/data/#n...). The number of observations in my first post (> 740k) refers to actual non-missing observations.
We only used "time points" for brevity/consistency, but I agree this could be confusing & I'll likely change it
Data Documentation
Understanding openESM datasets and metadata
openesmdata.org
October 27, 2025 at 3:14 PM
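A small illustration of the distinction drawn above, on a made-up long-format ESM dataset: the metadata's "time points" is the maximum number of beeps, whereas the observation count only tallies non-missing responses.

```r
# Toy long-format ESM data: 2 persons, 5 beeps each, some missing responses
d <- data.frame(
  id   = rep(1:2, each = 5),
  beep = rep(1:5, times = 2),
  mood = c(3, NA, 4, 5, NA, 2, 2, NA, 3, 4)
)

# "Time points" in the metadata sense: the maximum number of beeps
max(tapply(d$beep, d$id, max))  # 5

# Observations in the ">740k" sense: non-missing responses only
sum(!is.na(d$mood))             # 7
```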
I see, that makes sense! I'll note it down on our list for future improvements
October 24, 2025 at 12:31 PM
Thanks for exploring openESM!
Do you mean the dates and locations at which data were collected for each dataset? If so, this information has not yet been included because it was often not clearly available. However, we do intend to add more metadata on the details of data collection in the future
October 24, 2025 at 8:31 AM
Another idea could be to write a consortium paper. For instance, everyone who contributes data could be included in a comprehensive paper on the database. I'm very curious to hear other ideas besides those relating to funding and awards
October 23, 2025 at 3:42 PM
I also hope that DBs can achieve that! I'm also still unsure how to best incentivize sharing & documentation, both as a scientific community in general, and as a DB maintainer in particular. I suppose that the broad adoption of DBs would considerably increase citations of datasets, which could help
October 23, 2025 at 3:42 PM
Thank you for sharing, Shirley! :)
October 22, 2025 at 7:52 PM
While study-level CIs differ, this made no difference for our overall results & pooled effect, so we kept this visualization
October 22, 2025 at 7:52 PM
Thank you!
We used a 2-step aggregation approach to get study-level effects & CIs here (as recommended by the package we used), but also provide an alternative visualization with shrinkage-based estimates & CIs in the online supplement
October 22, 2025 at 7:52 PM
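For readers unfamiliar with the approach, here is a minimal two-step sketch in R on made-up study-level correlations. `metafor` stands in for whatever package the authors actually used: step 1 converts each study's correlation to Fisher's z with a sampling variance, step 2 pools the study effects in a random-effects model.

```r
library(metafor)

# Hypothetical study-level correlations and sample sizes (not the paper's data)
r <- c(-0.45, -0.52, -0.48)
n <- c(120, 200, 150)

# Step 1: per-study effects (Fisher's z) and sampling variances
dat <- escalc(measure = "ZCOR", ri = r, ni = n)

# Step 2: random-effects pooling across studies
res <- rma(yi, vi, data = dat)

# Back-transform the pooled Fisher's z to a correlation with its CI
predict(res, transf = transf.ztor)
```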
We are deeply grateful to everyone who shares their ESM data.
Thanks to @jmbh.bsky.social, @matzekloft.bsky.social, @anabelbuechner.bsky.social, @yongzhangzzz.bsky.social, @eikofried.bsky.social, @danielheck.bsky.social for collaborating on this huge effort - we look forward to your feedback!
October 22, 2025 at 7:34 PM
Further details:
▶️Search and filter datasets on openesmdata.org
▶️Auto-generate R/Python code
▶️(Meta-)data are stored on Zenodo with DOIs (see the retrieval sketch below)
▶️Metadata and software on GitHub enable community contributions
▶️Contribution guidelines allow further database extensions so that openESM can continue to grow
October 22, 2025 at 7:34 PM
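As a rough illustration of the Zenodo bullet, the sketch below fetches a record's files through Zenodo's public REST API. The record ID is a placeholder, not a real openESM DOI; the dedicated R/Python packages are the intended access route.

```r
library(jsonlite)

# Placeholder record ID; look up the real one via a dataset's DOI on Zenodo
record_id <- "0000000"
rec <- fromJSON(paste0("https://zenodo.org/api/records/", record_id))

# Each entry in rec$files carries a file name (key) and a download link
for (i in seq_len(nrow(rec$files))) {
  download.file(rec$files$links$self[i], destfile = rec$files$key[i], mode = "wb")
}
```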
▶️Our example analysis shows how to use openESM: We estimated within-person correlations between positive and negative affect across 39 datasets (>500K observations; a toy sketch follows below)
▶️We find a robust negative correlation (−0.49 [−0.54, −0.42]) and outline ideas for future research building on this
October 22, 2025 at 7:34 PM
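To unpack what a within-person correlation is, here is a toy R sketch on simulated data: person-mean-center positive and negative affect, then correlate the centered scores. This conveys the general idea only, not the authors' exact estimation approach.

```r
# Simulate 50 persons with 20 beeps each
set.seed(1)
d <- data.frame(id = rep(1:50, each = 20))
d$pa  <- rnorm(nrow(d)) + rnorm(50)[d$id]   # positive affect + person intercept
d$neg <- -0.5 * d$pa + rnorm(nrow(d))       # negative affect, negatively related

# Person-mean-center both variables, then correlate the centered scores
d$pa_c  <- d$pa  - ave(d$pa,  d$id)
d$neg_c <- d$neg - ave(d$neg, d$id)
cor(d$pa_c, d$neg_c)  # pooled within-person correlation
```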
▶️In our introduction paper, we outline why large-scale analyses are important for substantive, design, and statistical research
▶️To make such research easier, we provide rich metadata for each dataset, plus dedicated R and Python packages to easily access and handle the data (a metadata-filtering sketch follows below)
October 22, 2025 at 7:34 PM
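A minimal sketch of metadata-driven dataset selection, assuming the combined metadata were exported to a local CSV; the column names (`n_participants`, `constructs`, `dataset_id`) are hypothetical placeholders, not the actual openESM fields.

```r
meta <- read.csv("openesm_metadata.csv")  # hypothetical local metadata export

# Keep datasets with at least 100 participants that measure affect
selected <- subset(
  meta,
  n_participants >= 100 & grepl("affect", constructs, ignore.case = TRUE)
)
selected$dataset_id  # candidates to fetch, e.g. via the R/Python packages
```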
▶️Individual ESM studies are often limited in size and diversity of population and measures
▶️Open data are scattered across repositories in different formats, impeding research into robustness, generalizability, and heterogeneity
▶️We aim to change this to enable large-scale, cumulative ESM research
October 22, 2025 at 7:34 PM
Sure! Thanks for that.
August 7, 2025 at 11:14 AM