Laura Bronner
@laurabronner.bsky.social
Scientific director, Public Discourse Foundation || Senior applied scientist, IPL/ETH Zürich || Data, media, experiments || Formerly FiveThirtyEight quant editor.

www.laurabronner.com
This is so cool, congrats!!
September 16, 2025 at 5:17 PM
It seems that Disney never really knew what to do with 538 (Nate's take below), which feels like a real missed opportunity. I hope other outlets will take up the mantle, hire those laid off yesterday, and really invest in data and rigor in journalism - which is more important now than ever.
March 6, 2025 at 5:19 PM
This is a huge loss. I feel awful for everyone who was laid off, but I'm also just really sad that ABC News didn't appreciate the special blend of reporting chops, data skills, talent, and kindness they managed to amass. I think the *wrong* lesson to draw is that this blend isn't profitable -
March 6, 2025 at 5:19 PM
6) That said, 538 was also special for journalism - exemplified, perhaps, by the decision to have someone on staff whose entire purpose was to slow stuff down: work through code, question analyses, and be annoying about causal claims. They cared about getting stuff right, even if it took longer.
March 6, 2025 at 5:19 PM
5) At its best, journalism at 538 blended qualitative (reporting, deep understanding of the substance) with quantitative (data, advanced methods). Academics often think that good research is only done in academia. I think a lot of fantastic research is done in journalism.
March 6, 2025 at 5:19 PM
4) Understanding what kind of effort gets you 90% of the way to answering something - and whether that 90% is enough to say something meaningful - is something I should remind myself of over and over. Academia spends an inordinate amount of time on the last 10%. Often, it's not worth it.
March 6, 2025 at 5:19 PM
3) Good data is everything, and understanding data sources and their downsides is crucial for anyone who works with data. So much work went into collecting and auditing the data 538 used - it's a resource for people across (and beyond!) journalism.
bsky.app/profile/base...
Just a reminder that all the data FiveThirtyEight collected—polls, election results, and much more—is available for download (for now) on our GitHub page. github.com/fivethirtyei...
March 6, 2025 at 5:19 PM
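To make the quoted post concrete: a minimal sketch of loading one of those datasets into Python. The repository (github.com/fivethirtyeight/data) is real, but the specific file path below is illustrative and may have moved or changed since.

```python
# Sketch: load a FiveThirtyEight dataset straight from GitHub with pandas.
# The repo serves raw CSVs; this exact path is hypothetical and may not exist.
import pandas as pd

URL = (
    "https://raw.githubusercontent.com/fivethirtyeight/data/"
    "master/pollster-ratings/pollster-ratings.csv"  # hypothetical path
)

polls = pd.read_csv(URL)
print(polls.head())
```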
2) At the same time, data-centric doesn't necessarily mean complex. Many of the most interesting analyses (e.g. differences in means) are simple; the difficulty is in understanding the data and the substance well enough to ensure those analyses and comparisons are meaningful.
March 6, 2025 at 5:19 PM
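As an aside on the "simple analyses" point above: a minimal sketch of what a difference in means with a bootstrap confidence interval looks like in Python. The data, group labels, and numbers are invented for illustration; the hard part described above - knowing whether the comparison is meaningful - happens before any of this code runs.

```python
# Sketch: a difference in means with a 95% bootstrap confidence interval.
# All data below is simulated; group names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def diff_in_means(a, b, n_boot=10_000):
    """Point estimate and 95% bootstrap CI for mean(a) - mean(b)."""
    point = a.mean() - b.mean()
    boots = [
        rng.choice(a, size=a.size).mean() - rng.choice(b, size=b.size).mean()
        for _ in range(n_boot)
    ]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

# Simulated "survey responses" for two hypothetical groups.
group_a = rng.normal(52, 10, size=400)
group_b = rng.normal(48, 10, size=350)
print(diff_in_means(group_a, group_b))
```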
1) Quantitative description and comparison are key to understanding what's going on. Some of 538's most important work was descriptive (e.g. poll trackers, geographic mapping), and much of their contribution to political journalism was in normalizing a much more data-centric type of reporting.
March 6, 2025 at 5:19 PM
This is consistent with stricter pre-publication review, though, right? It's just more of a focus on preventing rather than correcting errors. (Errors in articles/charts were routinely caught by attentive readers, so the incentive to avoid them was strong.)
December 6, 2024 at 6:11 PM
Clearly 538 values the trade-off differently (or did while I was there), which I think is interesting. I wonder if this partially depends on where the blame for an error goes - the authors or the publication.
December 6, 2024 at 3:20 PM
It would have required a more in-depth check than this, since our code did produce our results/figures - it's just that the code contained an error. And while correcting the record is great, preventing such errors from being published is also important!
bsky.app/profile/adam...
@aeggers.bsky.social and I supervise a small team that checks 1) that basic documentation is in place 2) that tables & figures in the manuscript can be reproduced starting from raw data w deposited code.
We run, don't review the code, so won't normally catch flipped scales, dropped observations etc
December 6, 2024 at 1:35 PM
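For concreteness, a minimal sketch of the "run, don't review" check the quoted post describes, under assumed conventions (a deposit folder with a single entry script and a known list of expected output files). Every name and path here is hypothetical.

```python
# Sketch: reproduce a manuscript's outputs from raw data using deposited code.
# This only verifies the code runs and regenerates the files; it does not
# review the code itself, so flipped scales etc. would slip through.
import subprocess
from pathlib import Path

def check_reproduction(deposit, entry_script, expected_outputs):
    """Run the deposited entry script and confirm expected outputs appear."""
    # Delete any previously generated outputs so we know this run made them.
    for name in expected_outputs:
        (deposit / name).unlink(missing_ok=True)
    result = subprocess.run(
        ["python", entry_script], cwd=deposit, capture_output=True, text=True
    )
    if result.returncode != 0:
        print(result.stderr)
        return False
    missing = [n for n in expected_outputs if not (deposit / n).exists()]
    if missing:
        print(f"Not regenerated: {missing}")
        return False
    return True

# Hypothetical deposit layout:
# check_reproduction(Path("deposit"), "analysis.py", ["figure1.png", "table2.csv"])
```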
To put it in personal terms: I would have much preferred the error invalidating my Voting at 16 article to be caught in pre-publication code review, even though the resulting replication-and-extension collaboration probably couldn't have gone any better.
bsky.app/profile/laur...
A thread about being wrong:

5 years ago, we wrote a paper about how newly enfranchised 16-year-olds vote in Austria. But we were wrong.

This year, @elisabethgraf.bsky.social, @schnizzl.bsky.social, Sylvia Kritzinger and I are setting the record straight: authors.elsevier.com/c/1juT5xRaZk...
December 6, 2024 at 1:35 PM
Don't get me wrong, I think this is really cool! A way of creating positive incentives for something important that otherwise mostly has educational (part of a course) or negative (trying to prove something/someone wrong) incentives. But I don't think it's a substitute for preventing errors.
December 6, 2024 at 1:35 PM
Publishing already takes a lot of time and effort, so adding further hurdles is a fair concern. But in my experience, it definitely improves the quality of the output. People make mistakes (myself obviously included!) - proper code review should be seen as a service rather than a cost.
December 5, 2024 at 9:59 PM
On the one hand, this catches errors before publication, which is obviously important. On the other hand, it takes a lot of time - both for the quant editor and for the author, who has to ensure the code & decisions can be understood.
December 5, 2024 at 9:59 PM