Tamkinat Rauf
@tsrauf.bsky.social
Asst Prof of Sociology @ uwsoc.bsky.social | Interests: happiness; inequality; social psych; genomics; open science | www.tamkinatrauf.com
I would love to see if findings replicate once you remove keywords that obviously indicate that the researchers had some statistically significant and theoretically important findings.
October 11, 2025 at 4:31 PM
Might there be a chance that the researchers shown to be publishing in high-impact journals (the indicator of "quality" here) after the ChatGPT release are actually finding more "significant" results for unrelated reasons (e.g., luck, better research ideas)?
October 11, 2025 at 4:31 PM
The authors could have learned a thing or two from sociology! www.journals.uchicago.edu/doi/10.1086/...
Why Do Liberals Drink Lattes? | American Journal of Sociology: Vol 120, No 5
October 7, 2025 at 12:40 PM
There may be a trade-off between processing large amounts of data and examining data carefully, given limited cognitive resources. By the same logic, feedback from 1-2 careful readers may be more useful than feedback from several readers who lack the skills or willingness to appropriately engage with your work.
October 4, 2025 at 4:31 PM
But can we reduce noise in the data we do have? I think yes, and much less advice exists out there about how to do that. I think we can reduce noise through thoughtful, unemotional reflection about the data that we already have. This means not necessarily reading more, but reading carefully.
October 4, 2025 at 4:31 PM
How to get more data? Read more. Write & submit more. And get tons of feedback from others before submitting. We've all heard this advice.
October 4, 2025 at 4:31 PM
Ideally, we want to adjust the brain's model so it reflects reality as closely as possible. To do that, we need to improve the model by either giving our brain more data or less noisy data.
October 4, 2025 at 4:31 PM
Interpretations should vary case-by-case. But, in practice, I've noticed that the same people tend to have the same interpretations regardless of the specifics of the case (which makes sense, given the Bayesian brain!). This fallacy is especially common among grad students w/ less publishing experience.
October 4, 2025 at 4:31 PM
Interpretation 1: This is a terrible paper. Reaction: Radical rewrite.
Interpretation 2: Bad luck. Reaction: Do nothing.
Interpretation 3: Paper is OK, but there's room to improve. Reaction: Some rewriting.
October 4, 2025 at 4:31 PM
How does this apply to publishing? Take the example of journal rejections. I think there are 3 ways in which we broadly interpret and thus react:
October 4, 2025 at 4:31 PM
To summarize the key idea: our brain is a Bayesian machine trying to iterate toward the best-fitting model of the world. Sometimes we over-interpret random correlations. Other times we desensitize ourselves to the environment and miss important causal info.
October 4, 2025 at 4:31 PM
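The thread's two levers for improving the brain's model, more data or less noisy data, can be made concrete with a toy example (my own sketch, not the author's): a conjugate normal-normal Bayesian update, where either lever tightens the posterior over the quantity being estimated.

```python
# Toy sketch of the thread's point: in a Gaussian Bayesian update,
# posterior precision = prior precision + n / noise_variance, so you
# can shrink uncertainty either by raising n (more data) or by
# lowering the observation noise (less noisy data).

def posterior_sd(prior_sd: float, noise_sd: float, n: int) -> float:
    """Posterior std. dev. of a Gaussian mean after n observations
    with known observation noise (conjugate normal-normal update)."""
    prior_precision = 1.0 / prior_sd**2
    data_precision = n / noise_sd**2
    return (prior_precision + data_precision) ** -0.5

baseline   = posterior_sd(prior_sd=1.0, noise_sd=1.0, n=4)
more_data  = posterior_sd(prior_sd=1.0, noise_sd=1.0, n=16)  # read/submit more
less_noise = posterior_sd(prior_sd=1.0, noise_sd=0.5, n=4)   # read more carefully

# Both levers shrink uncertainty relative to the baseline.
print(baseline, more_data, less_noise)
```

All numbers here are illustrative; the point is only that `more_data` and `less_noise` both come out below `baseline`, matching the claim that careful reading can substitute for sheer volume.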