Mark Ramos
@mframos.bsky.social
Teacher, Data Scientist, Gamer, Ph.D. Statistics. My views are my own.
That weakness may have been inherited from the training data. What proportion of people that you meet have the ability to admit that they don't know? I don't know.
September 9, 2025 at 12:35 PM
They say MTG has been dying since 1995. :D
August 25, 2025 at 10:44 PM
For more information, read our #openaccess paper in @accountabilityair.bsky.social here: www.tandfonline.com/doi/full/10....
Balanced examination of positive publication bias impact
Positive publication bias is the tendency to favor studies that reject null hypotheses for publication and is widely regarded as detrimental to research enterprise quality. However, this view overs...
www.tandfonline.com
August 14, 2025 at 11:02 AM
For sure. Required sample size does grow as the assumed effect size shrinks. I think your paper could benefit from more context on how expensive clinical trials get with respect to sample size. Small world, I saw Adrian's graphic on LinkedIn and discussed suggestions with him. www.linkedin.com/feed/update/...
Clinical trial biostatisticians do hypothesis testing differently than what you learned in Stats 101 - and you should pay attention!
You see, there is so much discussion around the idea that statisti...
www.linkedin.com
August 11, 2025 at 11:11 AM
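A minimal sketch of that cost curve, using statsmodels' power solver with illustrative settings (two-sided two-sample t-test, 80% power, alpha = 0.05; none of these numbers are from the thread):
```python
# Required n per arm grows sharply as the assumed effect size shrinks.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.8, 0.5, 0.2, 0.1):  # Cohen's d: large, medium, small, tiny
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8,
                           alternative="two-sided")
    print(f"d = {d}: about {n:.0f} participants per arm")
```
Roughly, n scales like 1/d^2, which is why halving the assumed effect quadruples the trial's size and cost.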
Thank you. Yes, it is an easy mistake to make. I like to say the problem with statistics isn’t that it is hard to understand, but rather that it is so easy to misunderstand.
August 10, 2025 at 11:28 AM
"On the other hand, if researchers and publishers are competent, good faith actors and the only risk in positive publication bias comes from inflation of type 1 error rate by chance, ....(we) show that the negative impact is outweighed by the much larger improvement in true positive rates."
August 10, 2025 at 1:48 AM
"On the other hand, if researchers and publishers are competent, good faith actors and the only risk in positive publication bias comes from inflation of type 1 error rate by chance, ....(we) show that the negative impact is outweighed by the much larger improvement in true positive rates."
Plus, people are free to propose other frameworks. Bayesian inference is one example. It isn't strictly better than a frequentist approach, but neither is it strictly worse. As always, context is key. Whatever you propose, it must be open to scrutiny on such matters as Type I and Type II error rate control.
August 9, 2025 at 4:42 PM
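One way to make that scrutiny concrete: simulate the long-run Type I error of a hypothetical Bayesian decision rule ("declare an effect when the posterior probability exceeds 0.95") under a conjugate normal model. All settings here are assumptions for illustration.
```python
# Frequentist audit of a Bayesian rule: generate data under theta = 0 and
# count how often the rule declares an effect anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma, prior_sd, n_sims = 30, 1.0, 1.0, 50_000
xbar = rng.normal(0.0, sigma / np.sqrt(n), n_sims)    # sample means under H0

# Posterior for theta with a N(0, prior_sd^2) prior and known sigma:
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (n / sigma**2) * xbar
p_effect = stats.norm.sf(0.0, loc=post_mean, scale=np.sqrt(post_var))

print("Type I error rate of the Bayesian rule:", (p_effect > 0.95).mean())
```
The point is not which framework wins; it is that any decision rule can, and should, be checked against error rates like this.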
Statistical evidence is not the only piece of evidence. Your sample mean, odds ratio, etc. do not change if a hypothesis test fails to reject H0. We only offer a reasonable framework backed by ways to control Type I error rate and power in the context of uncertainty and sampling variability.
August 9, 2025 at 4:36 PM
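A small illustration of that separation between estimate and decision, with made-up data:
```python
# The point estimate is the same number whether or not the test rejects;
# failing to reject only withholds the decision, it does not move the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.2, 1.0, 25)              # small true effect, small sample

t, p = stats.ttest_1samp(x, popmean=0.0)
print(f"sample mean = {x.mean():.3f}")     # the evidence (estimate)
print(f"p = {p:.3f} ->", "reject H0" if p < 0.05 else "fail to reject H0")
```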
That's why clinical trials have different stages. Earlier stages may be looking for any effect, but later stages (as mentioned in the paper) set standards for clinical relevance on various non-statistical bases (i.e., they test that the effect exceeds a clinically relevant threshold, or the treatment does not pass).
August 9, 2025 at 4:17 PM
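A sketch of testing against a relevance margin rather than against zero; the margin and data below are hypothetical:
```python
# Shifted null: H0: mean effect <= margin  vs  H1: mean effect > margin.
# Showing "some effect" is not enough; the effect must clear the bar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
margin = 0.3                               # minimum clinically relevant effect
x = rng.normal(0.5, 1.0, 200)              # observed treatment effects

t, p = stats.ttest_1samp(x, popmean=margin, alternative="greater")
print(f"mean = {x.mean():.3f}, p = {p:.4f}")
print("passes the clinical-relevance bar" if p < 0.05 else "does not pass")
```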
P-values and CIs are two parts of the same inference. Both are taught, used, and often misunderstood. The truth value of a hypothesis is typically unknown even after one has made a decision. The best we can hope for is some control over our error rates, which is why we set alpha for Type I error.
August 9, 2025 at 4:13 PM
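A quick check of that duality, assuming a one-sample t-test on made-up data: the 95% CI excludes the null value exactly when p < 0.05.
```python
# p-value and confidence interval as two views of the same inference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(0.4, 1.0, 30)

res = stats.ttest_1samp(x, popmean=0.0)
ci = res.confidence_interval(confidence_level=0.95)
excludes_null = not (ci.low <= 0.0 <= ci.high)

print(f"p = {res.pvalue:.4f}")
print(f"95% CI = ({ci.low:.3f}, {ci.high:.3f})")
print("CI excludes 0 exactly when p < 0.05:", excludes_null == (res.pvalue < 0.05))
```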