Michael Lin, MD PhD
@michaelzlin.bsky.social
Harvard → UCLA → HMS → UCSD → Associate Prof. of Neurobiology & Bioengineering at Stanford → Molecules, medicines, & SARS-CoV-2. Bad manners blocked.
"Your idea might not work, unlike these other proposals we get that use existing technology. So, lower score for approach."
(Gets it to work, submits proposal to use it...)
"You should have Dr. X as co-PI. He's good at using existing tech on this question. He just got lots of $$$ for it actually"
(Gets it to work, submits proposal to use it...)
"You should have Dr. X as co-PI. He's good at using existing tech on this question. He just got lots of $$$ for it actually"
November 6, 2025 at 1:35 AM
Actually they don't want to pay very much for them afterwards either
November 6, 2025 at 12:47 AM
Somewhere in there: thinking, reading, analyzing, advising, listening, presenting, writing papers.
Might be easier for others. Choosing to specialize in technology development is selecting difficulty mode for grant-writing... people want the tools but they don't want to pay for them in advance.
November 6, 2025 at 12:36 AM
Would you mind posting a link to the article; I couldn't find it. Thanks!
October 4, 2025 at 3:39 PM
I addressed this as well in the original thread. Thanks Christophe for linking to it
September 6, 2025 at 6:45 AM
Thus the arbitrary 95% standard and how it is applied leads to contradictory conclusions, making scientists seem hapless and clueless. So it harms public understanding and scientific support to insist on painting results in black or white rather than how they actually are: shades of gray.
August 28, 2025 at 3:44 PM
And this is not just an academic exercise. How many times do you read in the news there is no association between risk factor X and outcome Y, only to read the opposite a few months later? These inconsistencies are often due to these Type 2 errors of declaring no difference when there was one.
August 28, 2025 at 3:44 PM
It's more informative, accurate, and comprehensive than our current rules of saying yes or no when the answer is almost always different degrees of maybe. It would do justice to the concept of statistics, which is supposed to be the science of quantifying degrees of certainty.
August 28, 2025 at 3:44 PM
Then one can calmly and rationally consider whether that result provides some support for a hypothesis, together with what is mechanistically likely.
Again this would be for the 95% of non-clinical experiments that aren't addressing a hypothesis with treatment-changing or financial implications.
August 28, 2025 at 3:44 PM
This would be much more factual than "There was no significant difference between Groups A and B" or, even worse but too common, "There was no difference between Groups A and B".
August 28, 2025 at 3:44 PM
Allow papers and proposals to show the graph of outcome distributions by condition and to state any possible or likely differences by the actual confidence level. For example, "Group B had 50% higher levels than Group A on average; the distributions were 90% likely non-random".
August 28, 2025 at 3:44 PM
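As a minimal sketch of how such a confidence figure could be reported: a permutation test gives a direct estimate of how likely the observed separation between two groups is under random label assignment. (The test choice, function name, and data below are illustrative assumptions, not from the thread.)

```python
import random

def permutation_confidence(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Returns (observed mean difference, confidence that the difference
    is non-random, i.e. 1 - p)."""
    rng = random.Random(seed)
    observed = abs(sum(b) / len(b) - sum(a) / len(a))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # randomly reassign group labels
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pb) / len(pb) - sum(pa) / len(pa)) >= observed:
            hits += 1
    p = (hits + 1) / (n_iter + 1)  # add-one smoothing avoids p = 0
    return observed, 1 - p

# Illustrative data: Group B ~50% higher than Group A on average
group_a = [10.1, 9.8, 11.2, 10.5, 9.9]
group_b = [14.6, 15.8, 15.1, 16.0, 14.9]
diff, conf = permutation_confidence(group_a, group_b)
print(f"Mean difference: {diff:.2f}; confidence non-random: {conf:.1%}")
```

Reporting the resulting confidence as a percentage, rather than a pass/fail verdict at 95%, is exactly the kind of graded statement the post proposes.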
The defense of these arbitrary requirements is that they are necessary to prevent a high false-positive rate. But we don't have to generate a bunch of false negatives and throw out all discussion of actual likely differences to counteract that. There is a simple, easy, clear, and logical solution.
August 28, 2025 at 3:44 PM
Thus the arbitrary 95% threshold and its enforcement by data non-discussion leads to a lot of false negative conclusions. Essentially real differences are being suppressed and thrown aside if they don't get to 95% confidence. It's wasteful and leads to actual wrong conclusions.
August 28, 2025 at 3:44 PM
What makes the situation harmful is that we have imposed this arbitrary threshold of 95% confidence onto all experimental results, and reviewers for grants and papers are being instructed to not allow any discussion of differences if that threshold is not met.
August 28, 2025 at 3:44 PM
In reality most experiments where p values are calculated aren't powered to meet a predicted effect size, so are underpowered. And many actual differences are reported in a conceptually and statistically incorrect manner as "no difference" when it's "difference not reaching a 95% confidence level".
August 28, 2025 at 3:44 PM
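To see why so many experiments are underpowered, here is the standard normal-approximation sample-size formula for a two-sample comparison (a sketch; the function name is mine, and the formula assumes equal group sizes and variances):

```python
import math
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate n per group to detect a standardized effect size
    (Cohen's d) at the given alpha with the given power, using the
    normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_b = nd.inv_cdf(power)          # quantile for desired power
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

print(samples_per_group(0.5))  # medium effect: ~63 per group
print(samples_per_group(0.8))  # large effect: ~25 per group
```

Typical bench experiments with n = 3-5 per group are far below these numbers for anything but very large effects, which is why "no significant difference" so often just means "not enough samples".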
For other research it can also be worth it, say for the final conclusive hypothesis test in a preclinical study, to also set a rigorous 95% threshold, to get that level of certainty.
But let's be honest: 95% of the experiments out there for which p values are calculated don't need that...
August 28, 2025 at 3:44 PM
One place there is an absolute need for an arbitrary threshold is registrational clinical trials, where an adequate level of statistical confidence needs to be pre-agreed upon and then met to get approval.
August 28, 2025 at 3:44 PM