Michael Lin, MD PhD
@michaelzlin.bsky.social

Harvard → UCLA → HMS → UCSD → Associate Prof. of Neurobiology & Bioengineering at Stanford → Molecules, medicines, & SARS-CoV-2. Bad manners blocked.

Michael Z. Lin is a Taiwanese-American biochemist and bioengineer. He is a professor of neurobiology and bioengineering at Stanford University. He is best known for his work on engineering optically and chemically controllable proteins.

"Your idea might not work, unlike these other proposals we get that use existing technology. So, lower score for approach."

(Gets it to work, submits proposal to use it...)

"You should have Dr. X as co-PI. He's good at using existing tech on this question. He just got lots of $$$ for it actually"

Actually they don't want to pay very much for them afterwards either

Somewhere in there: thinking, reading, analyzing, advising, listening, presenting, writing papers.

Might be easier for others. Choosing to specialize in technology development is selecting difficulty mode for grant-writing... people want the tools but they don't want to pay for them in advance.

Was up until 5:30 am writing a grant, then up again at 7:30 for another full day to finish it today. Now done! 🎉

As academics know, it's not one job. It might be 4. With funding rates at 5%, grant-writing is 1 full-time job. Then there's letters, reviews, committee work, teaching — endless deadlines

Reposted by Michael Z. Lin

Excited to share our latest @nature.com: How does naloxone (Narcan) stop an opioid overdose? We determined the first GDP-bound μ-opioid receptor–G protein structures and found naloxone traps a novel "latent" state, preventing GDP release and G protein activation.💊🧪 🧵👇 www.nature.com/articles/s41...

Reposted by Michael Z. Lin

A pan-KRAS inhibitor and its derived degrader elicit multifaceted anti-tumor efficacy in KRAS-driven cancers www.cell.com/cancer-cell...

No problem!

My daughter made a series of Halloween cats using air clay. Can you recognize them all?

Would you mind posting a link to the article? I couldn't find it. Thanks!

Had the pleasure of visiting Prague as part of an advisory commission for the Czech Academy of Sciences Institute of Biotechnology. Got to check out exciting science and the impressive ultra-high resolution MS machine.

Great to see people working hard to expand knowledge, with public support too!

I cannot overstate how remarkable it is that under GOP rule, US federal health regulations have been captured by fringe crackpots who espouse views that the vast majority of the US public—and nearly 100% of health professionals—reject.

Gift link:
Kennedy’s Vaccine Panel Votes to Limit Access to Covid Shots
www.nytimes.com

First clouds over Stanford since spring

I addressed this as well in the original thread. Thanks Christophe for linking to it

Thus the arbitrary 95% standard and how it is applied lead to contradictory conclusions, making scientists seem hapless and clueless. So it harms public understanding and support for science to insist on painting results in black or white rather than how they actually are: shades of gray.

And this is not just an academic exercise. How many times do you read in the news there is no association between risk factor X and outcome Y, only to read the opposite a few months later? These inconsistencies are often due to these Type 2 errors of declaring no difference when there was one.

It's more informative, accurate, and comprehensive than our current rules of saying yes or no when the answer is almost always different degrees of maybe. It would do justice to the concept of statistics, which is supposed to be the science of quantifying degrees of certainty.

Then one can calmly and rationally consider whether that result provides some support for a hypothesis, together with what is mechanistically likely.

Again this would be for the 95% of non-clinical experiments that aren't addressing a hypothesis with treatment-changing or financial implications.

This would be much more factual than "There was no significant difference between Groups A and B" or, even worse but too common, "There was no difference between Groups A and B".

Allow papers and proposals to show the graph of outcome distributions by condition and to state any possible or likely differences by the actual confidence level. For example, "Group B had 50% higher levels than Group A on average; the distributions were 90% likely non-random".
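
Just to make that concrete, here is a minimal sketch in Python (numpy/scipy) of how such a sentence could be generated. The group values are hypothetical placeholders, and reporting 100 × (1 − p) as the confidence level simply mirrors the wording of the example above rather than any standard convention.

import numpy as np
from scipy import stats

# Hypothetical measurements for two conditions
group_a = np.array([10.1, 9.8, 11.2, 10.5, 9.9])
group_b = np.array([15.8, 14.6, 16.9, 15.2, 14.1])

# Effect size expressed as a percent difference in means
pct_diff = 100 * (group_b.mean() - group_a.mean()) / group_a.mean()

# Welch's t-test; express the result as a confidence level instead of a yes/no
t_stat, p_val = stats.ttest_ind(group_b, group_a, equal_var=False)
confidence = 100 * (1 - p_val)

print(f"Group B had {pct_diff:.0f}% higher levels than Group A on average; "
      f"the difference was non-random at the {confidence:.0f}% confidence level.")

Strictly speaking 1 − p is not the probability that the effect is real, but the graded phrasing is one way to convey the shades-of-gray reporting argued for here.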

The defense of these arbitrary requirements is that they are necessary to prevent a high false-positive rate. But we don't have to generate a bunch of false negatives and throw out all discussion of actual likely differences to counteract that. There is a simple, easy, clear, and logical solution.

Thus the arbitrary 95% threshold and its enforcement by data non-discussion leads to a lot of false negative conclusions. Essentially real differences are being suppressed and thrown aside if they don't get to 95% confidence. It's wasteful and leads to actual wrong conclusions.

What makes the situation harmful is that we have imposed this arbitrary threshold of 95% confidence onto all experimental results, and reviewers for grants and papers are being instructed to not allow any discussion of differences if that threshold is not met.

In reality most experiments where p values are calculated aren't powered to meet a predicted effect size, so are underpowered. And many actual differences are reported in a conceptually and statistically incorrect manner as "no difference" when it's "difference not reaching a 95% confidence level".
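
As a rough illustration of how underpowered typical small-n experiments are, here is a short sketch using statsmodels' TTestIndPower. The sample sizes and the assumed effect size (Cohen's d = 1.0, already a large effect) are hypothetical choices, not taken from any particular study.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sample t-test at alpha = 0.05, assuming a large effect (Cohen's d = 1.0)
for n in (3, 5, 10, 20):
    power = analysis.power(effect_size=1.0, nobs1=n, alpha=0.05)
    print(f"n = {n} per group: power = {power:.2f}")

# Per-group sample size needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.8)
print(f"About {n_needed:.0f} per group needed for 80% power")

Under these assumptions, groups of three to five have well under 50% power even for a full standard-deviation difference.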

For other research, say the final conclusive hypothesis test in a preclinical study, it can also be worth setting a rigorous 95% threshold to get that level of certainty.

But let's be honest: 95% of the experiments out there for which p values are calculated don't need that...

One place there is an absolute need for an arbitrary threshold is registrational clinical trials where an adequate level of statistical confidence needs to be pre-agreed upon, then met to get approval.

I think having an arbitrary threshold for statistical significance does more harm than good. It creates artificially black-or-white conclusions (there was a significant difference or not, with the word "significant" often omitted, leading to bad misunderstandings) when knowledge is actually all gray.

“A legitimate PhD-level expert in anything,” they said.

“Show me a diagram of the US presidents since FDR, with their names and years in office under their photos,” I said.

Measles likely came from cows (via rinderpest) around the 6th century BCE.

For 2,500 years, we didn’t evolve superhuman resistance—children just died. Real protection only came in the 1960s, with vaccines.

Our superpower isn’t evolving into superhumans. It’s outthinking pathogens.
Measles virus and rinderpest virus divergence dated to the sixth century BCE
Measles virus diverged from rinderpest virus in the sixth century BCE, indicating an early origin for human measles.
www.science.org

And these insights from imaging fast interneuron spiking over several days are something that only genetically encoded voltage indicators can provide.

Thus interneurons do learn, but there is a hierarchy of specificity, where pyramidals > PV > SST. And the role of PV appears to be to engage in negative feedback that enhances contrast between odor-encoding pyramidals (required to link the memory of CS and US) and non-encoding pyramidals.