Ian Sudbery
@iansudbery.bsky.social
Senior Lecturer in Bioinformatics at the University of Sheffield. Likes gene regulation, 3' UTRs, non-coding RNA and dancing. He/Him/His
Also at IanSudbery@genomic.social
So if you mean only publishing papers where the experiments are correctly conceived, designed, carried out and interpreted, irrespective of the outcome, then sure.
But if you mean only publishing mammoth, "impactful" pieces, then I think that would make things worse, not better.
November 11, 2025 at 1:03 PM
Even then, only publishing when you believe you are ready to change the world, or at least the field, can lead to people sitting on data for years while other people might have the key that makes it make sense, but two and two are never put together.
November 11, 2025 at 1:03 PM
Perhaps when we move to a different system of judging scientists, then we could think about the size of the minimal publishable unit. That's what I meant by "unless you are careful".
November 11, 2025 at 1:00 PM
Being in the right place at the right time, being put on the right project, having the right collaborators: all these things are at least partially out of an early career scientist's hands. If only 1 in 20 scientists gets to be on one of these better papers in 3.5 years of a PhD, what of the other 19?
November 11, 2025 at 1:00 PM
While we have a system where survival is based around publishing papers, fewer papers = less survival. Perhaps if that meant it was a good filter (as we have to have a filter of some sort) you could argue for it, but I doubt that would be the case.
November 11, 2025 at 1:00 PM
Whereas even the bad papers I'm sent from smaller journals have 8 figures with 6 panels each, along with a similar number of supplementary figures, and it takes me the best part of 2 days to review.
November 11, 2025 at 11:51 AM
The alternative is smaller and more straightforward papers that are easier to review. I've seen people in other fields who get papers with 3 figures and 2 panels per figure. Takes them less than an hour to do a good review.
November 11, 2025 at 11:51 AM
The trouble with fewer, better papers is that, unless you are careful, it could lead to fewer scientists (only those lucky enough to have one of the better papers pass each Poisson filter).
November 11, 2025 at 11:51 AM
Absolutely, the way to fix peer review is for peer reviewers to do better reviews, not for an AI to almost match the current poor level of human review. But maybe if you take away the opportunity for humans to point out that an axis label is missing, they'll find more interesting things to say?
November 11, 2025 at 11:44 AM
Personally, I wouldn't want a reviewer putting my papers through QED and returning that as a review. But I will try putting my own papers through it to see if it catches anything I missed.
November 11, 2025 at 11:44 AM
On the other hand, telling the authors that there is a well-known pitfall to the method they are using that could account for their results, or that their results don't imply their conclusions, or that they could cross-check a wobbly conclusion by comparison to an existing dataset does.
November 11, 2025 at 11:44 AM
I don't know, I feel like telling an author that their figures are too low resolution, that whole experiments are missing from the Materials and Methods, that they reference the wrong figures in several places, or that the grammar makes it hard to understand are things that don't require an expert.
November 11, 2025 at 11:44 AM
I believe the Wellcome Trust tried this for a while, but gave up when reviewers simply looked up the journal titles and continued to use them.
November 9, 2025 at 3:06 PM
Don't most CRAN packages remain available on conda-forge (which is the only package source we are able to use anyway)? And Bioconductor packages on bioconda?
November 7, 2025 at 2:15 PM
As a computational/transcriptomics person, I'm never going to understand the ins and outs and common pitfalls of synthetic chemistry, nor would I expect my plant physiology colleagues to be able to detect subtle cases of training data leakage or the difference between different batch corrections.
November 7, 2025 at 11:57 AM
I agree that most peer review isn't good enough. But I think it's unrealistic to expect everyone to be a good enough judge of everything in a modern study with tens of authors and dozens of different techniques from maybe 2 or 3 different disciplines.
November 7, 2025 at 11:57 AM
Good take: The problem with peer review is that most humans doing it don't do a good enough job of it. Making a machine that does almost (but maybe not quite) as good a job doesn't solve that problem.
November 7, 2025 at 11:49 AM