Michael Lazear
michaellazear.bsky.social
I like big data and I cannot lie | Informatics/ML lead @ Belharra Tx, wet lab PhD @ Cravatt Lab
I use +3.5/-1.25 on MS1, which covers isotopic errors. Not a huge difference vs specifying isotopic errors, but it's faster in Sage at least.
September 10, 2025 at 10:14 PM
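For readers unfamiliar with Sage, the asymmetric MS1 window above maps onto the precursor tolerance block in Sage's JSON config. This is a sketch only — the sign convention and exact field names should be checked against the Sage documentation for the version you run:

```json
{
    "precursor_tol": { "da": [-1.25, 3.5] },
    "isotope_errors": [0, 0]
}
```

The wide positive side of the window absorbs the ~1 Da spacing of isotope-selection errors, which is why a separate isotope-error search can be skipped.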
Dude is a monster... and only 16s off of Remco on the TT today.
July 9, 2025 at 9:05 PM
Can confirm that it really increases (subjective) MS1 XIC quality for us.
July 1, 2025 at 10:43 PM
I'm not an IM expert, but isn't IM extremely strongly correlated with m/z of the precursor though (at least for TIMS)? High resolution IM is probably just as good as knowing the quad isolation window. And multiple charge states are already fragmented at once in DIA?
June 29, 2025 at 2:12 AM
For further AI-readiness stuff, I am the wrong person to ask. Nearly 100% of my code is artisanal, organic, small-batch, and handcrafted. Exceptions are occasionally made for one-off scripts that I know how to write, but an LLM can write faster.
June 27, 2025 at 5:08 AM
As Tine mentioned, data cleaning/prep is usually the most important part. Some kind of metadata tracking system for all of your samples (could be a CSV file, a SQL database, etc.) is critical. I know Claude can interact with SQL databases directly using some of the MCP plugins.
June 27, 2025 at 5:06 AM
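A minimal sketch of the kind of sample-metadata tracker described above, using Python's built-in sqlite3. The table and column names (samples, condition, raw_file) are illustrative, not a prescribed schema:

```python
import sqlite3

# In-memory DB for the sketch; point this at a file path for persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE samples (
        sample_id TEXT PRIMARY KEY,
        condition TEXT,
        replicate INTEGER,
        raw_file  TEXT
    )"""
)
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?, ?)",
    [
        ("S1", "DMSO", 1, "S1.mzML"),
        ("S2", "drug", 1, "S2.mzML"),
    ],
)

# Pull all raw files for one condition, e.g. to build a search file list.
files = [
    row[0]
    for row in conn.execute(
        "SELECT raw_file FROM samples WHERE condition = ?", ("drug",)
    )
]
print(files)
```

Because it is just SQL, the same database can be queried by you, by downstream pipelines, or by an LLM through an MCP-style database connector.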
January 24, 2025 at 8:44 PM
You can definitely burn parchment paper in an oven.
November 21, 2024 at 6:15 PM
For gaming, 4060. I'm still rocking an old 1080 Ti
November 19, 2024 at 3:44 PM
What's the use case? More GPU RAM is generally best.
November 18, 2024 at 10:14 PM
I have just a gravel bike right now that I've slapped some fast tires on - but frequently find myself spinning out on the 1x10 gearset. So looking for something faster, and I've enjoyed test riding the racier bikes
November 14, 2024 at 4:02 PM
That is what I'm leaning towards!
November 14, 2024 at 4:01 PM
Oh, for sure. Not all tools need to be hyper-optimized. The point of the parent post was why bother writing a GPU search engine if you haven't squeezed out every CPU cycle first 😉
February 14, 2024 at 11:59 PM
This is a completely meaningless way to measure "pushing CPUs to their limit". You can make an infinite spin loop that does nothing and has 100% utilization. How many spectra are you processing per ms? What is the throughput in GB/s?
February 14, 2024 at 8:35 PM
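The point above can be shown in a few lines: a spin loop pegs a core at 100% utilization while processing zero bytes, whereas a throughput metric reports data actually processed per unit time. The "work" function here is a stand-in for real spectrum processing:

```python
import time

def spin(seconds: float) -> int:
    """Burn CPU doing nothing useful; bytes processed is zero."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass  # 100% utilization, no work done
    return 0

def process(data: bytes) -> int:
    """Stand-in for real work, e.g. parsing or scoring spectra."""
    return data.count(0)

def throughput_gb_s(data: bytes, reps: int = 5) -> float:
    """Measure actual work rate: bytes processed per second, in GB/s."""
    start = time.perf_counter()
    for _ in range(reps):
        process(data)
    elapsed = time.perf_counter() - start
    return len(data) * reps / elapsed / 1e9

data = bytes(8 * 1024 * 1024)  # 8 MB of zeros
print(f"spin: {spin(0.05)} bytes; work: {throughput_gb_s(data):.2f} GB/s")
```

Both loops show full utilization in a process monitor, but only the second number tells you anything about how fast spectra are being processed.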
Yes, it's for personal use. But I also write proteomics software, so it kinda blurs the line 😉. Being able to search fast is critical for keeping a tight iteration loop.
February 12, 2024 at 8:43 PM
AFAIK DIA-NN still isn't GPU-accelerated for its deep learning-based spectral predictions, despite (I believe) using torch under the hood. So I don't think a GPU would do any good anyway.
February 11, 2024 at 6:33 AM
Re: single thread, I think most of the tools are pretty good about being highly parallel, to varying extents. But faster is usually still better. I just built a new PC with AMD 7950x (32 cores at ~4.5 GHz) and I'm pretty happy with it lol
February 11, 2024 at 6:32 AM
That's gonna be highly specific to your software. Disk speed will always be important for reading data - for a closed search using Sage, reading data takes longer than searching data. FP/DiaNN/MQ all do pretty heavy disk reads and writes throughout their execution, so that will impact them more.
February 11, 2024 at 6:31 AM
Prolucid GPU and Andy Kong's unpublished GPU search engine come to mind. GPUs are pretty hard to program for though... And to be honest, most extant tools aren't even pushing CPUs to their limits yet.
February 11, 2024 at 5:26 AM
Horizontal scaling (e.g. spinning up 100s or 1000s of VMs to each search a single file) can be faster, but is much less user friendly if you aren't a programmer.
February 11, 2024 at 5:25 AM
RAM speed and memory bandwidth can be pretty important. Unfortunately many tools don't run on Apple Silicon or other ARM CPU architectures, which tend to have incredibly high memory bandwidth. You will always hit a wall with how much you can vertically scale though.
February 11, 2024 at 5:24 AM
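A rough, stdlib-only way to see why memory bandwidth matters: time a large buffer copy. This is a sketch, not a STREAM-grade benchmark — it is single-threaded, ignores caching effects, and the buffer size is arbitrary:

```python
import time

def copy_bandwidth_gb_s(size_mb: int = 256, reps: int = 5) -> float:
    """Estimate memory bandwidth from the fastest of several buffer copies."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        dst = bytes(src)  # one read pass + one write pass over the buffer
        best = min(best, time.perf_counter() - t0)
        del dst
    # Factor of 2: each copy both reads and writes the full buffer.
    return 2 * len(src) / best / 1e9

print(f"~{copy_bandwidth_gb_s():.1f} GB/s")
```

Numbers like this are where high-bandwidth ARM chips pull ahead, and they put an upper bound on any search engine that streams spectra through memory.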