Research Computing Teams
researchcomputingteams.org
@researchcomputingteams.org
Supporting the teams that support research through data analysis, software, computing, or all of the above.
Reposted by Research Computing Teams
To share at least a little knowledge with the world, I documented the basics of discrete event simulation for reasoning about #HPC system design. The example I wrote shows how to calculate MTTDL for RAID arrays of differing sizes, parity disks, and drive MTBFs.

glennklockwood.com/garden/discr...
discrete event simulation
Discrete-event simulation is a useful technique for modeling the behavior of complex systems (like supercomputers and data centers) where events (and reactions to them) unfold ...
glennklockwood.com
February 8, 2025 at 5:31 AM
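For anyone who wants a feel for the technique before reading the post: here is a minimal sketch (mine, not Glenn's code, and in C++ rather than whatever the post uses). It estimates MTTDL (mean time to data loss) for a RAID-style array via discrete-event simulation, assuming exponentially distributed drive failures at a given MTBF (mean time between failures) and a fixed rebuild time; every parameter value is illustrative.

#include <cstdio>
#include <functional>
#include <queue>
#include <random>
#include <set>
#include <vector>

enum class Kind { Fail, Repair };

struct Event {
    double time;
    int disk;
    Kind kind;
    bool operator>(const Event& o) const { return time > o.time; }
};

// One discrete-event trial: jump from event to event until more disks are
// down at once than parity can cover; return the simulated time of data loss.
double time_to_data_loss(int n_disks, int n_parity, double mtbf_hours,
                         double rebuild_hours, std::mt19937_64& rng) {
    std::exponential_distribution<double> fail(1.0 / mtbf_hours);
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q;
    for (int d = 0; d < n_disks; ++d)
        q.push({fail(rng), d, Kind::Fail});   // each disk's first failure
    std::set<int> down;
    while (true) {
        Event e = q.top();
        q.pop();
        if (e.kind == Kind::Fail) {
            down.insert(e.disk);
            if ((int)down.size() > n_parity)
                return e.time;                // failures exceed parity: data loss
            q.push({e.time + rebuild_hours, e.disk, Kind::Repair});
        } else {
            down.erase(e.disk);               // rebuilt; schedule its next failure
            q.push({e.time + fail(rng), e.disk, Kind::Fail});
        }
    }
}

int main() {
    std::mt19937_64 rng(42);
    const int trials = 1000;
    double total = 0.0;
    // e.g. an 8-disk, single-parity (RAID5-like) array with 1e5-hour drive
    // MTBF and 72-hour rebuilds -- all illustrative numbers.
    for (int i = 0; i < trials; ++i)
        total += time_to_data_loss(8, 1, 1e5, 72.0, rng);
    std::printf("estimated MTTDL: %.3g hours\n", total / trials);
}

Changing the array size, parity count, rebuild time, or MTBF is a one-line edit, which is exactly the kind of what-if comparison the post walks through.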
I hadn't heard of hardened libc++ before - libcxx.llvm.org/Hardening.html. Obviously, Google's code (lots of it being infra code) is different from scientific software, but it's still interesting. Anyone played with this?
Hardening Modes — libc++ documentation
libcxx.llvm.org
January 4, 2025 at 10:20 PM
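For context, the hardening modes (none, fast, extensive, debug) turn selected library preconditions into runtime checks. A minimal sketch of what that buys, assuming clang with libc++ installed; the _LIBCPP_HARDENING_MODE macro and its values come from the linked docs:

// Build with hardening enabled, e.g.:
//   clang++ -std=c++20 -stdlib=libc++ \
//     -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST oob.cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> samples(100);
    std::size_t i = 100;  // one past the end, as in an off-by-one loop bound
    // Without hardening this is undefined behavior: a silent out-of-bounds
    // read. With the fast (or stricter) modes, the precondition check on
    // operator[] fails and the program traps instead.
    std::printf("%f\n", samples[i]);
    return 0;
}

For scientific code full of array indexing, trading a cheap bounds check for a deterministic trap instead of silent corruption seems well worth benchmarking.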
Some other things I've written on this basic topic - scientific judgement is part of our job: www.researchcomputingteams.org/newsletter_i...
#182 - 2 June 2024
Scientific judgement is part of our job. Plus: Parking lots; Stop hiding in the comfort of your expertise; No wrong doors; Sales is research; Mentoring plans mandatory for NIH funding; NIHR RSS fundi...
www.researchcomputingteams.org
January 3, 2025 at 7:08 PM
If a VPR or a funder has to decide between two similar centres, where one can demonstrate opening up new research directions, good careers for trainees, and spinoffs, while the best the other can offer is 89% utilization or a fully checked-off worthiness list, how do you think that decision is going to go?
January 3, 2025 at 7:08 PM
Without us making the case for our work and our researcher clients' work, how can those in charge possibly be fully informed for their next funding decision?

No one else is going to give them the information they need.
January 3, 2025 at 7:08 PM
Advocating for the work we do means *qualitatively* showing decision makers how our work and our researcher clients' work directly supports their priorities and missions. How the impact we're having is the impact they want to see.
January 3, 2025 at 7:08 PM
We owe it to our teams, we owe it to the researchers whose life's work we support, to powerfully advocate for the work we do.
January 3, 2025 at 7:08 PM
So allocation of resources for supporting research is decided based on human research judgement. Yes, messy, flawed, biased, human research judgement.
January 3, 2025 at 7:08 PM
There are too many diverse *kinds* of research and scholarship for them all to be compared against each other in any quantitative or checklist-algorithmic way.
January 3, 2025 at 7:08 PM
There are more useful, worthy things to spend research funding on than there is research funding. That would be true even if research funding doubled tomorrow.
January 3, 2025 at 7:08 PM
How many units of "impact in qualitative social sciences" are there in one unit of "impact in quantitative computational biology"?
January 3, 2025 at 7:08 PM
But *even if it were quantifiable*, research funders and institutional decision makers have to decide how to allocate scarce resources between incommensurate things. How many units of "reusable research software" equals one unit of "well-used HPC cluster" equals one unit of "hire more postdocs?"
January 3, 2025 at 7:08 PM
The worthiness or impact of the work we support is basically unquantifiable in the short term. The work we do to *support* that work, doubly so.
January 3, 2025 at 7:08 PM
No one would actually *say* "Our work is demonstrably worthy - we've shown 85% utilization | 13 out of 14(!!) of the FAIR4RS principles met - so our work is done. If the funders don't fund us, it's on them". But...
January 3, 2025 at 7:08 PM