Dr Cyril Pernet
@cyrilrpernet.bsky.social
NeuroInformatics, NeuroImaging, Stats #openscience #opendata #publicdata -- leading https://publicneuro.eu/
put my stuff on https://github.com/CPernet/
see also http://cpernet.github.io
--- also for my French-speaking friends: https://lavieen202025.github.io/
do you know of any model that can draw bridges between seemingly different topics? For humans it's so natural - how often do you see a talk outside your field and still think, "oh cool, this could work in what I do"?
October 26, 2025 at 7:01 PM
is there a green open access or preprint somewhere? 🙏
October 10, 2025 at 4:37 AM
ok, here is my latest imaging-related work to help you with your data - no Trump and no AI, promise arxiv.org/abs/2509.15278
Assessing metadata privacy in neuroimaging
The ethical and legal imperative to share research data without causing harm requires careful attention to privacy risks. While mounting evidence demonstrates that data sharing benefits science, legit...
arxiv.org
October 2, 2025 at 12:48 PM
- Different BLAS/LAPACK implementations
- Threading changes summation order (round-off)
- MKL/Accelerate/OpenBLAS can use SIMD/FMA and architecture-specific kernels
- libm differences (exp, log, trig, etc.) come from the OS
- compiler flags and optimizations affect rounding
- RNGkind() floating-point ops can diverge
August 25, 2025 at 8:48 AM
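The summation-order point above can be sketched in a few lines. The thread is about R, but the behavior comes from IEEE-754 doubles, so this minimal Python sketch shows the same effect: floating-point addition is not associative, so any change in evaluation order (e.g. from threading) changes the round-off.

```python
import math

# Floating-point addition is not associative: the same three numbers
# summed in a different order give slightly different round-off.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False: left is 0.6000000000000001, right is 0.6

# math.fsum computes a correctly rounded sum, so it is order-independent.
print(math.fsum([0.1, 0.2, 0.3]) == math.fsum([0.3, 0.2, 0.1]))  # True
```

This is exactly why multi-threaded BLAS reductions can return slightly different numbers from run to run or machine to machine.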
computational reproducibility -- since you work with R, you may want to mention numerical accuracy; tiny differences are expected across OSes and OS versions
August 25, 2025 at 8:47 AM
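A practical consequence of those tiny OS-level differences: reproducibility checks should compare numbers with a tolerance, not bit-for-bit. A sketch in Python (R's `all.equal` plays the same role there):

```python
import math

# Results computed on different OSes or BLAS builds may differ in the last
# bits, so compare with a relative tolerance instead of exact equality.
a = (0.1 + 0.2) + 0.3   # stand-in for a result from machine A
b = 0.1 + (0.2 + 0.3)   # stand-in for the "same" result from machine B
print(a == b)                            # False: exact comparison is too strict
print(math.isclose(a, b, rel_tol=1e-9))  # True: tolerant comparison passes
```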