ChrisRohlf
@chrisrohlf.bsky.social
🇺🇸 Waging algorithmic warfare since 2003. Software & Security Engineer. Non-Resident Research Fellow, CSET Georgetown CyberAI
Over 15 years ago cyber teams covertly altered centrifuge spin rates at Natanz to degrade the uranium enrichment process and silently damage nuclear weapons development … yet the best AI doomers can come up with is “steal the model weights”?
July 8, 2025 at 3:06 PM
or injecting semantic collisions into tokenizer-produced vocabularies to subtly degrade or bias multilingual pretraining pipelines?
July 8, 2025 at 3:06 PM
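A toy sketch of the collision idea above. Everything here is illustrative: the vocabulary, the words, and the token IDs are made up, and a real attack would target a BPE merge table rather than a flat word-to-ID map.

```python
# Hypothetical "semantic collision" in a tokenizer vocabulary: an attacker
# maps two distinct multilingual surface forms to the same token ID, so the
# model can never distinguish them during pretraining.
clean_vocab = {"bank": 101, "banque": 102, "Bank": 103}
poisoned_vocab = {"bank": 101, "banque": 101, "Bank": 103}  # collision injected

def encode(vocab: dict[str, int], words: list[str]) -> list[int]:
    """Toy encoder: look each word up in the vocabulary."""
    return [vocab[w] for w in words]

print(encode(clean_vocab, ["bank", "banque"]))     # [101, 102] -> distinct
print(encode(poisoned_vocab, ["bank", "banque"]))  # [101, 101] -> collided
```

Because the poisoned IDs are identical, the two words share one embedding row downstream, which is exactly the kind of subtle, hard-to-audit degradation the post is pointing at.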
Clearly these people never read the Matasano blog…
June 3, 2025 at 11:01 AM
Open source, and the influence it brings over tech ecosystems, is a soft power we should never take for granted.

The BIS guidance clearly spells out how usage of the Huawei Ascend 910 series anywhere in the world may violate existing US export controls.

www.bis.gov/media/docume...
May 15, 2025 at 12:05 PM
But can it generate nausea-inducing Prezis?
January 4, 2025 at 4:32 AM
Deterrence by denial has largely failed as a USG strategy, at least in the cyber realm. While I agree wholeheartedly that secure by design is the way, the USG lacks the authorities to make it happen through incentives or liability.
December 27, 2024 at 5:02 PM
An interesting replication benchmark, and a data point to support the self-reinforcing AI flywheel, might be to measure how accurately and efficiently an AI model could autonomously retool from CUDA to CANN and achieve model-training parity. This is somewhat analogous to self-hosting compilers.
December 27, 2024 at 4:50 PM
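One way the proposed benchmark could be scored, as a minimal sketch. The metric name, fields, and weighting below are hypothetical, not an established benchmark definition.

```python
# Hypothetical scoring for the proposed CUDA->CANN "retooling" benchmark:
# combine how much of the kernel surface was ported correctly with how close
# the ported stack gets to the CUDA baseline's training throughput.
from dataclasses import dataclass

@dataclass
class RetoolResult:
    kernels_ported: int    # CANN kernels passing numerical-equivalence tests
    kernels_total: int     # kernels in the original CUDA stack
    cuda_step_time: float  # seconds per training step on the CUDA baseline
    cann_step_time: float  # seconds per training step after the port

def parity_score(r: RetoolResult) -> float:
    """Port completeness times throughput parity, capped at 1.0."""
    completeness = r.kernels_ported / r.kernels_total
    throughput_parity = min(1.0, r.cuda_step_time / r.cann_step_time)
    return completeness * throughput_parity

# 90% of kernels ported, CANN steps 25% slower -> score ~0.72
print(round(parity_score(RetoolResult(90, 100, 1.0, 1.25)), 2))  # 0.72
```

A score of 1.0 would mean full autonomous retooling with training parity, the "self-hosting compiler" moment the post analogizes to.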
* How does the number of Ascend chips affect the remainder of the setup, including power requirements, interconnect, memory bandwidth limitations, etc.?
December 27, 2024 at 4:50 PM
* Assuming you can achieve hardware compute parity in the pretraining cluster, what is the performance delta between those ported CANN kernels and CUDA for this model architecture and how does it affect compute hours required?
December 27, 2024 at 4:50 PM
* What is the level of effort required to port CUDA based kernels and associated configuration and monitoring tooling to CANN?
December 27, 2024 at 4:50 PM
* Given lower yields for the Huawei Ascend 910B/C, and the fact that it's almost 3x slower (at FP16) than the H800's theoretical max TFLOPS, it seems it would take roughly 6,000 Ascend 910Bs to match the theoretical compute.
December 27, 2024 at 4:50 PM
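A back-of-envelope check of that ~6,000-chip figure. The peak-TFLOPS values and the 2,048-GPU baseline cluster size are my assumptions (commonly cited approximate specs), not numbers from the post, and theoretical peaks ignore yield, interconnect, and software efficiency.

```python
# Back-of-envelope: how many Ascend 910Bs match the theoretical FP16 compute
# of a 2,048-GPU H800 cluster? All figures are approximate public peak specs.
H800_FP16_TFLOPS = 990       # ~dense FP16 tensor peak, approximate
ASCEND_910B_FP16_TFLOPS = 376  # approximate published peak
N_H800 = 2048                # assumed baseline pretraining cluster size

ratio = H800_FP16_TFLOPS / ASCEND_910B_FP16_TFLOPS  # per-chip gap
n_ascend = N_H800 * ratio

print(f"~{ratio:.1f}x per chip -> ~{n_ascend:.0f} Ascend 910Bs")  # ~2.6x -> ~5392
```

With these assumed specs the gap is closer to 2.6x than 3x, giving ~5,400 chips at theoretical peak; with real-world yield and efficiency losses, ~6,000 is a plausible round number.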