LaurieWired
@lauriewired.bsky.social
researcher @google; serial complexity unpacker
ex @ msft & aerospace
You might think it was used in Mathematics, but it’s technically a different symbol.



(you’re supposed to use “set minus” for math, but no one does)



Next time you hit backslash, just think, you’re using the youngest punctuation character!
November 11, 2025 at 9:29 PM
Before the IBM standardization, it gets murky.



There’s a German teletype machine with the backslash symbol from 1937…but no one really knows what it was used for.
November 11, 2025 at 9:29 PM
Backslash marks were popularized by the IBM standards committee in the 1960s, which got rolled into ASCII.

Programmers took a liking to the symbol, quickly adopting it as the standard escape character.


*Forward* slash existed in the 18th century by comparison.
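The escape role is easy to see in practice; in most C-descended languages (Python shown here), the backslash changes the meaning of whatever character follows it:

```python
# The backslash "escapes" the character after it, changing its meaning.
two_lines = "Line one\nLine two"   # \n encodes a newline
tabbed = "a\tb"                    # \t encodes a tab
literal = "\\"                     # \\ encodes a single literal backslash

print(len(literal))                # two characters in source, one in the string
```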
November 11, 2025 at 9:29 PM
You might be thinking to yourself, what about Brackets? Curly Braces? 


Nope, not even close. Brackets have been used since the 1500s.

Tilde? Still wrong, used by medieval scribes from 1086 AD.



Backslash is a *bizarre* symbol with unsolved origins.
November 11, 2025 at 9:29 PM
Take a look at your keyboard.



See the backslash key?



It’s the *only* punctuation character (not a glyph!) created in the computer age.



Just about every typographic symbol on your keyboard is centuries old.
November 11, 2025 at 9:29 PM
Tick between frames fast enough, and you get a (somewhat usable) CPU.



Roughly 250 kilohertz on a 2080 Ti.



Not much, but enough to run Linux!

What I love most is that you can visually “see” the system state at any point just by viewing the texture itself.
November 10, 2025 at 9:44 PM
By abusing the heck out of shader logic, you can do some funny things.



To run Linux in a shader, you first need a (simulated) CPU.



Of course, someone took it to the logical extreme and emulated RISC-V logic in HLSL.


~64 MiB of “RAM” stored as a texture.
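A rough sketch of the RAM-as-texture idea (in Python rather than HLSL; the 4096×4096 RGBA8 layout is my illustration, chosen because 64 MiB packs into exactly that many 4-byte texels):

```python
# Sketch: 64 MiB of "RAM" addressed as a 4096 x 4096 RGBA8 texture.
# 64 MiB / 4 bytes-per-texel = 16,777,216 texels = 4096 * 4096.
TEX_SIZE = 4096
ram = bytearray(TEX_SIZE * TEX_SIZE * 4)     # flat backing store

def addr_to_texel(addr):
    texel, channel = divmod(addr, 4)         # 4 bytes packed per RGBA texel
    y, x = divmod(texel, TEX_SIZE)           # row-major texel layout
    return x, y, channel

def store_byte(addr, value):
    x, y, channel = addr_to_texel(addr)
    ram[(y * TEX_SIZE + x) * 4 + channel] = value

def load_byte(addr):
    x, y, channel = addr_to_texel(addr)
    return ram[(y * TEX_SIZE + x) * 4 + channel]
```

A shader does the same mapping in reverse: the emulated CPU turns a byte address into texture coordinates, then samples the texel to “read memory.”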
November 10, 2025 at 9:44 PM
VRChat allows users to embed custom fragment shaders within worlds.



Of course, you don’t just get to run arbitrary C code wherever you want; that would be an insane security risk.



But, you *do* have textures. Textures that can hold state.
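Holding state this way usually means the standard “ping-pong” pattern, since a fragment shader can’t write the texture it’s reading. A minimal sketch (plain Python lists standing in for textures; nothing here is VRChat-specific code):

```python
# "Ping-pong" state buffers: two textures alternate roles each frame,
# one read-only, one write-only. Tiny 1-D stand-in arrays here.
state_a = [0, 0, 0, 0]
state_b = [0, 0, 0, 0]

def tick(read_buf, write_buf):
    # The "shader": each pixel's next value is a pure function of old state.
    for i in range(len(read_buf)):
        write_buf[i] = read_buf[i] + 1
    return write_buf, read_buf          # swap roles for the next frame

read_buf, write_buf = state_a, state_b
for _frame in range(3):                 # three simulated frames
    read_buf, write_buf = tick(read_buf, write_buf)
```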
November 10, 2025 at 9:44 PM
Shader systems are ridiculously powerful if you’re clever enough. 



Most people use them to create visual effects. You know what’s cooler?

Running Linux.

Inside an emulated RISC-V CPU. Inside a pixel shader. Inside of VRChat...
November 10, 2025 at 9:44 PM
It’s a really fun story, but unfortunately Teramac was a bit ahead of its time.


Here’s one of the better articles about it:
fab.cba.mit.edu/classes/862....
November 5, 2025 at 6:28 PM
Test the path, localize the bad resource, blacklist it, compile around it.



Teramac didn’t just sit idle either!

They mapped MRI data of brain arteries, played with volume rendering (Cube-4), and ran a number of *actually useful* workloads after they proved the utility.
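The test-blacklist-recompile loop above can be sketched in a few lines. This is a hedged illustration of the idea only; the names and the probe function are stand-ins, not HP’s actual tooling:

```python
# Hedged sketch of Teramac's "defect database" idea: probe resources,
# blacklist failures, then only compile onto known-good parts.
def probe(resource_id):
    """Stand-in self-test: pretend every 4th resource fails its test."""
    return resource_id % 4 != 0

resources = range(16)
defect_db = {r for r in resources if not probe(r)}   # localize + blacklist
usable = [r for r in resources if r not in defect_db]

def place(netlist_size):
    # "Compile around it": map the design only onto known-good resources.
    if netlist_size > len(usable):
        raise ValueError("design too large for remaining good resources")
    return usable[:netlist_size]
```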
November 5, 2025 at 6:28 PM
75% of the FPGAs in Teramac would normally be considered too faulty to use. Scrapped.


By intentionally overbuilding the interconnects, well beyond what was sane, defect tolerance was (theoretically) high.

The first workload thus needed to create a "defect database".
November 5, 2025 at 6:28 PM
The team expected that bleeding edge silicon would likely have much higher defect rates.


There was huge pressure to reduce yield risk; improving software reconfiguration could change the industry.



The real magic was in the interconnect.
November 5, 2025 at 6:28 PM
HP Labs once built a broken supercomputer…on purpose.


Teramac had over 220,000 Hardware Defects.

The question was: can you make a reliable computer out of *known* bad parts?


It was a phenomenal software problem to route around the faults:
November 5, 2025 at 6:28 PM
It took over a *year* to fix some of the platforms after the rollover bug, with some clever software patches through the Iridium satellite network.



Pretty neat read, check it out here:
gi.copernicus.org/articles/10/...
November 4, 2025 at 7:50 PM
In any case, 2038 is going to be a tricky year for time.



One of the better writeups about the 2019 GPS event is a paper from Antarctic site PG2.



Many of their instruments are completely inaccessible during the polar night season.
November 4, 2025 at 7:50 PM
The worst part is that *unlike* the UNIX 2038 problem, the GPS rollover bug doesn’t hit devices all at the same time.

It’s quite common for GPS units to only rebuild the week number on a cold boot.

Many scientific devices didn’t get hit until months (or even years!) later.
November 4, 2025 at 7:50 PM
Part of the issue is GPS isn’t *just* used for positioning, it’s also used for accurate time.


The week counter is stored in just 10 bits, aka 0-1023.


This causes…odd knock-on effects.


In the 2019 rollover, telemetry broke on 12,000+ traffic lights in NYC.
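The 10-bit truncation is simple to sketch. The GPS epoch (January 6, 1980) and the April 2019 rollover date are from the public GPS spec; the function name here is just for illustration:

```python
import datetime

# The GPS week number is broadcast as a 10-bit field: it wraps from 1023
# back to 0 every 1024 weeks (about 19.6 years).
GPS_EPOCH = datetime.date(1980, 1, 6)        # start of GPS time

def broadcast_week(date):
    full_weeks = (date - GPS_EPOCH).days // 7
    return full_weeks % 1024                 # the truncation that bites

# The 2019 event: week 2048 began April 7, 2019, and broadcast as week 0.
assert broadcast_week(datetime.date(2019, 4, 7)) == 0
```

A receiver that trusts the broadcast week alone suddenly believes it is 1999 again, which is exactly what broke timestamped telemetry downstream.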
November 4, 2025 at 7:50 PM
You’ve heard of the Unix 2038 Problem.


I bet you haven’t heard of the GPS 2038 problem.


Every GPS navigation device in existence experiences an integer overflow every 19.6 years.


Last time, it wiped out iPhones, NOAA weather buoys, and a number of flights in China:
November 4, 2025 at 7:50 PM
These days, the errors are more often cosmic rays, but the point is that at least we attempt to track them!



Here’s the original paper:
gwern.net/doc/cs/hardw...
November 3, 2025 at 7:41 PM
Once the root cause was found, it radically changed the industry.



Chipmakers quickly switched to low-alpha materials, added radiation blocking layers, and started keeping track of emission specs.



The “soft error rate” (SER) in modern DRAM is a direct byproduct of Intel’s investigation.
November 3, 2025 at 7:41 PM
Intel’s main ceramic vendor was located in Colorado, on the Green River.



Miles upstream was a…Uranium mine. Oops.



Alpha particles from the ceramic were causing random, single-bit flips.
November 3, 2025 at 7:41 PM
Seriously though, the chips were a bit spicy, and it was a *problem*.



AT&T was really annoyed. They had a major phone switching project, and refused delivery until Intel found the root cause.



A full investigation was launched.

The silicon looked…fine.
November 3, 2025 at 7:41 PM
The reason we know radiation causes bit-flips in DRAM is pretty hilarious.


In the late 70s, Intel RAM was occasionally producing soft, uncorrectable errors.


Turns out, the ceramic packaging on the chip itself had a little bit of Uranium.

You know, as one does.
November 3, 2025 at 7:41 PM
The full story is even better than my summary, check it out here:
www.ee.torontomu.ca/~elf/hack/re...
November 1, 2025 at 6:14 PM