LaurieWired
@lauriewired.bsky.social
researcher @google; serial complexity unpacker
ex @ msft & aerospace
The post and story itself is a goldmine; I highly encourage you to go read _pi_'s blog on the topic.
blog.pimaker.at/texts/rvc1/
Linux in a Pixel Shader - A RISC-V Emulator for VRChat
November 10, 2025 at 9:44 PM
Tick between frames fast enough, and you get a (somewhat useable) CPU.
About 250 kilohertz on a 2080 Ti.
Not much, but enough to run Linux!
What I love most is that you can visually “see” the system state at any point just by viewing the texture itself.
November 10, 2025 at 9:44 PM
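For a sense of scale, the arithmetic behind that figure is simple. (The per-frame batch size below is an illustrative guess, not a number from the blog.)

```python
# Effective emulated clock = CPU ticks executed per rendered frame * frame rate.
# The real per-frame batch size is set by the shader; these numbers are illustrative.

def effective_hz(ticks_per_frame: int, fps: int) -> int:
    return ticks_per_frame * fps

# e.g. ~2,800 ticks per frame at 90 fps lands near the ~250 kHz reported on a 2080 Ti
print(effective_hz(2_800, 90))  # 252000
```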
By abusing the heck out of shader logic, you can do some funny things.
To run Linux in a shader, you first need a (simulated) CPU.
Of course, someone took it to the logical extreme and emulated RISC-V logic in HLSL.
~64 MiB of “RAM” stored as a texture.
November 10, 2025 at 9:44 PM
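The blog describes the real memory layout; as a sketch of the idea only, here is one *hypothetical* way a flat byte address could map onto a 2048×2048 four-channel 32-bit texture, which works out to exactly 64 MiB:

```python
# Sketch: backing 64 MiB of guest RAM with a 2048x2048 4-channel 32-bit texture.
# Each texel holds four 32-bit words (16 bytes), so 2048 * 2048 * 16 = 64 MiB.
# This mapping is hypothetical (the blog documents _pi_'s actual layout), but it
# shows the idea: a word address becomes (x, y, channel).

TEX_W = 2048  # texture width in texels

def addr_to_texel(byte_addr: int):
    word = byte_addr // 4   # 32-bit word index
    texel = word // 4       # 4 words per texel
    channel = word % 4      # which of r/g/b/a
    x = texel % TEX_W
    y = texel // TEX_W
    return x, y, channel

print(addr_to_texel(16))  # (1, 0, 0)
```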
VRChat allows users to embed custom fragment shaders within worlds.
Of course, you don’t just get to run arbitrary C code wherever you want; that would be an insane security risk.
But, you *do* have textures. Textures that can hold state.
November 10, 2025 at 9:44 PM
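Stripped of any graphics API, the state-in-a-texture pattern is just double buffering: read last frame's texture, write the next, swap. A minimal sketch, where the `step` function stands in for a full-screen shader pass:

```python
# The "state in a texture" trick, sketched outside any graphics API:
# each frame, a pass reads last frame's texture and writes the next one.
# Two buffers ping-pong so reads and writes never alias.

def step(state):  # stand-in for one shader pass over every texel
    return [v + 1 for v in state]

front, back = [0, 0, 0], None
for _ in range(3):             # three "frames"
    back = step(front)         # render into the other texture
    front, back = back, front  # swap for the next frame

print(front)  # [3, 3, 3]
```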
It’s a really fun story, but unfortunately Teramac was a bit ahead of its time.
Here’s one of the better articles about it: fab.cba.mit.edu/classes/862....
November 5, 2025 at 6:28 PM
Test the path, localize the bad resource, blacklist it, compile around it.
Teramac didn’t just sit idle either!
They mapped MRI data of brain arteries, played with volume rendering (Cube-4), and ran a number of *actually useful* workloads after they proved the utility.
November 5, 2025 at 6:28 PM
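The blacklist-and-route-around loop can be sketched as plain pathfinding over the wiring fabric. This toy graph and `route` function are illustrative, not Teramac's actual tooling:

```python
# Teramac's trick in miniature: with enough spare interconnect, compiling
# around defects is just pathfinding that refuses to use anything in the
# defect database. (Toy model, not the real place-and-route software.)
from collections import deque

def route(graph, src, dst, defects):
    """BFS over a wiring graph, skipping blacklisted resources."""
    q, seen = deque([[src]]), {src}
    while q:
        path = q.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in defects:
                seen.add(nxt)
                q.append(path + [nxt])
    return None  # no defect-free path: the part really is scrap

# Toy fabric: A-B-D and A-C-D both reach D; B is in the defect database.
fabric = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(route(fabric, "A", "D", defects={"B"}))  # ['A', 'C', 'D']
```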
75% of the FPGAs in Teramac would normally be considered too faulty to use. Scrapped.
By intentionally overbuilding the interconnects, well beyond what was sane, defect tolerance was (theoretically) high.
The first workload thus needed to create a "defect database".
November 5, 2025 at 6:28 PM
The team expected that bleeding edge silicon would likely have much higher defect rates.
There was huge pressure to reduce yield risk; improving software reconfiguration could change the industry.
The real magic was in the interconnect.
November 5, 2025 at 6:28 PM
It took over a *year* to fix some of the platforms after the rollover bug, with some clever software patches through the Iridium satellite network.
Pretty neat read, check it out here: gi.copernicus.org/articles/10/...
November 4, 2025 at 7:50 PM
In any case, 2038 is going to be a tricky year for time.
One of the better writeups about the 2019 GPS event is a paper from Antarctic site PG2.
Many of their instruments are completely inaccessible during the polar night season.
November 4, 2025 at 7:50 PM
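A quick sketch of why 2038 is tricky, assuming the classic signed 32-bit `time_t`:

```python
# Why 2038 is tricky: a signed 32-bit time_t wraps one second after
# 2038-01-19 03:14:07 UTC.
import struct
from datetime import datetime, timezone

T_MAX = 2**31 - 1  # largest value a signed 32-bit counter can hold

print(datetime.fromtimestamp(T_MAX, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00

# One tick later, the same 32-bit pattern reinterprets as a huge negative number:
wrapped, = struct.unpack("<i", struct.pack("<I", (T_MAX + 1) & 0xFFFFFFFF))
print(wrapped)  # -2147483648
```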
The worst part is that, *unlike* the UNIX 2038 problem, the GPS rollover bug doesn’t hit devices all at the same time.
It’s quite common for GPS units to only rebuild the week number on a cold boot.
Many scientific devices didn’t get hit until months (or even years!) later.
November 4, 2025 at 7:50 PM
Part of the issue is GPS isn’t *just* used for positioning, it’s also used for accurate time.
The week counter is stored with just 10 bits, so it can only count 0–1023.
This causes…odd knock-on effects.
In the 2019 rollover, telemetry broke on 12,000+ traffic lights in NYC.
November 4, 2025 at 7:50 PM
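A sketch of the ambiguity: the legacy frame only carries the week modulo 1024, so the receiver has to guess which 1024-week era it lives in. The `true_week` heuristic here is illustrative, not any particular receiver's firmware:

```python
# The legacy GPS nav message carries only (week % 1024), so a receiver must
# resolve which 1024-week era it is in. Firmware anchored to a stale reference
# week jumps back ~19.6 years at rollover. Illustrative heuristic only.

def true_week(broadcast_week: int, ref_week: int) -> int:
    """Resolve a 10-bit week number against a reference week the firmware trusts."""
    era = (ref_week - broadcast_week + 512) // 1024
    return broadcast_week + 1024 * era

# GPS week 2048 (April 2019) was broadcast as 2048 % 1024 == 0:
print(true_week(0, ref_week=2040))  # 2048 -- modern reference: correct
print(true_week(0, ref_week=1000))  # 1024 -- stale reference: ~19.6 years off
```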
These days, the errors are more often cosmic rays, but the point is that at least we attempt to track them!
Here’s the original paper: gwern.net/doc/cs/hardw...
November 3, 2025 at 7:41 PM
Once the root cause was found, it radically changed the industry.
Chipmakers quickly switched to low-alpha materials, added radiation blocking layers, and started keeping track of emission specs.
The “soft error rate” (SER) in modern DRAM is a direct byproduct of Intel’s investigation.
November 3, 2025 at 7:41 PM
Intel’s main ceramic vendor was located in Colorado, on the Green River.
Miles upstream was a…uranium mine. Oops.
Alpha particles from the ceramic were causing random, single-bit flips.
November 3, 2025 at 7:41 PM
Seriously though, the chips were a bit spicy, and it was a *problem*.
AT&T was really annoyed. They had a major phone switching project, and refused delivery until Intel found the root cause.
A full investigation was launched.
The silicon looked…fine.
November 3, 2025 at 7:41 PM
The full story is even better than my summary, check it out here:
www.ee.torontomu.ca/~elf/hack/re...
November 1, 2025 at 6:14 PM
After a little thinking, they hand-typed a stripped binary in raw hex…and ran it.
The long-shot worked. A writeable /etc.
From that point, they recreated passwd, hosts, and eventually ftp.
Recovering /bin from another host, the system was back online, and no one was the wiser.
November 1, 2025 at 6:14 PM
*Assuming* they could copy or recover any tools, they needed a place to put them.
How do you rename /tmp to /etc…without mv?
Don’t forget; you can’t even compile code.
Remember that single Emacs session? Yeah, time to break out some raw VAX assembly.
November 1, 2025 at 6:14 PM
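For context on why this was even possible without mv: a directory rename boils down to a single rename(2) syscall, which is all the hand-typed binary had to make. A sketch in Python terms, with placeholder paths rather than the actual recovery steps:

```python
# What mv does for a simple rename is one syscall: rename(2). The recovery
# hand-assembled exactly that call; Python's os.rename hits the same kernel
# entry point. Paths below are placeholders inside a scratch directory.
import os
import tempfile

def demo_rename(top: str) -> list:
    os.mkdir(os.path.join(top, "tmp"))
    os.rename(os.path.join(top, "tmp"), os.path.join(top, "etc"))  # the whole "mv"
    return os.listdir(top)

with tempfile.TemporaryDirectory() as scratch:
    print(demo_rename(scratch))  # ['etc']
```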
A single Emacs session was still open, with a root shell.
Many students’ PhD thesis work was on the box.
Every basic tool (ls, cd, mkdir, etc.) was already wiped.
The last tape backup was a week ago. Any downtime was unthinkable.
November 1, 2025 at 6:14 PM