Eric Hallahan
erichallahan.bsky.social
Unemployed Engineer doing ML, Robotics, and more. ADHD, ASD, hearing impaired. PSU Engineering Graduate, Former Intel oneAPI Software Innovator.
https://erichallahan.github.io/
Rare (for these days) political post, to simply say: I'm proud of my hometown.
November 6, 2025 at 2:54 AM
Variable register allocation and an increase to 10 threads per EU are icing on the cake, and it should be much easier to extract performance without massive penalties for higher-complexity vector workloads—if Intel can get their software right, it should be quite competent.
October 9, 2025 at 7:18 PM
To put it in perspective, consider that Lunar Lake had 2.7× more theoretical throughput relative to available global memory bandwidth versus Battlemage, with less L1$/SLM and L2$ to boot—marginally compute-bound workloads on the latter easily became memory-bound on the former!
October 9, 2025 at 7:18 PM
Initial appraisal on Xe3: It's a nice improvement over Xe2, and it's a good thing it isn't a radical change generationally: L1$/SLM increasing to 256 KiB in-line with Battlemage is appreciated, and the doubling of L2$ to 16 MiB is even more welcome given limited memory bandwidth.
October 9, 2025 at 7:18 PM
Reposted by Eric Hallahan
Aaron Swartz did not die for "100 documents per month"
Great news!
JSTOR now has a free account with an Independent Researcher category. You can access 100 documents per month

www.jstor.org/action/showL...
October 1, 2025 at 9:38 PM
Far more true of Plan 9.
September 7, 2025 at 6:44 PM
girl
September 1, 2025 at 5:15 PM
@freya.bsky.social, do you have any advice? You seem to be the person most likely to have that in my circles.
August 28, 2025 at 12:20 AM
Huge endorsement from me for this proposed push for getting Mesa's Vulkan drivers set up with Cooperative Matrix and beyond. It's always been my preference long-term if possible, as ML compute would be a whole lot easier to manage without vendored stacks.
airlied.blogspot.com/2025/07/rama...
ramalama/mesa : benchmarks on my hardware and open source vs proprietary
One of my pet peeves around running local LLMs and inferencing is the sheer mountain of shit^W^W^W complexity of compute stacks needed to ru...
airlied.blogspot.com
July 25, 2025 at 2:23 AM
Somehow I've never read it, despite being deep in rat spaces for a couple years.
July 20, 2025 at 1:32 AM
It is with disappointment that I announce the end of my close relationship with Intel and their Software Innovators program—it was great fun while it lasted. I wish all the best for their consumer GPU endeavors and will personally continue to be an enthusiast for them going forward.
July 15, 2025 at 12:24 AM
In the past couple years since publication, this article has become a bit dated in some details (PipeWire is even more mature now, and I have made changes in my WirePlumber configuration to make everything automatic)—perhaps an update will be in order sometime soon.
July 14, 2025 at 11:41 PM
Long-time followers are probably already familiar with this article of mine, but newer ones haven't seen it in a fully functional state. With some recent interest, I finally put in the effort to fix it up to good-as-new again!
erichallahan.github.io/articles/hea...
Compensating for Hearing with Basic Signal Processing
A look at (and an explanation of) how I came to compensate for my hearing when I am not wearing a hearing aid.
erichallahan.github.io
July 14, 2025 at 11:41 PM
I wrote most of my papers in undergrad that way. I wouldn't imagine using Overleaf ever again unless I need to collaborate on another academic paper with others.
July 6, 2025 at 11:31 PM
*hugs*
July 3, 2025 at 7:14 PM
I wish to absorb it from you!
July 3, 2025 at 7:13 PM
New blog post: Automatic differentiation with Enzyme… on OpenCL?!? Yes, you heard right!
erichallahan.github.io/articles/enz...
Enzyme's secret language
Demonstrating OpenCL kernels autodiffed with Enzyme
erichallahan.github.io
July 1, 2025 at 1:40 AM
Hello!
June 22, 2025 at 5:17 PM
Reposted by Eric Hallahan
This continual level of ignorance (although I am tempted to assume bad faith or its rhetorical equivalent) is not only dangerous, but undermines work towards very real problems it seems to have originated against (silicon valley hype, corporate control, environmental concerns, the rising tide of
LLMs used as synthetic text extruding machines have no legitimate use cases and --- for all the reasons discussed in the stochastic parrots paper --- are prone to harmful outputs to boot.

>>
June 21, 2025 at 9:55 PM
Mission: Space is four industrial-grade human centrifuges built by Environmental Tectonics Corporation. Add development and all the drama between TWDC and ETC and you get your answer.
June 15, 2025 at 1:36 AM
Do I count?
May 15, 2025 at 1:43 AM
I *need* to go back
May 6, 2025 at 2:27 AM
Put GPT-Neo on it
May 4, 2025 at 9:06 PM
Waow (Based Based Based)
May 1, 2025 at 12:04 PM
Good luck!
April 26, 2025 at 9:53 PM