kilian
@kiliantw.bsky.social
Not your regular geek
So on brand!
November 18, 2025 at 11:34 PM
Have they really broken up, though? Or is it more like open marriage is what's needed to keep those cloud relationships alive, now? 😁
November 3, 2025 at 4:16 PM
Does it say anything about where those systems will be physically hosted? Given that OCI is again in the loop, I guess that it could mean Oracle is "buying" the hardware and hosting those 100k GPUs in OCI, with DoE paying a monthly rent for dedicated access.
October 28, 2025 at 6:36 PM
And this "The system is co-developed by AMD, Hewlett Packard Enterprise, Oracle Cloud Infrastructure and Oak Ridge National Laboratory." prompts the question of OCI's role, here, too... 🤔
October 28, 2025 at 12:06 AM
Yes, I suspect the "hotel bar" plays a more important role in the decision to attend conferences than one may be ready to admit.
October 14, 2025 at 3:38 PM
Or (unpopular opinion), maybe it's time to question the relevance of giant in-person conferences in 2025, drawing 15k+ people to one city (with all that environmental impact), only to check email while jet-lagged speakers present slides already online, or attend meetings that could have been emails?
October 14, 2025 at 3:15 PM
Well, no-context benchmarks are like unit-less graphs with truncated axes: it's marketing 101. What do you expect? :/
October 14, 2025 at 2:23 PM
> multiple hyperscalers do this to optimize every level of their data center infrastructure.

And many failed, and are now coming back to off-the-shelf solutions.
September 5, 2025 at 3:08 PM
When you consider the number of *nodes* in those "AI servers" (which are actually rack-scale networked systems), going from NVL144 to NVL576 (a 4x increase in system size alone, before any generational perf. increase) is just a 2.7x price bump. $/GPU goes from $22k to $15k.
Pretty good deal! :D
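The per-GPU figures above check out with a quick back-of-the-envelope calculation. A minimal sketch, where the absolute system prices are assumptions chosen only to match the quoted ~$22k/GPU starting point and the stated ~2.7x price bump:

```python
# Back-of-the-envelope check of the $/GPU claim above.
# Absolute prices are illustrative assumptions, not vendor figures.
nvl144_gpus, nvl576_gpus = 144, 576

nvl144_price = nvl144_gpus * 22_000   # assumed total: ~$3.17M at $22k/GPU
nvl576_price = nvl144_price * 2.7     # the stated ~2.7x price bump

per_gpu_small = nvl144_price / nvl144_gpus   # $22.0k by construction
per_gpu_large = nvl576_price / nvl576_gpus   # ~$14.9k

print(f"${per_gpu_small/1000:.1f}k vs ${per_gpu_large/1000:.1f}k per GPU")
```

So a 4x jump in GPU count at 2.7x the price does land right around the $15k/GPU quoted.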
August 20, 2025 at 9:55 PM
But then, how would the vague, prestidigitation-filled marketing tactics work? 🤔
August 19, 2025 at 8:30 PM
... several million dollars per _rack_.
An NVL72 rack is still 18 independent servers (OS- and kernel-wise).
August 14, 2025 at 7:44 PM
Funny how, despite all their learning capabilities, LLMs are completely incapable of learning that making things up is not an optimal pathway.

Oh wait, maybe that's simply because they have no way to evaluate if they're making things up or not.
August 9, 2025 at 3:18 AM
Building your own silicon from scratch to compete with companies with a 30yr head start? Who could have predicted?
August 8, 2025 at 12:48 AM