I built one in 2019 (probably after 18 years). It took a bit of research, but with sites like PCPartPicker it’s really a breeze.
Recently learnt about Minisforum etc., and I’d definitely go the mini route if I didn’t need GPUs.
Would you suggest backing up the private key for future transfers to new YubiKeys (similar to backing up the GPG certify key to extend subkey expiry), or just generating one on the key and "never exposing it"? I guess it depends, but I’m curious about your workflow.
@filippo.abyssdomain.expert Curious if this is the best way to use age + YubiKeys?
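Rough sketch of the alternative I’m weighing, using the filippo.io/age Go library: instead of backing up the key that lives on the YubiKey, encrypt everything to multiple recipients, i.e. each YubiKey plus a software identity kept offline as the backup. A minimal sketch, assuming you’d add the hardware recipients printed by age-plugin-yubikey alongside it:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"filippo.io/age"
)

func main() {
	// Software backup identity: generate once, print, and store it offline
	// (paper, cold storage). It never touches the YubiKey.
	backup, err := age.GenerateX25519Identity()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(os.Stderr, "backup identity (keep offline):", backup.String())

	out, err := os.Create("note.txt.age")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Encrypt to the backup recipient. In a real setup you’d also pass the
	// age1yubikey1... recipient(s) reported by age-plugin-yubikey, so either
	// the hardware key or the offline backup can decrypt.
	w, err := age.Encrypt(out, backup.Recipient())
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.WriteString(w, "hello\n"); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```

With that pattern the on-key private key can stay "never exposed": enrolling a new YubiKey just means re-encrypting to an extra recipient, not transferring a key.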
I honestly think this book is the most readable compressed history of the chip industry, all the way from vacuum tubes to modern-day custom accelerator chips, including the geopolitics.
Not to drop the momentum: what should I read next?
The tokens are just waiting in various queues at the TCP (“network”) layer, as packets waiting to be sent (or eventually dropped).
The tokens pile up in a queue somewhere between the server and the browser, and when the TCP congestion clears up, all the tokens seem to arrive in “one big chunk”, like that flaky audio call. 😅
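You can see the effect locally with a toy Go program over loopback TCP (timings are arbitrary): the writer emits a “token” every 20 ms, the reader stalls for half a second as a stand-in for congestion, and the first Read then returns the whole backlog at once.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		for i := 0; i < 20; i++ {
			fmt.Fprintf(conn, "token%d ", i) // stream one "token" at a time
			time.Sleep(20 * time.Millisecond)
		}
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Simulate the stall: the tokens queue up in the kernel socket buffers.
	time.Sleep(500 * time.Millisecond)

	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	// On loopback this typically prints all ~20 tokens in a single read.
	fmt.Printf("first read: %d bytes: %q\n", n, buf[:n])
}
```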
The packaging didn’t really specify what it was. Fun little useless surprise at the end of putting it together.
Diverse tools definitely increase the problem complexity space, but as an LLM provider you probably want to solve that to satisfy more downstream customers.
Of course, I agree that most LLM consumers (businesses etc.) individually don't need a diverse set of tools.
Immediate thought: one might need a diverse set of tools in the “training run”, so as not to overfit to the same set.
It’s possibly more interesting if you can run the inner loop during training and “transfer” that learning to run outer loops during inference.
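To make the overfitting point concrete, a hypothetical sketch in Go (the tool names and pool are made up, not anyone’s actual training setup): per episode, sample a different subset from a larger tool pool so the model never sees one fixed toolset.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Hypothetical tool pool; real entries would be full schemas, not names.
var toolPool = []string{
	"web_search", "calculator", "sql_query", "code_exec",
	"file_read", "http_get", "calendar", "translate",
}

// sampleTools picks k distinct tools uniformly at random for one episode,
// so successive episodes expose the model to varying toolsets.
func sampleTools(k int) []string {
	perm := rand.Perm(len(toolPool))
	tools := make([]string, k)
	for i := range tools {
		tools[i] = toolPool[perm[i]]
	}
	return tools
}

func main() {
	for episode := 0; episode < 3; episode++ {
		fmt.Printf("episode %d tools: %v\n", episode, sampleTools(3))
	}
}
```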