@plausiblyreliable.com
cuda is done with a wsl2-specific magic passthrough device that the runtime libs know how to use - do _not_ try to install any linux gpu drivers inside wsl (that will break it), but things that bundle their own cuda runtime like pytorch should just work out of the box (quick sanity check below)

for gui stuff it kinda speaks wayland now
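for the cuda bit, a quick sanity check from inside wsl2 (a minimal sketch, assuming a normal pip-installed pytorch):
```python
# Minimal sanity check that the WSL2 CUDA passthrough is working.
# Assumes PyTorch was installed normally inside WSL2 (pip install torch),
# so it brings its own bundled CUDA runtime - no driver install needed.
import torch

print(torch.cuda.is_available())           # True if the Windows-side driver is visible to WSL2
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # should be your Windows GPU
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())            # trivial kernel launch to confirm it really runs
```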
September 13, 2025 at 8:58 PM
never found a good way to make a disk visible to both windows and wsl and perform well from both - but one with a linux fs on it should be attachable to wsl like this learn.microsoft.com/en-us/window...
Get started mounting a Linux disk in WSL 2
Learn how to set up a disk mount in WSL 2 and how to access it.
learn.microsoft.com
September 13, 2025 at 8:50 PM
yeah wsl just sucks at this case
September 13, 2025 at 8:48 PM
the root fs on wsl2 should act just like a regular linux fs on a vm - because it is - but permissions _are_ pretty broken on wsl1 generally and when using the wsl2 9p mounts of windows drives
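if you want to see it for yourself, something like this shows the difference (rough sketch - the paths are placeholders, and whether /mnt/c honors chmod depends on your drvfs mount options):
```python
# Rough illustration of the permissions difference (paths are placeholders).
# On the WSL2 root fs (real ext4), chmod behaves like normal Linux;
# on a 9p/drvfs mount of a Windows drive, the mode you set may not stick.
import os, stat

for path in ("/home/me/perm_test", "/mnt/c/Temp/perm_test"):
    try:
        with open(path, "w") as f:
            f.write("test\n")
        os.chmod(path, 0o600)
        mode = stat.S_IMODE(os.stat(path).st_mode)
        print(f"{path}: requested 600, got {oct(mode)}")
    except OSError as e:
        print(f"{path}: {e}")
```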
September 13, 2025 at 8:46 PM
wsl2 is much better and _almost_ the same as a vm - now you mostly just need to remember that the windows fs mounts are not high-iops/mmap-friendly (don't try to run stuff directly off them) and it doesn't run an actual init by default (but *can* be configured to run systemd)
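the systemd bit is a one-line config, straight from the WSL docs - put it in /etc/wsl.conf inside the distro, then restart the distro with `wsl --shutdown`:
```ini
# /etc/wsl.conf
[boot]
systemd=true
```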
September 13, 2025 at 8:38 PM
I would just look for "post training", "supervised fine-tuning" (human-created example responses), "RLHF" (tuning on human raters' scores) - "alignment" is a lot more related to "AI Safety" stuff; sometimes it means things like getting the models to reject bad requests and sometimes it means AI doomerism
September 9, 2025 at 4:43 PM
The base models (rarely released anymore) almost certainly could be; the "personality" comes from the post training - additional steps at the end with examples in the target style and a bit of tuning by human raters scoring outputs
September 9, 2025 at 1:00 PM
For anything new now we're using modal and having it write back to our own S3
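Roughly the shape of it - a sketch with hypothetical names for the bucket, secret, and model call - is a Modal function with a GPU attached that writes straight back to your own bucket via boto3:
```python
# Sketch: run a job on a rented GPU via Modal, write results to our own S3.
# Bucket, secret, and the "model" step are hypothetical placeholders.
import modal

image = modal.Image.debian_slim().pip_install("boto3")
app = modal.App("gpu-job", image=image)

@app.function(gpu="A10G", secrets=[modal.Secret.from_name("my-aws-creds")])
def run_and_upload(key: str, payload: bytes) -> str:
    import boto3
    result = payload.upper()  # stand-in for the actual GPU model call
    boto3.client("s3").put_object(Bucket="my-results-bucket", Key=key, Body=result)
    return key

@app.local_entrypoint()
def main():
    print(run_and_upload.remote("outputs/example.txt", b"hello"))
```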
August 21, 2025 at 6:31 PM
IME anything on GPU, even small non-LLM models, is hard to run cost-effectively if you have low or difficult-to-predict utilization
August 21, 2025 at 6:29 PM
The first place I saw this was the GPT-4 technical report. arxiv.org/pdf/2303.08774 p.12
August 8, 2025 at 11:46 PM
One interesting result I've seen is that *base* models' (pure next token predictors) outputted probabilities match up pretty well with the likelihood of correctness and can kind of be interpreted as confidence scores, but after the post-training steps, especially RL, that stops working.
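A rough sketch of what "read the probability as a confidence score" looks like against a base model (the model name and the A/B/C/D setup here are just illustrative):
```python
# Sketch: treat a base model's next-token probability over answer letters as a
# confidence score (tends to be reasonably calibrated for base models; models
# that have been through RLHF much less so). Model choice is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in for a real base model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = ("Q: What is the capital of France?\n"
          "Choices: A) Lyon B) Paris C) Nice D) Lille\nAnswer:")
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]          # distribution over the next token
probs = torch.softmax(logits, dim=-1)

for letter in ["A", "B", "C", "D"]:
    tid = tok.encode(" " + letter)[0]          # leading space matters for GPT-2's BPE
    print(letter, f"{probs[tid].item():.3f}")  # higher probability ~ higher "confidence"
```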
August 8, 2025 at 11:41 PM
for the most part you should just be able to take existing web/html apps into it unmodified, but it also has some escape hatches to get at native stuff if you need to
August 5, 2025 at 5:50 PM
Might be looking for something like Tauri v2.tauri.app
Tauri 2.0
The cross-platform app building toolkit
v2.tauri.app
August 5, 2025 at 5:48 PM
They do have the option of just using claude code at API rates, fully usage based - but nobody really likes that either because you have no idea how much it will spend on a task ahead of time (and if you max out the limits subs are *still* much cheaper than API rates)
July 29, 2025 at 2:49 PM
I like the open source vibecoding tools like Cline, where you bring your own API keys, better - but paying the raw API prices can be rough; I use Cursor basically *for* the subsidy
July 9, 2025 at 2:45 AM
Nowadays, after training them on the whole Internet, they do a much shorter post-training phase with chat transcripts (outsourced human workers write these, usually for pennies) to make them chatbots out of the box (ChatGPT), but even those kinds are still fundamentally text completion systems
July 9, 2025 at 1:45 AM
The actual math part of an LLM is barely a screen full of code. The behavior really is all in the training data selection and prompting.
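For a sense of scale, here's roughly that screenful - one causal self-attention layer in plain numpy (illustrative only, not any particular model's code):
```python
# A bare-bones causal self-attention layer in numpy - roughly the core "math part".
# Everything interesting (the learned weights, tokenizer, training data) lives outside this.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv, Wo):
    # x: (seq_len, d_model); W*: (d_model, d_model) learned weight matrices
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[-1])
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9                       # each position can only look backwards
    return softmax(scores) @ v @ Wo

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=(8, d))                   # 8 "tokens" of dimension 16
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
print(causal_self_attention(x, Wq, Wk, Wv, Wo).shape)  # (8, 16)
```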
July 9, 2025 at 1:12 AM
They know what it currently is; they don't know what it used to be
July 9, 2025 at 1:08 AM
All of the chatbot ones do. Before chatbots "LLM" referred to large text auto completion systems trained on the Internet (e.g. GPT2&3). Since those can reliably complete all sorts of text, it was figured out that you could make them into chat bots by just prompting them with enough chat transcript.
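Something like this, where a plain completion model acts like a chatbot purely because the prompt is shaped like a chat transcript (the model name is just a stand-in for a base model):
```python
# A pure next-token predictor acting as a "chatbot" only because the prompt
# looks like a chat log. No chat training involved - it just keeps completing
# the text. Model name is an illustrative stand-in for a base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = (
    "The following is a conversation with a helpful assistant.\n"
    "User: What's a good name for a pet goldfish?\n"
    "Assistant:"
)
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```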
July 9, 2025 at 1:08 AM
Realistically though it's probably just as simple as this

bsky.app/profile/ceej...
it is kind of interesting that the language-based instruction of LLMs may be revealing certain truths hidden in the way we’ve communicated for decades, like how “politically incorrect” is interpreted to mean “openly racist”
Annnd @xai just deleted Musk's new prompt which told the Grok LLM to "not shy away from making claims which are politically incorrect, so long as they are well substantiated."

That prompt seemingly caused the LLM to become Nazi 4Chan, and has now been deleted, but other recent changes remain.
July 9, 2025 at 12:58 AM
I mean, they don't know anything *outside the context window* about themselves - obviously they know about their system prompt, that's just more input. And it's all from the prompts, not the model itself. If the wrong knowledge cutoff date is in the prompt then the answer about it will be wrong.
July 9, 2025 at 12:53 AM
It made it up on the spot, if you ask several times you'll get different answers, and there's no way for it to know at all what it would've been in the past, outside consulting web sources. Models themselves have no memory.
July 9, 2025 at 12:46 AM
We also don't know for sure that these are the live prompts. It is plausible that this is the only cause though: LLMs are known to be bad at conditions and negation - it doesn't work well to prompt them to respond one way in only some circumstances; if something is in the prompt at all, it always has effects
July 9, 2025 at 12:38 AM
It got that by looking at the news, not from self-introspection; another LLM would've found the same thing
July 9, 2025 at 12:31 AM