thomasht86.bsky.social
@thomasht86.bsky.social
Software Engineer @ vespa.ai
Interested in coding, sports and philosophy
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.

3/20
#BookChallenge
#Books
#BookSky 💙📚
#20daybookchallenge
December 3, 2024 at 7:26 AM
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.

I’m joining in with the book challenge :)

#BookChallenge
#Books
#BookSky 💙📚
#20daybookchallenge

2/20
December 1, 2024 at 6:31 AM
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.

I’m joining in with the book challenge :)

#BookChallenge
#Books
#BookSky 💙📚
#20daybookchallenge

1/20
November 30, 2024 at 9:28 AM
uv ❤️
November 29, 2024 at 12:11 PM
Is there a tool that lets me create a GitHub PR (private repo) with voice from my phone, or do I have to make one?
November 29, 2024 at 8:02 AM
Yes
November 29, 2024 at 6:32 AM
Reposted
Are there limits to what you can learn in a closed system? Do we need human feedback in training? Is scale all we need? Should we play language games? What even is "recursive self-improvement"?

Thoughts about this and more here:
arxiv.org/abs/2411.16905
Boundless Socratic Learning with Language Games
An agent trained within a closed system can master any desired capability, as long as the following three conditions hold: (a) it receives sufficiently informative and aligned feedback, (b) its covera...
arxiv.org
November 28, 2024 at 4:01 PM
Reposted
A librarian who previously worked at the British Library created a relatively small dataset of bsky posts, hundreds of times smaller than those of previous researchers, to help folks create toxicity filters and stuff.

So people bullied him & posted death threats.

He took it down.

Nice one, folks.
November 28, 2024 at 5:33 AM
Excited to try this! Smaller and better 🙌
The authors of ColPali trained a retrieval model based on SmolVLM 🤠 TL;DR:
- ColSmolVLM performs better than ColPali and DSE-Qwen2 on all English tasks
- ColSmolVLM is more memory efficient than ColQwen2 💗

Find the model here huggingface.co/vidore/colsm...
November 28, 2024 at 10:10 AM
Reposted
Releasing SmolVLM, a small 2-billion-parameter Vision+Language Model (VLM) built for on-device/in-browser inference with images/videos.

Outperforms all models at similar GPU RAM usage and token throughput

Blog post: huggingface.co/blog/smolvlm
November 26, 2024 at 4:58 PM
with this growth, how long before someone from @bsky.app will reach out to consider moving from opensearch to vespa 😊
November 26, 2024 at 4:27 AM
And a lot of the LLM bias might be a lot more subtle than this. Think LLM-assisted judgments/scoring 😬
GitHub Copilot output. A sad but fascinating alignment failure. Reveals hidden LLM biases by going outside their RLHF distribution
November 26, 2024 at 4:18 AM