AI-Homelab/ Simon
ai-homelab.bsky.social
Teacher, PICTS School Admin, Student in MAS Didactics of media and computer science (UZH, PHSZ, HSLU), Techno Optimist, tinkerer, ML & Server Build Youtuber

https://youtube.com/@ai-homelab?si=i98REuQ1j5N-e02n
Hey Giacomo

Sorry - I didn't check Bluesky for a long time. This semester of my masters is a bit busy. If you're still interested in a discussion about AI support for students: Let's talk. =)
May 16, 2025 at 11:40 AM
Now for the second question: Sorry to disappoint, but I am not sure. I mean, I use GGUF and ExLlama2. And I think ExLlama2 uses the transformers library to accelerate multi-GPU inference.
February 3, 2025 at 2:47 PM
Sorry, I need to use Bluesky more regularly. 🙈

I run Windows because most viewers do. But I plan to add a third SSD with Linux, as most workflows that aren't yet adopted by Pinokio Computer are easier to get running there.
February 3, 2025 at 2:45 PM
is of utmost importance.
So let's not repeat the disaster with the reflection LM. 🙈
December 4, 2024 at 12:54 PM
Me 😬🫡

But I am only a small YouTuber in the AI space with only around 2,700 subs. I create AI home servers.
November 29, 2024 at 10:11 AM
VRAM isn't a big problem here. With such a thinking model it's more about the time it will take me to do it manually. 😅

But true, FP16 will surely deliver some improvement. But I guess I am going to write a Python script to automate the "question-answer part".
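The "question-answer" automation mentioned here could look something like this: a minimal sketch that loops over a fixed question set, sends each one to a locally hosted model, and collects the answers for later grading. The endpoint URL, model name, and sample questions are placeholder assumptions, not details from the chat; it assumes a local server exposing an OpenAI-compatible chat-completions API (as llama.cpp's server does).

```python
import json
import urllib.request

# Placeholder question set for the local evaluation run.
QUESTIONS = [
    "What is the capital of Switzerland?",
    "Explain VRAM in one sentence.",
]

def ask_local_model(question: str,
                    url: str = "http://localhost:8080/v1/chat/completions",
                    model: str = "local-model") -> str:
    """Send one question to an OpenAI-compatible local server (hypothetical endpoint)."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

def run_eval(questions, ask=ask_local_model):
    """Collect (question, answer) pairs; `ask` is injectable for testing."""
    return [(q, ask(q)) for q in questions]

if __name__ == "__main__":
    for q, a in run_eval(QUESTIONS):
        print(f"Q: {q}\nA: {a}\n")
```

The model-calling function is passed in as a parameter so the loop can be tested (or swapped to a different backend) without a running server.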
November 29, 2024 at 6:19 AM
Any guess on how bad the degradation will be? I plan to test it locally with my own set of questions. I am not yet sure if I want to test it in FP16, q_8, or q_4.
November 28, 2024 at 8:29 PM
😂 👌 Poor Clippy.
November 26, 2024 at 7:25 PM
Would be funny to see what your AI assistant would think about being compared to Clippy. 😅🙈

But yeah, computer use will be awesome once its failure rate gets lower. But I haven't yet tried the mode from Anthropic, to be honest. ✌️
November 26, 2024 at 6:50 PM
Well, I mean, no lecture is really guarded. You don't need to show a student ID to get into a lecture. And there are lectures for a broader audience on some evenings.
November 26, 2024 at 6:35 PM
Once again casually dropping a source of educational gold. Thank you for sharing! Open education is a gift! 🫡
November 26, 2024 at 6:36 AM
I can't afford any of these, but the L40S seems to be the better choice on paper. Newer architecture, more tensor cores, more and higher-clocked CUDA cores, better memory bandwidth(?)
November 25, 2024 at 10:06 PM
In any case: Good luck! Not sure if any player in academia trained on such a huge set.
November 25, 2024 at 1:09 PM
I can imagine that. 🙈
November 25, 2024 at 1:08 PM
Wow... Are you preparing to train on that amount of tokens? 😅
November 25, 2024 at 12:28 PM
One more reason to switch to this platform. It seems even the API is open here. We could use the data to create our own open models (in theory).
November 25, 2024 at 8:30 AM