petar
@petarov.bsky.social
dev, futurist and neuropsy enthusiast, Bulgarian
Gamedev, but that was a long time ago. Nowadays I'm more curious about how to solve problems.
August 1, 2025 at 7:35 PM
It was a great book. Especially later on and the parts about how we built modern economics. A lot of the details were brutal but really eye-opening.
July 15, 2025 at 9:40 PM
Great. Just another Verschwörungstheoretiker (conspiracy theorist).
July 15, 2025 at 9:35 PM
What Weinstein refers to as a "Lagrangian" is incomplete and depends on the undefined SHIAB operator. In other words: useless. youtu.be/jz7Trp5rTOY?...
Eric Weinstein Walks Into a Bar...
YouTube video by Professor Dave Explains
July 15, 2025 at 11:58 AM
She's a clickbaiter. All she cares about is YT money, let's not kid ourselves.
July 15, 2025 at 10:22 AM
I only just learned about this. Respect to you, Sean, for doing this.
July 13, 2025 at 10:09 PM
It all started with this recent "private posts leak" in #pixelfed as described here: fokus.cool/2025/03/25/p...

Private posts meant only for a Mastodon account's followers were visible on Pixelfed. This is bad.
Pixelfed leaks private posts from other Fediverse instances - fiona fokus
March 30, 2025 at 10:32 PM
Looks majestic! I hope it all goes well. 🤞
March 30, 2025 at 10:26 AM
My current concerns are price and time. Can I vibe-code a project and keep it cost-effective in terms of end price ($ for credits) while investing less dev time? I completely rule out #vibecode for huge codebases with frequent features and lots of collaborators. I don't see it working unless modularized.
March 29, 2025 at 10:56 AM
I don't think 1-5 t/sec is an option. Your point is good though: bigger models are in fact quite capable. Like I said, my goal is to find out whether there could be a compromise solution for on-premises scenarios. Thanks.
March 25, 2025 at 2:09 PM
True. I'm thinking of an on-premises product, so I can't expect top hardware from customers, and I'm trying to work out the minimum hardware required for inference, i.e. using fine-tuned models trained on data that covers my niche use case. However, even with smaller parameter counts, inference can be quite resource-hungry.
March 25, 2025 at 2:09 PM