sxlijin.bsky.social
@sxlijin.bsky.social
Oh, I agree, but the SO argument is what usually convinces people that it’s OK to rely on a fallible magic black box for answers.
June 4, 2025 at 4:33 PM
The biggest challenge for devs, I think, is the mindset shift: it’s really easy to say no; it’s a lot harder to say yes, learn the tooling, and discover how the limitations play out in practice than it is to just theorize about them.
June 2, 2025 at 11:26 PM
This isn’t any different than copy-pasting from StackOverflow or AskUbuntu or a random forum post though.
June 2, 2025 at 11:23 PM
Weeks absolutely not required. Days, maybe. I think of LLMs as souped-up search engines: you still have to learn how to use them and how to phrase things around their quirks, but for the most part you can point a human at one and they’ll figure out how to use it.
June 2, 2025 at 11:22 PM
AltTab - Windows alt-tab on macOS
alt-tab-macos.netlify.app
May 30, 2025 at 8:45 PM
AI/ML-backed products/features with the same brush, dismiss them as "it's all crap," and rely on a kneejerk emotional reaction instead of actually judging products/features on their individual merits.

It makes it easy to understand why the original Luddite movement happened.
May 30, 2025 at 8:44 PM
Totally agree that it's natural, but it's about more than people lying about what AI can do. AI/ML at this point is a super broad label that gets equally applied to Amazon's Rufus assistant, chatgpt.com, and Cursor the IDE. People who are (rightfully) skeptical then just discard all nuance and paint all
May 30, 2025 at 8:42 PM
> Specifically, everyone that puts forward a pro practically gets immediately dogpiled on, and the criticisms generally read as more feeling-based (emotional) than evidence-based (logical).
May 28, 2025 at 5:51 PM
> because there's so little acknowledgment of the pros, that discussion of tradeoffs in this channel feel generally unproductive. [...]
May 28, 2025 at 5:51 PM
I feel this so, so, so much!

Here's a comment I made in one of my community Slacks back in January:

> I will say, as a yea-sayer: although I've definitely noticed certain issues that come up when using LLMs, I feel particularly unwilling to discuss them in this specific channel [...]
May 28, 2025 at 5:51 PM