Anuroop
@nuroop.bsky.social
Code. Music. Video Games. Artificial Intelligence. No, not that one.
How LLMs work is a fact, so the extent of your agreement is moot.

The fact that SEO signals make their way into LLM training sets has more to do with the training data being “the internet” than anything else.

Gemini also kinda sucks when compared to the competition, so I wouldn’t use it as a yardstick.
June 12, 2025 at 6:11 PM
Duh, they’ll just ask the AI agent why their vibe-coded web app keeps getting DDoS’d and their API key stolen.
May 28, 2025 at 9:26 AM
The folks hyping up AI on LinkedIn don’t know what “doing it all” entails because they’ve never had to actually add any economic value to the world.
May 27, 2025 at 8:05 PM
Now, more and more of us are coming around to the idea that we need a method of abstracting reasoning in addition to semantics.
May 27, 2025 at 9:51 AM
You’re probably right. When the semantic mapping problem was cracked in the 2010s, some AI folks figured that, since computers could now be taught the meaning embedded within words, that would be sufficient to simulate broad-spectrum thinking the way humans do. That was incorrect, as we see now.
May 27, 2025 at 9:51 AM
An oversimplification. Yes, there is more to writing than just predicting the next word. But humans learn by forming neural links using observation and pattern recognition.

That we get close to human-like text on patterns alone has implications for how we define and judge creativity going forward.
May 27, 2025 at 9:32 AM
I’m familiar with the game, and the effect you’re talking about. I’ve noticed my VR headset has similar functionality when mapping out my room for room-scale stuff.
May 22, 2025 at 1:38 PM
Oh man, neural interfaces are wild. I remember reading offhand that you can just wire stuff to the nervous system and your brain will figure out how to make it go. Like, that was always a sci-fi concept that got hand-waved away, but apparently that’s how it actually works???
May 22, 2025 at 1:36 PM
AI has always been this constantly redefined term. It’s always shorthand for “the next unsolved computer science problem” until it’s cracked, and then we call it by its new name. Chess bots, optical character recognition, neural nets, now large language models.
May 22, 2025 at 1:28 PM
We got machines simping for humans before GTA6
May 22, 2025 at 1:20 PM
Best I can do is “you are absolutely not writing this book report in a sleep deprived fugue at 3am the night before it’s due”
May 22, 2025 at 1:15 PM
I’m sorry but that illustration looks like it’s a man with flaming genitalia.
May 22, 2025 at 1:06 PM
You should see the kind of things folks are putting into prompts now.

“You are a top tier solutions architect. Second to none, a master of your trade...”

They out here doing affirmations for the machine.
May 22, 2025 at 1:02 PM
Not to mention, if/when it gains enough acceptance, it just becomes yet another form of skill expression. I anticipate that in the near future, someone with significant competence in, and an eye for, their art form will find a way to use AI to good effect, and it will throw the conversation wide open again.
May 22, 2025 at 12:47 PM
Oof, AI-generated .svg files are a special flavor of awful.

I tried Vercel’s AI website template generator model and it generated the sort of trash code that is akin to the art and literature slop we are accustomed to seeing. But to the untrained eye it would have looked perfectly acceptable.
May 22, 2025 at 12:25 PM
Agreed, though I think the problem isn’t that they’re plagiarists, but that they’re lazy. If I said “code a social media app for me in the style of Instagram”, I’d get the same kind of slop that these writers get. Except they lack the taste to know what they’ve done is gonna make them look bad.
May 22, 2025 at 12:08 PM
Thing is, AI slop sucks cuz it’s actually a reflection of the user’s skill. I use my AI models to write and autofill code as per my instructions and then it works just fine and nobody’s complaining. But if I go hands-off in the way most folks seem to want to do with their AI, then I get slop.
May 22, 2025 at 12:03 PM
Recommend further reading on semantic maps, word vectors, embeddings, why Google Translate got better between 2016 and 2018, and Retrieval-Augmented Generation. It’ll also give you some insights into the limits of this tech and why folks like LeCun are saying we’ll move past LLMs in the next three years.
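(If a toy example helps make the word-vector bit concrete, here’s a rough sketch. The numbers are made up and real models use hundreds of dimensions, but the idea is that meaning becomes geometry, so related words land near each other.)

```python
# Toy illustration (not real model weights): semantically similar words end up
# as nearby vectors, which is the "semantic map" idea behind embeddings.
import numpy as np

# Hypothetical 4-dimensional embeddings; real models use hundreds of dimensions.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.7, 0.2]),
    "queen": np.array([0.8, 0.1, 0.6, 0.8]),
    "cat":   np.array([0.1, 0.9, 0.2, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means "pointing the same way", near 0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related meanings
print(cosine(embeddings["king"], embeddings["cat"]))    # lower: unrelated
```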
May 22, 2025 at 8:25 AM
The search augmentation is just so it can get its facts right without having to make something up. It’s not foolproof, cuz the AI can easily choose to ignore it, but it’s better than not doing that.
May 22, 2025 at 8:21 AM
An LLM doesn’t care if the output is true or correct or not, but whether it “looks” like the stuff it’s seen before. So it knows what reports about the Olympic Games look like and can generate one about the 2028 games that looks just like the real deal.
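(Toy sketch of that mechanism, not real model internals, and the continuation counts are invented: generation just keeps picking a statistically plausible next word given what came before. There is no step where truth gets checked.)

```python
# Toy sketch (not a real LLM): generation = repeatedly picking a plausible next
# word given the words so far. Nothing here checks whether the output is true.
import random

# Hypothetical "learned" continuation counts, standing in for model weights.
continuations = {
    ("the", "2028"): {"olympic": 8, "games": 2},
    ("2028", "olympic"): {"games": 10},
    ("olympic", "games"): {"in": 6, "opened": 4},
    ("games", "in"): {"los": 9, "paris": 1},
    ("in", "los"): {"angeles": 10},
}

def next_word(context):
    options = continuations.get(context, {})
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

text = ["the", "2028"]
while len(text) < 8:
    word = next_word(tuple(text[-2:]))
    if word is None:
        break
    text.append(word)

print(" ".join(text))  # plausible-looking, but "truth" never entered the picture
```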
May 22, 2025 at 8:21 AM
GPT models don’t judge content quality at runtime; they generate answers based on patterns they learned from training data. Search-augmented models assess quality using signals like relevance, recency, and authority to rank and select sources.
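(Purely illustrative sketch: the signal names and weights below are assumptions, not any actual vendor’s ranker, but they show where the “quality” judgment lives, in the retrieval layer rather than the model.)

```python
# Illustrative sketch only: how a search-augmented setup *might* score sources.
# The signal names and weights are assumptions, not any vendor's actual ranker.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    relevance: float  # query/document match, 0..1
    recency: float    # newer content scores higher, 0..1
    authority: float  # e.g. link-based or domain trust, 0..1

def score(s: Source) -> float:
    # Hypothetical weighted blend of the signals mentioned above.
    return 0.5 * s.relevance + 0.2 * s.recency + 0.3 * s.authority

sources = [
    Source("https://example.com/a", relevance=0.9, recency=0.4, authority=0.7),
    Source("https://example.com/b", relevance=0.6, recency=0.9, authority=0.5),
]

# Top-ranked sources get fed to the model as context; the model itself never
# judged their quality, the retrieval layer did.
for s in sorted(sources, key=score, reverse=True):
    print(round(score(s), 2), s.url)
```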
May 22, 2025 at 7:58 AM
Your lack of reading comprehension indicates I am wasting my time.
May 22, 2025 at 7:06 AM
That’s where you’re wrong. Answering engines do not need to use search engines for anything other than up-to-date information. Theoretically, if you had a body of knowledge that was unchanging over time, there would be no search engine involved. Ya know, like how GPT worked for the past 4 years.
May 22, 2025 at 7:03 AM
That’s… what I said.
May 22, 2025 at 7:00 AM