Justin
jlee03.bsky.social
Justin
@jlee03.bsky.social
Not to poop too much on people who devote their lives researching these things (I do not), but saying so-and-so researcher knows more about the mind than Joe Schmo is like saying this puddle resembles the ocean more than this cup.
March 16, 2025 at 2:56 AM
Then there's Hemingway. Arthur Miller. Salinger, unintentionally. Wiesel, largely from his activism.
I guess this means my expiration date is around the 80s-90s.
February 26, 2025 at 1:49 AM
What a thinker. I imagine most writers get famous through adaptations to film / TV. Also "as great" is a high bar to pass.
My suspicion is that writers are shut-ins and don't like attention.
In the olden days of vain writers: Tom Wolfe and Norman Mailer.
February 26, 2025 at 1:49 AM
There's an interesting book from a developer of Deep Blue, the computer that beat Kasparov in chess in 1997. He added a preface in 2022 when the AI boom started. You may find it an interesting read.
He essentially calls Deep Blue a very specialized tool that definitely cannot be used in warfare.
Behind Deep Blue
The riveting quest to construct the machine that would take on the world’s greatest human chess player—told by the man who built itOn May 11, 1997, millions worldwide heard news of a stunning victory,...
www.google.com
January 24, 2025 at 8:13 PM
They can't even do math. AFAIK ChatGPT only does math when a developer codes in, "Hey, does this look like an equation to you?" and then the "regular code" takes over and outputs the result.
AI will have great impact, but if it's used to replace the "softer" skills, service will be worse.
January 24, 2025 at 7:59 PM
Thus "AI" will work very, very well for data-heavy industries ex. medicine and law.
For fields that are not data-heavy or have no formal concept of "data", I'm not sure what the implications will be.
I have written AI applications; there is a very poor understanding of logic.
January 24, 2025 at 7:59 PM
I just think people don't really know what it is, and I know this when people say "AI" instead of "machine learning".
AFAIK "AI" is very complicated vector math. Coupled with great silicon chips, machines do multivariable math very fast that produces the likeliest result given an input.
January 24, 2025 at 7:59 PM
Most cynical: as others said, free testing, and if they do find a secret sauce, then they can just private it.
January 21, 2025 at 2:10 AM
My least cynical is fairly optimistic: it's an intellectual trade, and knowledge is improved through peer review and sharing. Open source works for a reason. Also, code isn't like traditional stock: Facebook's code isn't special, its resources and promotion are.
January 21, 2025 at 2:10 AM
To show their stockholders that they're hip to this new AI thing. And that they're not losing to OpenAI (which recently rolled out Tasks <-> bit like Google Calendar).
January 16, 2025 at 10:45 PM
but they're also disincentivized to figure things out; they're happy that the agents produce coherent English in the first place, they're not interested in how Microsoft is going to use them.
Well, actually, with all the money, they probably are now, thus they're probably deeply unhappy.
January 8, 2025 at 5:44 AM
As a note, I'm a programmer by trade, and I have dabbled with these agents, but by no means am I an expert. That being said, from my observations, I think there are no experts ATM except for the really smart math people who understand how to make the NLP work
January 8, 2025 at 5:44 AM
Then there's the whole, "Oh well the agent broke this thing, but that's not stopping money from coming in" (again, this is EXTREME cynicism speaking).
January 8, 2025 at 5:44 AM
So, tentatively, I say No.
But, from a political perspective, the answer inclines towards Yes. The field is so new, that there are no tests for determining what is a "safe" agent and - this is my cynicism - software companies are not incentivized, ATM, to build these tests.
January 8, 2025 at 5:44 AM
This is a huge range of inputs. Compare this to BlueSky, where I can only interact through these textboxes and buttons - this is a substantially smaller set of inputs.
But, because "interpret" is really a complex statistical model, my feeling is that it can be "debugged".
January 8, 2025 at 5:44 AM
Now, the elephant in the room: the "figures out" part. Is this more dangerous than, say, a simple UI form? The answer is, probably. The whole use of the agent is to take English (or, your language of preference) in any format, any grammar and "interpret" them.
January 8, 2025 at 5:44 AM
to make AI "useful".)
But even then, this can be warded by probably more software. Again, the agent is but an interface, so you can code guardrails before a prompt is even sent, or code guardrails in the post-interpretative step.
January 8, 2025 at 5:44 AM
However, as others pointed out, it depends on how much data the bot can access. Let's say, goodness, the agent knows your SSN. Through clever prompt engineering, an attacker can say, Hey, send my SSN to my nana, her email is ---. (I wouldn't be surprised if this came to be in the current gold rush
January 8, 2025 at 5:44 AM
These scripts are written in a programming language. These scripts are as safe as the scripts written today. (I guess I should put "safe" in quotes then.)
So the agent is really just an interface of what your "intent" is.
January 8, 2025 at 5:44 AM
This is a great question, and my response for now is, No. But, it's also complicated.
From my limited experience, the agent takes your prompt, "figures out" what to do with it, and then decides whether to kick off certain scripts.
January 8, 2025 at 5:44 AM