I guess this means my expiration date is around the 80s-90s.
My suspicion is that writers are shut-ins and don't like attention.
In the olden days there were vain writers: Tom Wolfe and Norman Mailer.
He essentially calls Deep Blue a very specialized tool that definitely cannot be used in warfare.
AI will have great impact, but if it's used to replace the "softer" skills, service will be worse.
For fields that are not data-heavy or have no formal concept of "data", I'm not sure what the implications will be.
I have written AI applications; the models have a very poor grasp of logic.
AFAIK "AI" is very complicated vector math. Coupled with great silicon chips, machines perform multivariable math very fast, producing the likeliest result for a given input.
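To make "vector math that produces the likeliest result" concrete, here is a toy sketch: score an input vector against a weight vector per label, then take a softmax and pick the most probable label. The labels and weights are invented purely for illustration, not from any real model.

```python
import math

# Toy "model": one made-up weight vector per possible output label.
WEIGHTS = {
    "cat": [2.0, -1.0, 0.5],
    "dog": [0.5, 1.5, -0.5],
    "fish": [-1.0, 0.0, 2.0],
}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def likeliest(features):
    # Dot product of the input vector with each label's weight vector...
    labels = list(WEIGHTS)
    scores = [sum(f * w for f, w in zip(features, WEIGHTS[lbl]))
              for lbl in labels]
    # ...then pick the label with the highest probability.
    probs = softmax(scores)
    return labels[max(range(len(labels)), key=probs.__getitem__)]
```

Real networks stack many such layers and learn the weights from data, but the core operation is the same: fast multiply-accumulate over vectors, then pick the likeliest output.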
Well, actually, with all the money, they probably are now, so they're probably deeply unhappy.
But, from a political perspective, the answer inclines towards Yes. The field is so new that there are no tests for determining what a "safe" agent is, and - this is my cynicism - software companies are not incentivized, ATM, to build those tests.
But, because "interpret" really means running a complex statistical model, my feeling is that it can be "debugged".
But even then, this can probably be warded off with more software. Again, the agent is but an interface, so you can code guardrails before a prompt is even sent, or guardrails in the post-interpretation step.
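A minimal sketch of those two guardrail points, one before the prompt goes out and one after the model's interpretation comes back. Everything here (the blocklist, the action whitelist, the function names) is hypothetical, not from any real framework:

```python
# Hypothetical guardrail layer around an agent. BLOCKED_TOPICS and
# ALLOWED_ACTIONS are illustrative placeholders.
BLOCKED_TOPICS = {"credentials", "rm -rf"}
ALLOWED_ACTIONS = {"search", "summarize"}

def pre_prompt_guard(prompt):
    # Guardrail BEFORE the prompt is even sent: refuse disallowed content.
    if any(topic in prompt for topic in BLOCKED_TOPICS):
        raise ValueError("prompt rejected by pre-send guardrail")
    return prompt

def post_interpretation_guard(action):
    # Guardrail in the post-interpretation step: only whitelisted
    # actions are allowed to actually run.
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} rejected by post-step guardrail")
    return action
```

The point is that neither check depends on the model itself: both are ordinary code sitting on either side of the statistical black box.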
So the agent is really just an interface of what your "intent" is.
From my limited experience, the agent takes your prompt, "figures out" what to do with it, and then decides whether to kick off certain scripts.
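That prompt → "figure out" → maybe-run-a-script flow can be sketched as below. In a real agent the "figure out" step is a language model call; a keyword check stands in for it here, and the intents and handler functions are invented for illustration:

```python
# Hypothetical scripts the agent can kick off.
def run_backup():
    return "backup started"

def run_report():
    return "report generated"

HANDLERS = {"backup": run_backup, "report": run_report}

def figure_out_intent(prompt):
    # Stand-in for the model's interpretation of the prompt.
    for intent in HANDLERS:
        if intent in prompt.lower():
            return intent
    return None

def agent(prompt):
    # The agent decides whether to kick off a script at all.
    intent = figure_out_intent(prompt)
    if intent is None:
        return "no action taken"
    return HANDLERS[intent]()
```

Seen this way, the agent really is just an interface: the prompt expresses intent, and ordinary code decides what, if anything, gets executed.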