LLMs model only one thing: that these words often appear together, that this sequence is in some sense likely.
Likelihood is not sufficient.
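(a minimal sketch of what "likely" means here, assuming a toy bigram model and an invented corpus; real LLMs are vastly bigger and more sophisticated, but the basic move is the same: score a word sequence by how often its pieces co-occur)

```python
# A toy bigram model: the tiniest possible "statistical word model".
# Sketch for illustration only; the corpus is invented, and no real
# LLM is this small, but the core move is the same: score a sequence
# of words by how often its adjacent pairs co-occur in the data.
from collections import Counter

corpus = (
    "the doctor is listening . the doctor is paying attention . "
    "the model is paying attention ."
).split()

pair_counts = Counter(zip(corpus, corpus[1:]))  # adjacent-pair counts
word_counts = Counter(corpus)

def sequence_likelihood(words):
    """Approximate P(sequence) as the product of P(next | current)."""
    p = 1.0
    for cur, nxt in zip(words, words[1:]):
        p *= pair_counts[(cur, nxt)] / word_counts[cur]
    return p

# These score as likely because the words co-occur in the corpus,
# not because anything is paying attention.
print(sequence_likelihood("the model is paying attention".split()))  # ~0.22
print(sequence_likelihood("the model is listening .".split()))       # ~0.11
```

note that the second sentence never even appears in the corpus; the model scores it as fairly likely anyway, and the score says nothing about whether it's true.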
It seems like you're saying the harmful trick is comparable to the experience of seeking medical care in our dysfunctional non-system.
people *feel* like it's listening.
it is not listening; it is a statistical word model.
people *feel* like it's paying attention.
it is not paying attention; it has no attention to pay.
it tricks them into trusting the output
in a large language model it does not mean the same thing
but those cues do not mean the same thing because the language model is not a person.
and the advice is often dangerously incoherent
they tell you somebody is paying attention to you, really listening, trying to understand with an open mind what you're experiencing.
those things are good because they are *indicators* of a good thing
where?
out there. in the world. you know, live. i'll be back later
People don't ask image generators to draw a picture of what's wrong with their body.
The reflexive assumption is that text = intention = mind = understanding
Stuff like Simple Wikipedia is a good example of trying to build an accessible context for making sense of things. And that's far better than statistical translation of jargon into plain language.