It's buggy, inconsistent, makes everything harder to use due to nonsensical UI decisions. A complete mess. Shockingly bad, really. Major lag in the most basic operations, like scrolling a Finder window in icon view. Audio dropouts when going full screen in the Music app. Super buggy. 🤢
Truth: the behavior of currently available tech, grounded in an understanding of its technical underpinnings, its limitations, and real-world experience.
Hype: hypothetical future tech, speculation built on conjecture.
I have provided links to content from a premier tech site and a scientific study, plus quotes from an LLM itself.
Truth is not 'anti-hype'. It's truth.
'LLMs like GPT are pattern-matchers. They don’t know what words mean—they learn how words statistically relate to other words. Given a prompt, they predict the most likely continuation. This can produce breathtakingly fluent results, but it’s not the same as understanding.'
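A toy sketch of what 'predicting the most likely continuation' means in practice. This is a bigram counter, not a real neural LLM - the corpus and names are made up - so treat it only as an illustration of the statistical mechanic:

from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on billions of subword tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    # Return the statistically most likely continuation. No meaning involved,
    # just frequency.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (seen twice after "the", vs. once for "mat"/"fish")

The predictor never knows what 'cat' means; it only knows what tended to follow 'the' in its training text.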
ChatGPT:
'Yes. ...'
ChatGPT:
'...I’m not truly intelligent in the human sense. I don’t have consciousness, self-awareness, emotions, or the ability to understand experience like humans do. I also can’t learn from past interactions...'
The LLM said it, so it must be true.
'My responses come from patterns in the large dataset I was trained on, which includes text from books, articles, websites, and other sources, up until my knowledge cutoff (which is in 2023).'
LLM = assembly of PATTERNS OF WORDS based on a statistical dataset.
There. Is. No. Artificial. Intelligence.
Sure, there may be AI in a few decades, but right now it's all hype.
And - see the Ars Technica article above - confidence among tech enthusiasts is increasingly low.
This paper from this year looks into LLM emergence.
It differentiates between 'emergent properties' - e.g., being able to identify the outline of an object against an image background and make assumptions about what is behind it - and 'emergent intelligence'.
An LLM giving different answers to the same question today and tomorrow is faulty code.
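For context, a minimal sketch of where that run-to-run variation comes from: deployed LLMs usually sample the next token from a probability distribution rather than always taking the top choice. The token names and probabilities below are made up, and this is the general softmax-and-sample idea, not any vendor's actual decoder:

import math
import random

# Made-up scores the model assigns to candidate next tokens.
logits = {"cat": 2.0, "dog": 1.5, "mat": 0.5}

def sample_next(logits, temperature=1.0):
    # Softmax: turn scores into probabilities.
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    # Sample proportionally to probability -> different runs,
    # different answers, even for the identical prompt.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print([sample_next(logits) for _ in range(5)])  # e.g. ['cat', 'dog', 'cat', 'cat', 'mat']

Taking max(probs, key=probs.get) instead of sampling would give the same answer every run; the variation is a decoding setting.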
There are no neural networks comparable to a human brain.
When a program, especially a complex program, does not do what the programmer expected, that is called an error, not intelligence. Hype ≠ Reality.
If you type
I CAN THINK
into a document and then print out the page and look at it, that does not mean that your printer suddenly acquired the capability to think, and informed you about it.
An LLM is fundamentally no different from a spreadsheet program. It can do some basic calculations, but it cannot 'think' in any shape or form. Spreadsheet: 1+1=2; LLM: 1+1=3.4.
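To make the spreadsheet contrast concrete, a deliberately crude sketch. The 'LLM side' here is just a lookup over remembered completions - a made-up stand-in to illustrate the pattern-recall point, not actual model code:

# Spreadsheet-style arithmetic: an exact calculation.
def calc(a, b):
    return a + b  # 1 + 1 == 2, always

# Pattern-completion stand-in: answers with whatever string followed the
# prompt most often in its "training text" -- right or wrong.
seen_completions = {"1+1=": ["2", "2", "3.4"]}  # made-up data

def complete(prompt):
    candidates = seen_completions.get(prompt, ["?"])
    return max(set(candidates), key=candidates.count)

print(calc(1, 1))        # 2, computed
print(complete("1+1="))  # "2", recalled -- no arithmetic performed

The first function computes; the second only recalls what it has seen, which is the distinction the spreadsheet comparison is pointing at.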