Researching artificial intelligence, mental representation, representational formats, concepts.
PhD 2021 @ Monash University
🏠 https://iwanrwilliams.wordpress.com
We argue that neither flat rejection nor straightforward endorsement is compelling. So how should we think about chatbot assertion?
[2/5]

We argue that previous attempts to do this – treating chatbots as asserters in a merely fictional sense, or holding that they only make "proxy"-assertions on behalf of humans – are unsatisfactory.
[3/5]

Take young children: toddlers lack some of the cognitive capacities exercised by adult asserters, but many features are partially present.
In this phase, a child's speech is not (exactly) assertion but it's not *not* assertion: they are proto-asserters!
[4/5]

Our take? We should think of current LLM-driven chatbots as proto-asserters.
[5/5]