https://philpeople.org/profiles/vincent-carchidi
All opinions entirely my own.
I make the case that human beings exhibit a species-specific form of intellectual freedom, expressible through natural language, and this is likely an unreachable threshold for computational systems.
philpapers.org/rec/CARCBC
How the current way of training language models destroys any voice (and hope of good writing).
www.interconnects.ai/p/why-ai-wri...
I'd just suggest an additional problem for LLM writing beyond how they're trained: part of the oomph of writing, as Nathan discusses, is the person's voice. But that is a result of idiosyncratic motivations to use
Came across a piece that cites a real article I co-authored, but lists my name as Victoria Carchidi, who is a real but completely unrelated researcher.
Here is a thread of my feelings
www.realtimetechpocalypse.com/p/noam-choms...
And I keep thinking about how destructive an attitude this is for the elite of an advanced society to hold.
I saw something from Benedict Evans recently where he noticed that the term "AI slop" at some point shifted from its original meaning of "trash output" to "anything automated by an LLM."
It seems like having a barrage of even *better* outputs in areas like recruitment becomes slop
It's uh. Not working so well!
drive.google.com/file/d/14Sla...
This can quickly turn into a can of worms that I don't want to open, because many would likely say they do this already (or that humans don't do this).
I've been thinking recently that it also started getting misapplied in the 2023-present period.
(gyges may not agree with this part, just my hot take)
AI Winters are seen as being in the rearview mirror; deep learning might be capturing general principles of intelligence, but it's useful enough either way to put those concerns aside.
ojs.aaai.org/aimagazine/i...
philarchive.org/rec/KARAET-4
"Small open high quality sources are increasingly more valuable than large data collections of questionable provenance."
www.reuters.com/world/asia-p...
That said, I really wouldn't discount the general feeling of "why the hell do I have to pretend like LLMs are everything?"
But I worry about the number of people who swear by their reliance on it for self-enhancement, apparently unaware of its impacts on them.
I said my job depended on doing distinctive work. That was my selling point. If I started to sound like ChatGPT and turn out what it did, then how on earth could I justify doing it? What would that make me?