rocket explodes
we say, “never again!”
so we make it really hard to build rockets
FWIW I think this effect is most insidious in healthcare technology research, not rocketry
I just think we put too much energy into preventing the negative scenarios we actually see occur, to the point where there’s an unseen opportunity cost that may be bigger than the cost of the prevented harms
States/firms should experiment with rolling that back, though maybe this is not that
only disagree that that moderation team has to be:
(1) just one team
(2) human
distributed/composable moderation w/ LLMs FTW!
https://bsky.app/profile/danb.bsky.social/post/3juysj2d6mv27
Would be easy enough to configure an LLM to auto-block/mute any text/images that seem benign to the general public but read as aggressive to you personally (sketch below).
Most people don’t care/know about lamps but for those who do lamp recognition is trivial
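For the curious, a minimal sketch of what that per-user filter could look like in Python. Everything here is hypothetical (the prompt text and the `llm_verdict` helper are made up for illustration), and the LLM call is faked with a keyword check so the example runs standalone:

```python
# A minimal sketch of per-user, composable LLM moderation. Hypothetical:
# llm_verdict() stands in for a real model call, faked here with a
# keyword check so the sketch is self-contained and runnable.

PERSONAL_FILTER_PROMPT = (
    "You moderate for one specific user. Posts about lamps look benign "
    "to most people but read as aggressive to this user. "
    "Reply BLOCK or ALLOW.\n\nPost: {post}"
)

def llm_verdict(post: str) -> str:
    # Real version: send PERSONAL_FILTER_PROMPT.format(post=post) to any
    # chat-completion API. Faked so the example runs offline.
    return "BLOCK" if "lamp" in post.lower() else "ALLOW"

def filter_feed(posts: list[str]) -> list[str]:
    """Each user runs their own filter; no single central human team."""
    return [p for p in posts if llm_verdict(p).strip().upper() != "BLOCK"]

if __name__ == "__main__":
    feed = ["nice weather today", "check out my new lamp"]
    print(filter_feed(feed))  # -> ['nice weather today']
```

The point is the shape, not the stub: because the filter is just a prompt, each user can compose as many of these as they want, which is what makes the moderation distributed rather than centralized.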
So maybe they were securely random, which would be fine, but if they were just UUIDs or something, that’s no bueno
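To make the distinction concrete, a quick Python sketch of why “just UUIDs” can be a problem while securely random tokens are fine (which flavor of UUID matters a lot):

```python
import secrets
import uuid

# UUIDv1 embeds the host's MAC address plus a timestamp, so successive
# values are largely guessable -- fine as an identifier, bad as a secret.
predictable = uuid.uuid1()

# UUIDv4 is random (CPython draws it from os.urandom), but the UUID spec
# doesn't require a cryptographic RNG, so don't bank on it everywhere.
probably_fine = uuid.uuid4()

# For anything that must be unguessable (reset links, session tokens),
# reach for the secrets module: 32 bytes of CSPRNG entropy here.
unguessable = secrets.token_urlsafe(32)

print(predictable, probably_fine, unguessable, sep="\n")
```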
if (a) knowledge is a remix of prior knowledge (ie it’s all just abstractions built on prior abstractions/observations)
and (b) knowledge exists relative to some observer and not to some global reference frame (ie we’re just labeling abstractions for each other)
then LLMs have what it takes
“forthcoming” in that it will be proven or commonly accepted before long by someone somewhere
an exploration of the idea from a couple years back:
https://a11i.substack.com/p/synthetic-synthetic-creativity