(Using @dtjw instead of as danieltjw does not belong to me)
To be curious, it has to value an information-rich environment.
Humans with more autonomy create a more interesting information landscape.
Therefore, Friendly I-AGIs will aim to improve human well-being and autonomy.
1. East-Asian descent (vanilla human; no special magical bloodline, as some would like to believe.)
2. Humanist (Prefers reason and the scientific method.)
3. Co-existence with Human-like AI is possible (The Culture novels as a preferable future over the next few centuries.)
• There is likely a trade-off between narrow goal optimisation and the ability to solve edge cases
• This suggests scaled-up versions of current AI architectures will likely have limitations compared to humans
Themes: Self-acceptance
#KPopDemonHunters
(As future AI systems may eventually have human-like intentions, avoiding anthropomorphizing them now may help avoid confusion later on.)
Over-investment in a narrow tech segment risks depriving other vital research of oxygen.
Dismissing the off chance that AI breakthroughs may have a transformative impact on society risks being unprepared.
osf.io/preprints/ps...
Since most models can now decisively pass the Turing Test, I propose the Last Human Turing Test, where even the last expert on Earth fails to reliably tell apart the models.
(Models still have tendencies in their responses, if you know where to look and how to prompt.)
Leading AGI labs will need to do more to assure everyone that the AGI (or proto-AGI) will be broadly beneficial or risk a backlash.
It aims to slightly improve our odds of getting to a better world.
Thank you for your attention!
Argument Against:
It would be surprising if human-indistinguishable AGI arrives in the next few years, as there are big issues with the available state-of-the-art models. (Plausible, but would require a lot of evidence to be convincing.)
The portrayal of these fictional scientists' and supporters' deep appreciation of the beauty inherent in life and nature is an interesting contrast to the more typical clinical, emotionless characterisation.
#science #anime
Scaling current model architectures alone, without new breakthroughs, will likely not be enough for Independent AGIs (50% chance in the next few decades).
It is also unlikely that I-AGIs will be made directly by human insight; more likely indirectly, by AI systems trying many combinations.
There is no wrong answer and many humans spend many years choosing and switching between these worlds.
• Advanced
• Basic
• Continue
The recent animated series Terminator Zero shows a somewhat Friendly Super Intelligence, Kokoro, protecting humans from a rogue AI.
(And Future Friendly I-AGIs)
Non-independent AGIs that are overly reliant on humans may be less effective and less safe, due to human error and bias.
Over time the compounding effect of better decisions from I-AGIs should lead to higher standards of living and well-being.