Visiting lecturer at the Technion, founder of https://alter.org.il, Superforecaster, Pardee RAND graduate.
The deep skepticism about AI systems ever being generally capable, or even human-level in specific domains, doesn't seem to have changed over the past few years.
"the current configuration of economics/ wealth distribution is pretty solidly optimized to drive the wealthiest people in society batshit insane, which - to some extent - explains a lot of things you see around you"
"the current configuration of economics/ wealth distribution is pretty solidly optimized to drive the wealthiest people in society batshit insane, which - to some extent - explains a lot of things you see around you"
Checkmate, AI industry!
(The filter exerts pressure on the water level. It doesn't understand that. But then, I'd bet most humans wouldn't realize this either.)
See the full article here: www.cell.com/patterns/ful...
But if your concern reliably leads to more people being dead because doctors aren't using new technology, you're doing ethics wrong!
But after that, it stays at the same URL - which does not work when loaded manually, so you need to navigate away.
- collapsible left bar
- allow customizing column widths, to fit more than 2 on the screen
Also, bug report - the column shown here is broken, and I can't remove/close it.
This is seen, for example, by contrasting the types of errors found - not just different phrasing, but substantively different analyses discussing different issues - when asking the two substantively identical questions below.
(Contra their recent commitment under the EU AI Act; "AI-powered tools" are not meaningful *human* oversight.)
ihl-databases.icrc.org/en/customary...
Following the meeting this past Wednesday, they have one month to present the final plan to the health committee. Wish us luck!
In fact, @meltemdaysal.bsky.social et al. showed that children's minor illnesses have significant impacts lasting decades. Reducing "routine" disease is a big deal!
cepr.org/voxeu/column...
Level 1: vague safety claims + no docs = performative.
Level 5: lifecycle-wide supervision, transparent mitigations, and adaptive review.
First, we need to know what is being done - so we provide a schema for documenting supervision claims, linking risks to actual control or oversight strategies.
If someone says "oversight" without explaining all of this, they are irresponsibly safety-washing.
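To make that concrete, here is a minimal sketch of what one documented supervision claim could look like. The field names are illustrative assumptions, not the exact schema from the paper:

```python
# A minimal, hypothetical instance of a supervision-claim record.
# Field names are illustrative assumptions, not the schema from the paper.
from dataclasses import dataclass
from typing import Literal

@dataclass
class SupervisionClaim:
    risk: str                                   # the concrete failure being addressed
    mechanism: Literal["control", "oversight"]  # ex-ante intervention vs. ex-post supervision
    responsible: str                            # the human or body doing the supervising
    timing: Literal["pre-deployment", "real-time", "post-hoc"]
    evidence: str                               # how we would know the mechanism actually works

claim = SupervisionClaim(
    risk="model recommends an unsafe medication dose",
    mechanism="control",
    responsible="attending physician",
    timing="real-time",
    evidence="physician must approve each order; approvals are logged and audited",
)
print(claim)
```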
If you do it naively, you'll get failures of both oversight and control - like a rubber-stamp UI, an overwhelmed operator, and/or decisions too fast for humans to understand or fix.
- Control is real-time or ex-ante intervention. A system does what you say.
- Oversight is policy or ex-post supervision. A system does what it does, and you watch, audit, or correct.
And critically, preventative oversight requires control.
• In the EU AI Act
• In industry risk docs
• In safety debates
But there’s often a fundamental confusion: people conflate oversight with control. That confusion ignores key challenges, and hides where oversight can fail.