I wrote something on this (pre-OpenAI) with a simple prediction:
1. Where the ground truth is human judgment, AI favors defense.
2. Where the ground truth is facts in the world, AI favors offense.
(about the time spent back and forth between clipboard, whiteboard, blackboard & keyboard)
tecunningham.github.io/posts/2020-1...
1. If an AB test shows an effect of +2% (±1%) it’s very persuasive, but if it shows an effect of +50% (±1%) then the experiment was probably misconfigured, and it’s not at all persuasive.
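This is a standard two-hypothesis Bayesian update. A minimal sketch of the arithmetic in Python, where the prior width on true effects, the prior probability of a misconfigured experiment, and the diffuse "bug" distribution are all illustrative assumptions of mine, not numbers from the post:

```python
import math

def normal_pdf(x, sd):
    """Density of N(0, sd) at x."""
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def p_experiment_valid(observed, se=1.0, prior_sd=2.0, bug_sd=30.0, p_bug=0.1):
    """Posterior probability the experiment is configured correctly,
    given the observed effect (all values in percentage points).

    Assumptions (illustrative, not from the post):
      - valid experiment: true effect ~ N(0, prior_sd), noise ~ N(0, se),
        so observed ~ N(0, sqrt(prior_sd^2 + se^2))
      - misconfigured experiment: observed is diffuse, ~ N(0, bug_sd)
      - prior probability of misconfiguration: p_bug
    """
    like_valid = normal_pdf(observed, math.hypot(prior_sd, se))
    like_bug = normal_pdf(observed, bug_sd)
    return (1 - p_bug) * like_valid / ((1 - p_bug) * like_valid + p_bug * like_bug)

print(f"+2%:  P(valid) = {p_experiment_valid(2.0):.3f}")   # ~0.99: persuasive
print(f"+50%: P(valid) = {p_experiment_valid(50.0):.2e}")  # ~0: probably a bug
```

Under these assumptions the +2% reading is almost certainly a real experiment, while the +50% reading is so far outside the prior on plausible effects that the "misconfigured" hypothesis swallows essentially all the posterior mass.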
With applications to (1) experiment launch rules; (2) ranking weights in a recommender; and (3) allocating headcount in a company.
The common thread is that people are pretty good intuitive Bayesian reasoners, so just summarize the relevant evidence and let a human be the judge:
tecunningham.github.io/posts/2023-0...