@jeremygoodman.bsky.social
I asked Dan about this and now have a better sense of how he's thinking, but it's kind of complicated -- not unrelated to @benholguin.bsky.social's idea of Knowledge by Constraint though
October 18, 2025 at 12:02 AM
Also, to the extent that the henchmen can't take for granted that they both remember things in exactly the same way, you can run a parallel argument with memory knowledge in place of perceptual knowledge.
October 17, 2025 at 11:58 PM
Good Q! @harveylederman.bsky.social's paper argues perception is insufficient for CK that the other person even exists. That's relying on perception again, but you might think that, if perception can't yield CK, then the henchmen can't have CK of each other's existence/plans to begin with.
October 17, 2025 at 11:58 PM
Thanks! I still need to read Cohen's paper.
(Dan Greco replies to Lederman in his book w/ a model that "doesn’t include any possibilities where Alice and Bob are in different coarse-grained states of confidence" (p. 163). I don't see why that's legit, though, since it's a genuine possibility.)
October 17, 2025 at 7:37 PM
These models demonstrate the surprisingly weak assumptions needed for Lederman's argument! (They are much weaker than the assumptions of Williamson's more famous "anti-luminosity" argument against the possibility of infinitely iterated *intrapersonal* knowledge.)
October 17, 2025 at 6:09 PM
For example, even though Alice *knows* that the temperature is at most y+1, for all she knows, for all Bob knows, for all she knows, for all he knows, it's y+4. And so on.
Formally, <x,y,z> R_A <y+1,y,y+1> R_B <y+2,y+2,y+1> R_A <y+3,y+2,y+3> R_B <y+4,y+4,y+3> ...
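(If you want to check that chain mechanically, here's a quick Python sketch -- my own encoding of the thread's definitions, restricted to integer temperatures:)

```python
# Worlds are triples (temp, Alice's app, Bob's app); both apps are at
# most one degree off, and each accessibility relation fixes the
# agent's own reading, per the definitions elsewhere in the thread.

def in_W(w):
    x, y, z = w
    return abs(x - y) <= 1 and abs(x - z) <= 1

def R_A(w, v):  # at w, Alice can't rule out v
    return in_W(v) and v[1] == w[1]

def R_B(w, v):  # at w, Bob can't rule out v
    return in_W(v) and v[2] == w[2]

# The displayed chain with y = 0 (and x = z = 0 at the start):
chain = [(0, 0, 0), (1, 0, 1), (2, 2, 1), (3, 2, 3), (4, 4, 3)]
for R, w, v in zip([R_A, R_B, R_A, R_B], chain, chain[1:]):
    assert R(w, v), (w, v)
print("every step in the chain is licensed")
```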
October 17, 2025 at 6:09 PM
Here's the rub: from any possibility <x,y,z>, we can reach a possibility <x',y',z'> in which the temp x' is arbitrarily far from x using a finite number of steps of R_A and R_B. (The trick is to zig-zag between R_A and R_B.)
This means Alice and Bob have no non-trivial common knowledge of the temp!
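(Here's that zig-zag computed by brute force -- a toy Python encoding of my own, with integer temperatures and the space truncated so the search terminates:)

```python
from itertools import product

# Worlds (temp, Alice's app, Bob's app); the real space is unbounded,
# which is the whole point, so BOUND is just an artificial window.
BOUND = 10
W = [w for w in product(range(-BOUND, BOUND + 1), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def step(ws, slot):
    # One R_A step (slot 1: Alice's reading is fixed) or R_B step (slot 2).
    return {v for v in W for w in ws if v[slot] == w[slot]}

reachable = {(0, 0, 0)}
for n in range(1, 7):
    reachable = step(reachable, 1 if n % 2 else 2)  # alternate R_A, R_B
    temps = sorted({w[0] for w in reachable})
    print(f"after {n} steps, temp ranges over [{temps[0]}, {temps[-1]}]")

# The interval keeps widening until it hits the artificial BOUND; in the
# unbounded model it widens forever, so no non-trivial claim about the
# temperature holds at every world reachable by R_A/R_B zig-zags.
```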
October 17, 2025 at 6:09 PM
So Alice and Bob know that both apps are at most one degree off, and that their phones might be one degree off in either direction. Each also knows what their own phone reads, which is encoded in the relations R_A and R_B:
<x,y,z> R_A <x',y',z'> iff y'=y
and
<x,y,z> R_B <x',y',z'> iff z'=z
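(A minimal Python sketch of that, with my own names and integer temperatures only:)

```python
from itertools import product

# Worlds (temp, Alice's app, Bob's app) with both apps at most 1 degree off.
W = [w for w in product(range(-5, 6), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def R_A(w, v):  # <x,y,z> R_A <x',y',z'> iff y'=y
    return v[1] == w[1]

w = (1, 0, 1)  # temp 1, Alice's app reads 0, Bob's app reads 1
alice_temps = sorted({v[0] for v in W if R_A(w, v)})
print(alice_temps)  # [-1, 0, 1]: she knows the temp is within 1 of her reading
```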
October 17, 2025 at 6:09 PM
Now to Lederman's argument. Let W = {<x,y,z>: |x-y|≤1 and |x-z|≤1}. Here <x,y,z> is the possibility in which the temperature is x, Alice's app reads y, and Bob's app reads z. We're ignoring possibilities where their phones are broken, and so on, as that would just make common knowledge even harder.
October 17, 2025 at 6:09 PM
These models also allow us to characterize agents' knowledge about each others' knowledge, and hence to model common knowledge: w is a situation in which what Alice and Bob *commonly know* is that they're in some situation v that can be reached from w by some sequence of steps of R_A and R_B.
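(Computationally, what's commonly known at w is whatever holds throughout the reflexive-transitive closure of R_A ∪ R_B from w. A sketch of my own, on a finite integer truncation of the model:)

```python
from itertools import product

W = [w for w in product(range(-6, 7), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def closure(w0):
    """All worlds reachable from w0 by finite sequences of R_A/R_B steps."""
    seen, frontier = {w0}, {w0}
    while frontier:
        frontier = {v for v in W for u in frontier
                    if v[1] == u[1] or v[2] == u[2]} - seen
        seen |= frontier
    return seen

# The closure from <0,0,0> floods the whole (truncated) space, so the
# only thing "commonly known" there is the trivial proposition W itself:
print(len(closure((0, 0, 0))) == len(W))  # True
```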
October 17, 2025 at 6:09 PM
"w R_A v" means that, in w: for all Alice knows, she's in v. Likewise for Bob and R_B.
In this way we can model what different agents know in different situations: w is a situation in which Alice knows only that she's in some situation v such that w R_A v (and likewise for what Bob knows and R_B).
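(In Python terms, the standard Kripke clause -- again my own minimal sketch:)

```python
from itertools import product

W = [w for w in product(range(-5, 6), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]

def knows(R, w, prop):
    # In w, the agent knows prop iff prop holds at every v with w R v.
    return all(prop(v) for v in W if R(w, v))

R_A = lambda w, v: v[1] == w[1]  # Alice's accessibility relation

print(knows(R_A, (1, 0, 1), lambda v: v[0] <= 1))  # True: temp is at most y+1
print(knows(R_A, (1, 0, 1), lambda v: v[0] <= 0))  # False: it might be y+1
```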
October 17, 2025 at 6:09 PM
"w R_A v" means that, in w: for all Alice knows, she's in v. Likewise for Bob and R_B.
In this way we can model what different agents know in different situations: w is a situation in which Alice knows only that she's in some situation v such that w R_A v (and likewise for what Bob knows and R_B).
In this way we can model what different agents know in different situations: w is a situation in which Alice knows only that she's in some situation v such that w R_A v (and likewise for what Bob knows and R_B).
In other words, even though Alice knows that the temperature is at most y+1, for all she knows, for all Bob knows, for all she knows, for all he knows, it's y+4. Etc.
October 17, 2025 at 5:29 PM
But they don't have any non-trivial common knowledge about the temperature. Why? Because we can zig-zag with R_A and R_B to access possibilities with arbitrarily extreme temperatures. Formally, for any world <x,y,z>, we have:
<x,y,z>R_A<y+1,y,y+1>R_B<y+2,y+2,y+1>R_A<y+3,y+2,y+3>R_B<y+4,y+4,y+3>...
October 17, 2025 at 5:29 PM
Both R_A and R_B are equivalence relations, so individual knowledge obeys S5 (i.e., Alice knows exactly what she does and doesn't know, and Bob knows exactly what he does and doesn't know). They each know what their phone reads and that both apps are at most one degree off.
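(A brute-force check of the equivalence-relation claim, on an integer truncation of the model:)

```python
from itertools import product

W = [w for w in product(range(-3, 4), repeat=3)
     if abs(w[0] - w[1]) <= 1 and abs(w[0] - w[2]) <= 1]
R_A = lambda w, v: v[1] == w[1]  # same reading on Alice's app

assert all(R_A(w, w) for w in W)                          # reflexive
assert all(R_A(v, w) for w in W for v in W if R_A(w, v))  # symmetric
assert all(R_A(w, u) for w in W for v in W if R_A(w, v)
           for u in W if R_A(v, u))                       # transitive
print("R_A is an equivalence relation (and likewise R_B)")
```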
October 17, 2025 at 5:29 PM
Here's a model.
Let the space of possibilities be the set of triples <x,y,z> – representing temp, Alice's app, Bob's app – with |x-y| ≤ 1 and |x-z| ≤ 1.
We model each agent's knowledge using an accessibility relation:
<x,y,z>R_A<x',y',z'> iff y'=y
and
<x,y,z>R_B<x',y',z'> iff z'=z
October 17, 2025 at 5:29 PM
[oops – it actually doesn't make a difference, but I thought you were referring to Stewart Cohen! same issue though]
October 17, 2025 at 5:06 PM
No; at least, not in the Williamsonian sense that's relevant to KK and that Cohen is replying to. Our argument is perfectly compatible with "cliff-edge" knowledge: i.e., the thermometer reading n degrees, the temperature actually being n+1 degrees, and you knowing that it's at most n+1 degrees.
October 17, 2025 at 5:04 PM
Hi Matt! There's no margin-for-error assumption in Lederman's argument. It's compatible with KK (and indeed with S5) for individual knowledge. The interpersonal case is very different from the intrapersonal case.
October 17, 2025 at 4:58 PM
You can read our full review (without a paywall) @ philpapers.org/archive/GOOK.... And you can also check out Harvey's paper that inspired us here: philpapers.org/archive/LEDU...
October 17, 2025 at 2:43 AM
Instead, our brief review draws on recent work by @harveylederman.bsky.social, which argues that people aren't ever in a position to know as much as common knowledge demands. If that's right, then common knowledge can't do the work that Pinker wants it to in explaining social coordination.
October 17, 2025 at 2:43 AM
Our worry isn't that infinite layers of knowledge can't fit in finite brains. (We agree with Pinker that that concern rests on a contentious picture of how the mind works, one which we are happy to reject.)
October 17, 2025 at 2:43 AM
Common knowledge, in the relevant technical sense, is infinitely iterated interpersonal knowledge—hence the crucial ellipsis in the book's title. Do we ever manage such a feat?
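(In the standard formalization, writing Ep for "everyone knows p", common knowledge of p is the infinite conjunction Ep ∧ EEp ∧ EEEp ∧ ..., each further conjunct adding one more layer of interpersonal iteration.)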
October 17, 2025 at 2:43 AM
Among the book's many virtues, we most appreciated how Pinker centers experimental psychology in what is often a highly abstract and theoretical literature. In that interdisciplinary spirit, our review draws on work in epistemology, on whether common knowledge is really possible.
October 17, 2025 at 2:43 AM