John V. Kane
@uptonorwell.bsky.social

Political Scientist. Professor at NYU’s Center for Global Affairs. Experiments, data analysis, guitar, drums, fan of comedy. Make guides for @statacorp.bsky.social users. Two boys and exhausted all the time. More at www.johnvkane.com

Pinned
🚨It's finally out AND #OpenAccess!!!

Do you do survey experiments? This article is for you! 7 things that increase the risk of null/non-significant results & how to detect/prevent them. It's one of my fav things I've ever written so I hope you enjoy ☺️ #polisky

cup.org/3OQhKNv
More than meets the ITT: A guide for anticipating and investigating nonsignificant results in survey experiments | Journal of Experimental Political Science | Cambridge Core

Thanks! The model controls for age, but I didn’t separately look at how party interacts with age. I’ll try to give it a look, though, thanks!

Reposted by David Darmofal

Are Americans polarized in their attitudes toward higher education? 🤔

Using some @electionstudies.bsky.social data, the answer seems to be: Yes, but not dramatically so, and there's a tendency for more experience with higher ed. to be associated w/ more positive ratings. 👍

Exactly. One thing I keep thinking is that (maybe?) lower approval in the public serves to lower the costs of defection within the GOP. Get below 35% and I’d expect more Liz Cheneys, Mitt Romneys, and MTGs. They’ve been there all along, but didn’t want to alienate the base.

I will find it hard to pay much attention to his approval rating until it dips below this 35% mark. To me, that would signal a true shift.

But it still might be unlikely. For reference, Richard Nixon—Nixon!—left office at 24%. That’s probably the real “floor,” not 0%.

Reposted by John Kane

This is good, and we need more of the ~35% of Americans who are wedded to Donald Trump even still to have the same eureka moment as this man.
Three-time Trump voter: I'm looking at this awful picture of the Obamas. What an embarrassment to our country. He is not worthy of the presidency. He takes bribes blatantly, and now he's being a racist blatantly. He's pathetic as a president, and I want to apologize for supporting this rotten man.

💡For those teaching stats, data literacy, methods--I just wrote a short Medium piece for @asjadnaqvi.bsky.social's amazing Stata Gallery. The piece covers how and why graphs with two y-axes can be so deceiving. Includes an applied example (with code). Hope it's useful! 😁 Link 👇

Here's a screenshot of the code. Admittedly it's a bit more complicated than the simple "xline()" approach, but until @statacorp.bsky.social adds something like a "top" option (like how "citop" exists for CIs in -coefplot-), it might (?) be the best way to do it...

Ever need to add reference lines in @statacorp.bsky.social graphs? If so, you've probably noticed that the lines go *behind* bars/bins. 🤷🏻‍♂️

Today I discovered a hack to fix this. Instead of using e.g., "xline()", use "scatteri". Ref lines will go *on top* of bars, not behind. Code in thread 👇
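Not the exact code from the thread, but a minimal sketch of the idea (dataset, variable, and cutoff are my own choices, using Stata's shipped auto data): draw the reference line as a *later* twoway layer via "scatteri" with "recast(line)". Later layers plot on top of earlier ones, so the line lands on top of the bars, unlike "xline()".

```stata
sysuse auto, clear

* The usual approach: xline() draws the line *behind* the bars
twoway (histogram mpg), xline(25)

* The hack: add the line as a second layer with scatteri,
* recast as a line so the two immediate points connect.
* Layers are drawn in order, so the line sits on top.
twoway (histogram mpg) ///
       (scatteri 0 25 0.12 25, recast(line) lcolor(red) lpattern(dash)), ///
       legend(off)
```

One caveat of this approach: the y-coordinates (here 0 and 0.12) have to be picked to span the plotted bar heights, since scatteri takes literal coordinates rather than "full height" like xline() does.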

Key point: when the true effect is positive, same-sign relations b/w Z & X, and Z & Y make effects too positive. Opposite-signed relations make effects less positive.

But when the effect is negative, this flips: same-sign→less negative, opposite-sign→too negative. 👍
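The sign logic in this thread is the standard omitted-variable-bias algebra. With the true model and a short regression of Y on X alone that omits Z:

```latex
% True model
Y = \beta_0 + \beta_1 X + \beta_2 Z + \varepsilon
% Short regression of Y on X alone:
\operatorname{plim} \hat{\beta}_1
  = \beta_1 + \underbrace{\beta_2\,\delta}_{\text{bias}},
\qquad
\delta = \frac{\operatorname{Cov}(X, Z)}{\operatorname{Var}(X)}
```

The bias term β₂δ is positive when Z relates to X and Y with the same sign and negative when the signs differ, regardless of the sign of β₁ itself. That is exactly the flip described above: a positive bias pushes a positive effect further from zero but pushes a negative effect toward zero.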

Just as in classic textbook examples of OVB, here exercise (X) is getting too much credit--credit that should (at least in part) be given to low caloric intake (Z).

An example: More exercise (X) should ⬇️ weight (Y).

Suppose caloric intake (Z) is negatively correlated with X, and positively correlated with Y, but we don't control for Z.

The estimated effect of X on Y will be *too negative*: high exercise people are ALSO low-calorie people (and vice versa).
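A quick simulation (my own sketch, not from the thread; all coefficients invented) makes the direction of the bias visible. With a true effect of -1 and the bias term β₂δ = (1)(-0.5) = -0.5, the short regression should land near -1.5:

```stata
clear
set seed 12345
set obs 100000

generate exercise = rnormal()
* Z: caloric intake, negatively correlated with exercise
generate calories = -0.5*exercise + rnormal()
* Y: true effect of exercise is -1; calories contribute +1
generate weight   = -1*exercise + 1*calories + rnormal()

* Omitting Z: slope on exercise comes out near -1.5 (too negative)
regress weight exercise

* Controlling for Z recovers the true effect near -1
regress weight exercise calories
```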

When the true effect of X on Y is 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞, and Z is positively related to both X and Y, omitting Z will make the slope *more positive* than it should be.

When Z has opposite relations with X and Y (➖X,➕Y), the bias makes the effect *more negative*. 😯

We all know about omitted variable bias: when X⬆️Y, Z is ➕ correlated w/ X & Y, & we omit Z, it biases the effect of X on Y upward (too positive).🥱

But what about when the true effect is 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞? Is OVB just the mirror opposite, biasing the effect to be too negative? No.👇

Looks amazing! Congrats, Nathan!!! 🥳

Hoping to coin a new term: parAInoia

Definition: The increasing feeling of dread (felt among teachers/professors while grading a paper) that a student's work was written, at least in part, by AI.

Example: "It took forever to get my grading done because I was feeling intense parAInoia."

Glad to hear it! I wasn’t sure how many Stata users knew about it. Once you start doing it, it’s a game-changer. 👍
Stata users: if you aren't already, definitely start using bookmarks to create headings and sub-headings in your .do files. Such a big help! **# in a .do file creates a heading; **## = subheading; **### = sub-subheading. Then double-click on them to jump right to that section 👍
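For anyone who hasn't seen the feature, a tiny .do-file sketch of what those bookmark comments look like in practice (the commands themselves are arbitrary filler):

```stata
**# Setup
sysuse auto, clear

**## Cleaning
drop if missing(rep78)

**# Analysis
regress price mpg weight

**## Robustness
regress price mpg weight foreign
```

In the Do-file Editor's navigation pane these appear as nested, clickable headings, so long scripts become much easier to move around in.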

Wow, super helpful--thx, David! I don't usually run models like this in my own work, so there's a lot more I need to learn (looks like this book will help!).

In that graph I posted, it's clear that residuals are positively correlated, which is why I thought to cluster. But this article says no 🤯

Reposted by Efrén O. Pérez

Always amazed how, each semester, teaching data analysis inevitably teaches me something new.

This week I learned that spatially correlated residuals in FE models (w/ data from Bailey's great book) probably don't require clustering SEs. Thx to @nickchk.com's AMAZING website--what a resource! 🙏

I think a lot about how quant & qual methods relate/differ.

At its best, qual research feels like shining a huge light inside the black box that quant folks often ignore. 👍

At its worst, qual research feels like a scatterplot for which the researcher gets to choose which dots to show. 😬
Join us tomorrow as we repeat ourselves.
Join us as we repeat ourselves tomorrow.
Tomorrow, we repeat ourselves. Join us!

Thanks so much! 😁

Not sure Daniel has anywhere near enough time (and not totally sure I have anywhere near enough experience doing meta science). That said, thinking about null results is kind of an obsession of mine, so I’ll keep it in mind for the future! 🙏

Really enjoyed this, as always!👏

FWIW, one argument I find persuasive is that, yes, high SES kids have advantages on the SAT that are unrelated to the underlying trait.

BUT—banning it would give more weight to things that high SES kids are *even more* advantaged on (rec letters, HS quality, etc) 🤔

Indeed! Would love to see a side-by-side comparison of (1) z/t-statistic distributions for hypothesized effects (we'd see the classic drop right around 1.96ish), vs. (2) z/t-statistic distributions for any placebo tests--just how different would these distributions look? 😬

The more I think about it, p-stacking might be esp common for robustness checks & placebo tests, wherein the goal is to show a non-significant effect.

For example, rule out an alternative explanation via showing it's non-significant, which serves as more evidence for one's significant effect. 🤔

😂 that one crossed my mind as well.

Ultimately I thought better to go with something that evokes “going up” (and maybe something less violent lol)

Really enjoyed this!

I esp loved the discussion of reverse p-hacking as a means of purposely generating null results. I could picture this happening more as null results become more acceptable--it'd be yet another way of creating a "clear story." Might I suggest calling it: "p-stacking"? 😉

I didn’t even notice—I just feel that last sentence deep in my soul every time I have to grade a new batch of assignments 😭

Amen to this, Andrew!