Eric Pedersen
@ericjpedersen.bsky.social

Associate prof of biology at Concordia University. Lost in the wilds between ecology, statistics, and dynamic systems. Always interested in chatting all things GAM- and nonlinear-system related

I wasn't going to repost this until he mentioned his melancholy
This is abhorrent and repugnant. It’s further evidence of the vile and pernicious nature of these awful platforms. It makes me furious, incendiary with rage. Causes me to fall into deep sadness, melancholy and depression.
For each additional moral–emotional word in a social media post, the number of shares increases 13%

Our new meta-analysis finds robust evidence of moral contagion (N=4,821,006)

The moral contagion effect is even stronger in larger, pre-registered studies (17%).
academic.oup.com/pnasnexus/ar...

In a similar low-fantasy procedural vein there's "The Witness for the Dead" and "The Grief of Stones" by Katherine Addison. Set in the same world as my favorite political thriller, "The Goblin Emperor"

Obsidian and Blood series by Aliette de Bodard: police procedural set in pre-contact Tenochtitlan where the investigator is the High Priest for the Dead, and has to deal with both murder and overly interested gods

The nth generation of species inventing paleontology would be increasingly confused by the horizons of acrylics in the rock record.

Then their chemists invented their own plastics and they suddenly got scared
📣Tomorrow our next series of online seminars restarts: Chris Klausmeier (MSU) will present:

⭐Microbial cross-feeding: coexistence and collapse, spatial patterns and population cycles⭐

Free and open to all:
Zoom link: iite.info/seminar/
Global Times: www.timeanddate.com/worldclock/f...

I hope it comes with a little toga you can pull over its head
And the Michaelis-Menten model is the same one we use in ecology to model type-II predator functional responses! Enzymes function a lot like predators, "feeding" on reactants
Maud Menten was born in Canada in 1879 and completed her UG and MD education at the University of Toronto. As a research assistant with Leonor Michaelis in 1912, they wrote the classic paper describing the “Michaelis-Menten” model of enzyme kinetics. #WomenInScience
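
A quick sketch of that Michaelis-Menten / type-II equivalence in R (parameter values here are made up for illustration): the enzyme rate Vmax*S/(Km + S) and the Holling type-II response a*N/(1 + a*h*N) are the same saturating curve, with Vmax = 1/h and Km = 1/(a*h).

# Hedged sketch: the same saturating curve written two ways
mm   <- function(S, Vmax, Km) Vmax * S / (Km + S)      # Michaelis-Menten enzyme rate
holl <- function(N, a, h)     a * N / (1 + a * h * N)  # Holling type-II functional response

a <- 0.8; h <- 2                   # illustrative attack rate and handling time
Vmax <- 1 / h; Km <- 1 / (a * h)   # mapping between the two parameterizations

x <- seq(0, 10, length.out = 100)
all.equal(mm(x, Vmax, Km), holl(x, a, h))  # TRUE: identical curves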

Similar energy here

That makes sense. Also, probably not worth the effort to improve estimates of "average belief" anyway: as you noted in the post, the estimand likely doesn't exist.

Before today I doubt that I had a number in my mind for "fraction of people who own guns", so I'd have to estimate it at survey time

I also wonder about incentives for YouGov here: I would guess it would look better if they reported median rather than mean responses, at least to correct for random answers, but "people think 10% of all people are trans" would likely get many fewer headlines

Thanks for writing this! Adding this to my intro stats reading list.

Is there any work looking at whether people are better at estimating proportions when asked about concrete frequencies (e.g. "how many Americans out of 100 own a car?")?

Seems related to this:
doi.org/10.1016/0010...
Against my better instincts, I have written some notes on how human probability judgements work and what you should expect from surveys that ask people to guess what proportion of the population is transgender. I hope never to speak of this matter again
Some notes on probability judgement – Notes from a data witch
For the love of fuck, literally nobody thinks that 20% of the population is transgender. Please stop sharing that ridiculous YouGov statistic
blog.djnavarro.net

I've played mycelia once and enjoyed it. It does feel like you're playing as a fungus in a forest community. I found its strategy a bit hard to get on first play, though, and it's definitely a "crunchy" game

Undergrove is fantastic: you play as Douglas Fir trees trading resources with fungi to grow seedlings. By the same designer as Wingspan, and both very fun to play and incredibly well-researched

I keep meaning to learn targets, but it's just complex enough that I end up doing things like the hash trick because I'm short on time for a given project

In one recent project I ended up with a function that checks the hash of the data, the simulation file, and the saved output, then only reran the costly sims if the calculated hashes didn't match the ones in the saved sim file.

Do not ask what lengths I'll go to to avoid relearning makefiles
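
For anyone curious, a minimal sketch of that kind of hash check in R, using base R's tools::md5sum(); the file names and the run_all_sims() helper are hypothetical stand-ins, not the actual project code:

# Hash the inputs and the simulation code; only rerun if anything changed
data_file <- "data/sim_inputs.rds"
sim_code  <- "R/run_sims.R"
out_file  <- "output/sim_results.rds"
hash_file <- "output/sim_hashes.rds"

current_hashes <- tools::md5sum(c(data_file, sim_code))

rerun <- !file.exists(out_file) || !file.exists(hash_file) ||
  !identical(unname(readRDS(hash_file)), unname(current_hashes))

if (rerun) {
  source(sim_code)                             # assumed to define run_all_sims()
  results <- run_all_sims(readRDS(data_file))  # the costly simulations
  saveRDS(results, out_file)
  saveRDS(current_hashes, hash_file)
} else {
  results <- readRDS(out_file)                 # hashes match: reuse saved output
}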

Of course, Icarus is the OG failson

(important disclaimer: I don't hold any special knowledge or expertise in Cree stories, that's just my impression from reading English language versions of some of them)

Although you could see Weesageechak as a war god who is also occasionally a bumbling idiot for comedic effect

A lot of trickster stories from many cultures have that kind of vibe for some of the stories, but the trickster is generally never depicted as a bumbling idiot across multiple stories

Inspired by @kjhealy.co's post on "Life at low Reynolds number" and a recent discussion with Jeremy Fox on the Dynamic Ecology blog
dynamicecology.wordpress.com/2025/08/29/t...
This Friday linkfest can swim at a low Reynolds number
This week: leaving evolutionary biology, economics vs. LLMs, current events vs. John Adams, llama vs. Napoleon Dynamite, and more.
dynamicecology.wordpress.com

What scientific paper do you find yourself re-reading because you love how it's written?

One of mine is Mallet 2012: "The struggle for existence: How the notion of carrying capacity, K, obscures the links between demography, Darwinian evolution, and speciation"

dash.harvard.edu/entities/pub...
The struggle for existence. How the notion of carrying capacity, K, obscures the links between demography, Darwinian evolution and speciation
Question: Population ecology and population genetics are treated separately in most textbooks. However, Darwin’s term the ‘struggle for existence’ included both natural selection and ecological compet...
dash.harvard.edu

From what I recall from the class I took on hazard models: the issue with just adding a time-varying hazard rate in a PH model is that you break the proportional hazards assumption; the Poisson trick sets things up by breaking the data into blocks within which the PH assumption can be valid

I see your point; however, I don't think there's a conceptual difference between the two cases: internally, a basis function that changes value over time just looks like a time-varying coefficient. I think the same approaches should work for both (breaking time into intervals between events)

I haven't tried it myself, but the mgcv help files do describe how to set up a time-varying covariate model in ?mgcv::cox.pht.

In short: you can set up a Poisson regression model via data augmentation. Alternatively, you can use `cox.ph` by setting up events as strata; not sure of the differences
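
For reference, a minimal sketch of the basic cox.ph fit in mgcv, on simulated toy data with made-up variable names; the time-varying-covariate version via the Poisson trick is the one worked through in ?mgcv::cox.pht and isn't shown here:

library(mgcv)

# Toy data: follow-up time 'time', event indicator 'event'
# (1 = event observed, 0 = censored), covariates x1 and x2
set.seed(1)
n   <- 200
dat <- data.frame(x1 = runif(n), x2 = rnorm(n))
dat$time  <- rexp(n, rate = exp(0.4 * dat$x2 + sin(2 * pi * dat$x1)))  # toy event times
dat$event <- rbinom(n, 1, 0.7)                                         # toy censoring indicator

# With family = cox.ph(), the censoring indicator is supplied via 'weights'
fit <- gam(time ~ s(x1) + x2,
           family  = cox.ph(),
           data    = dat,
           weights = event)
summary(fit)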

I am not very convinced by Dutch Book arguments in general, but I do think that they show that *if* the environment is approximately stationary with multiple opposed decision makers, then any learning rule that doesn't approximate Bayesian updating will be vulnerable to exploitation by other agents

There is a lot of work on misspecification, so we know that e.g. a posterior should minimize the KL divergence between the true model and the specified one, but I don't think I've seen much on the question of "how does a learner come up with a model to compare with data in the first place?"
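
To put that point in symbols (my gloss of the standard misspecification result, not anything from the thread): under the usual regularity conditions the posterior concentrates on the pseudo-true parameter

\theta^{\ast} \;=\; \arg\min_{\theta}\; \mathrm{KL}\!\left(p_{\text{true}} \,\|\, p_{\theta}\right),

i.e. the member of the specified family closest in KL divergence to the true data-generating process; it is silent on where that family came from in the first place.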

I feel like pretty much everyone working on decision theory seems to have tacitly agreed that "we don't really have any handle on the question of how you should learn the data generating process in the first place, so let's not dwell on it"