Filip Moons
filipmoons.bsky.social
Assistant Professor in Mathematics Education @ Utrecht University
Reposted by Filip Moons
The larger a party, the more likely it is to receive a residual seat. How does that work? I asked Filip Moons of @utrechtuniversity.bsky.social.
Free to read without a paywall until 4 November! @nrc.nl @nrcwetenschap.bsky.social
www.nrc.nl/nieuws/2025/...
The mathematics behind the residual seat: why big gets even bigger
Mathematics: The larger a party, the more likely it is to receive a residual seat. Large parties can even get more than one.
www.nrc.nl
October 29, 2025 at 8:32 AM
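The effect described in the article can be illustrated with a largest-averages allocation in the style of D'Hondt, which the Dutch system uses to distribute residual seats. A minimal sketch with hypothetical vote counts (not real election data):

```python
def dhondt(votes, seats):
    """Allocate seats by the largest-averages (D'Hondt) method:
    each round, the seat goes to the party with the highest
    quotient votes / (seats already won + 1)."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical vote counts for four parties competing for 10 seats
votes = {"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}
print(dhondt(votes, 10))  # → {'A': 4, 'B': 4, 'C': 2, 'D': 0}
```

Note how the smallest party D, with a "fair share" of about 0.71 seats, ends up with none, while the larger parties absorb the remainder: exactly the "big gets even bigger" effect the article describes.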
Reposted by Filip Moons
"Everything you could wish for in a seat-allocation system can never all be fulfilled at once. A slight preference for winners is then an honourable compromise."
@filipmoons.bsky.social

@alexvdbrandhof.bsky.social @nrcwetenschap.bsky.social
www.nrc.nl/nieuws/2025/...
October 29, 2025 at 6:00 AM
Reposted by Filip Moons
The European Mathematical Society is thrilled to announce its Lecture Series on Mathematics Education, uniting leading experts to explore critical issues and innovative approaches in mathematics teaching, curriculum design, and policy making. 1/3
October 15, 2025 at 1:37 PM
📘 New: Van stem tot zetel – the mathematics behind the Dutch elections

How are votes converted into seats? Why so many parties? And did you know that the Dutch system is actually Belgian?

More info & ordering: epsilon-uitgaven.nl/zebra-reeks/...
September 30, 2025 at 5:19 PM
Reposted by Filip Moons
Cohen’s & Fleiss’ kappa assume each subject has one category. Filip Moons (@filipmoons.bsky.social) & Ellen Vandervieren introduce a generalized Fleiss’ kappa that handles multiple categories per subject, supports hierarchies & weights, and works with missing data.
Resources for Research
This section provides brief summaries of selected resources for research that have been published in journals of the Psychonomic Society, typically Behavior Research Methods. These resources consis…
buff.ly
September 26, 2025 at 8:01 PM
🚀 Cohen’s & Fleiss’ kappa assume one category per subject. But patients can have multiple diagnoses, texts can get multiple codes.
Our new statistic:
✅ lets raters assign several categories per subject
↔️ equals Fleiss’ kappa when only 1 category is used
link.springer.com/article/10.3...
Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss’ kappa - Behavior Research Methods
Cohen’s and Fleiss’ kappa are well-known measures of inter-rater agreement, but they restrict each rater to selecting only one category per subject. This limitation is consequential in contexts where ...
link.springer.com
September 21, 2025 at 7:35 PM
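Since the new statistic equals Fleiss' kappa when only one category is used, that single-category special case is a useful reference point. A minimal sketch of classic Fleiss' kappa (the standard 1971 formula, not the paper's generalization):

```python
def fleiss_kappa(ratings):
    """Classic Fleiss' kappa. ratings[i][j] = number of raters who
    assigned subject i to category j; every rater picks exactly one
    category, so each row sums to the same number of raters n."""
    N = len(ratings)          # number of subjects
    n = sum(ratings[0])       # raters per subject
    k = len(ratings[0])       # number of categories
    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Observed pairwise agreement per subject, then averaged
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    # Chance agreement
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Toy data: 2 subjects, 3 raters, 2 categories
print(round(fleiss_kappa([[3, 0], [1, 2]]), 3))  # → 0.25
```

The generalized statistic from the paper additionally handles multiple categories per rater, weights, hierarchies, and missing data; see the linked article for those formulas.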
Reposted by Filip Moons
Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss’ kappa BehResM
Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss’ kappa
Cohen’s and Fleiss’ kappa are well-known measures of inter-rater agreement, but they restrict each rater to selecting only one category per subject. This limitation is consequential in contexts where subjects may belong to multiple categories, such as psychiatric diagnoses involving multiple disorders or classifying interview snippets into multiple codes of a codebook. We propose a generalized version of Fleiss’ kappa, which accommodates multiple raters assigning subjects to one or more nominal categories. Our proposed κ statistic can incorporate category weights based on their importance and account for hierarchical category structures, such as primary disorders with sub-disorders. The new κ statistic can also manage missing data and variations in the number of raters per subject or category. We review existing methods that allow for multiple category assignments and detail the derivation of our measure, proving its equivalence to Fleiss’ kappa when raters select a single category per subject. The paper discusses the assumptions, premises, and potential paradoxes of the new measure, as well as the range of possible values and guidelines for interpretation. The measure was developed to investigate the reliability of a new mathematics assessment method, of which an example is elaborated. The paper concludes with a worked-out example of psychiatrists diagnosing patients with multiple disorders. All calculations are provided as R script and an Excel sheet to facilitate access to the new κ statistic.
dlvr.it
September 16, 2025 at 2:46 AM