Leon Wendt
@l-p-wendt.bsky.social
Postdoc position @University of Kassel | interests in personality psychology | construct validation | measurement | statistical modeling
The full framework is in our preprint (doi.org/10.31234/osf...).

Thanks for reading, sharing, and commenting!
November 4, 2025 at 10:56 AM
We encourage researchers to apply our six criteria to ensure that construct validation efforts produce informative, high-quality evidence that can genuinely clarify which theoretical constructs are captured by a psychometric measure.
(c) Post hoc revisions to the nomological network for Lakatosian defense (i.e., allowing measure and construct to co-evolve through empirically driven revisions to the nomological network) render construct validation circular.
(b) Relying on generic applications of psychometric methods (i.e., ignoring the conceptual blueprint of the target construct and plausible non-target constructs) offers little opportunity to understand the unique nature of the measure.
(a) Treating theoretical constructs as conceptual placeholders (i.e., leaving them ill-defined and conceptually flexible) limits the ability to test meaningful predictions.
We critically examine three common and conventionally accepted validation practices, showing how they lead to validity-supporting findings that foster an illusion of “validated” measures, despite offering only weak evidence of construct validity.
6️⃣ Preregistration: Were the predictions and inferential criteria preregistered before data collection and analysis, or not?
5️⃣ High prior plausibility of the nomological network: Are predictions grounded in theoretical propositions that are highly plausible a priori, or are they largely speculative?
4️⃣ Consideration of plausible alternative interpretations: Are predictions designed to distinguish the target construct from plausible non-target constructs, or do predictions overlap?
3️⃣ Construct-specific predictions: Are predictions specifically tied to the target construct’s nomological network, or do they rely on generic psychometric methods and default benchmarks (e.g., empirically deriving the number of factors using EFA)?
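For concreteness, a common data-driven default for "empirically deriving the number of factors" is parallel analysis: retain factors whose eigenvalues exceed those expected from random data of the same shape. A minimal NumPy sketch of this generic procedure (an illustration only; the function name and details are not from the preprint):

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100, seed: int = 0) -> int:
    """Suggest a number of factors: count eigenvalues of the observed
    correlation matrix that exceed the mean eigenvalues obtained from
    random normal data of the same shape (Horn-style parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, sorted descending.
    obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Average eigenvalue spectrum of simulated uncorrelated data.
    rand_eigs = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand_eigs += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand_eigs /= n_sims
    return int(np.sum(obs_eigs > rand_eigs))
```

Whatever number this returns, it says nothing about whether the retained factors reflect the target construct rather than a plausible non-target construct, which is exactly the gap criterion 3 addresses.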
2️⃣ Precise predictions: Are predictions stated in precise terms, or are they left vague (e.g., only specifying the sign of a correlation)?
We outline six criteria for informative (high-quality) tests of construct validity.

1️⃣ Conceptual clarity: Are the target construct and plausible alternative interpretations conceptually clear, or are they little more than (descriptive) labels?
By rearranging the formula and considering the a priori plausibility of the propositions involved, we can determine under what circumstances the occurrence (O) or non-occurrence (¬O) of the predicted observation provides meaningful evidence for supporting (or challenging) the hypothesis (T).
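The rearrangement at issue is, in standard propositional logic, the contrapositive of Meehl's conditional (a sketch of the logic, not a quotation from the preprint):

```latex
% Meehl's conditional and its contrapositive (modus tollens):
% a failed prediction refutes the conjunction, not T alone.
\[
(T \land A \land C) \rightarrow O
\quad\Longleftrightarrow\quad
\lnot O \rightarrow (\lnot T \lor \lnot A \lor \lnot C)
\]
```

So ¬O challenges the hypothesis T itself only to the extent that the auxiliary assumptions A and test conditions C are independently highly plausible, and O supports T meaningfully only when the prediction would be unlikely to hold if T were false.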
Meehl’s formula states:

(T ∧ A ∧ C) → O

The predicted observation (O) follows logically (→) if the hypothesis (T) is true and (∧) the auxiliary assumptions (A) hold and (∧) the data realize the empirical conditions (C) under which the observation is predicted.
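As a minimal, self-contained illustration (not code from the preprint), the conditional can be checked by enumerating all truth assignments; exactly one pattern is ruled out, which is why a failed prediction indicts the conjunction rather than the hypothesis alone:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Enumerate all truth assignments for (T, A, C, O) and collect the
# assignments that violate Meehl's conditional (T and A and C) -> O.
violations = [
    (t, a, c, o)
    for t, a, c, o in product([True, False], repeat=4)
    if not implies(t and a and c, o)
]

# Only one assignment is ruled out: T, A, C all true with O absent.
# By modus tollens, observing not-O tells us the conjunction failed,
# but not WHICH of T, A, or C is to blame.
print(violations)  # [(True, True, True, False)]
```

Note also that O = True is compatible with T being false, which is why the mere occurrence of a predicted observation is weak evidence unless the prediction is risky.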
When we test construct validity, we test the hypothesis that measure t captures theoretical construct X.

Building on Meehl’s (1978) logic of hypothesis testing, we outline a framework that aims to explain why some validation approaches provide stronger evidence than others.