Prediction and Validity: Same sides of different coins

Published: 20 March 2026 by MDPI
in The 1st International Online Conference on Human Intelligence, session Theoretical Contributions and Measurement of Intelligence

Abstract: Progress in conceptualising and measuring interindividual differences in intraindividual variability (e.g., cognitive flexibility) is stifled by the uncritical assumption that the construct validity and the predictive utility of measurement outcomes are identical. Construct validity is often claimed to be established via success in the statistical prediction of a criterion. However, a measurement outcome can be predictively useful without being indicative of the construct validity of the instrument used. By contrast, a measurement outcome may be indicative of construct validity yet still lack predictive utility. Predictive utility and construct validity overlap only if the criterion has ‘real-life’ relevance and represents a dimensionally clearly defined construct. This, however, is rarely the case. The real-life behaviours that we are interested in understanding (e.g., job performance) are usually multi-faceted, multi-componential, and multi-dimensional. Therefore, rather than using constructs such as abilities or personality traits to conceptualise these criteria, the concept of competency is used instead. The constituents of competency vary across individuals at any given time and situation. Success in predicting competency can therefore not be interpreted as an indication of the construct validity of the instrument used. Conversely, predictions of construct-specific criteria (e.g., performance on an intelligence test) can serve as an indication of the construct validity of the instrument used. However, such perspectives run the risk of circularity (e.g., Test A correlates with Test B; therefore, the two tests are assumed to measure the same construct or something similar). Circumventing circularity requires a theory-based explication of the construct sensitivity of the criterion itself, which cannot be delegated to a correlation coefficient. Such attempts at ‘validation by association’ provide a rather weak foundation for claims of construct validity.

To support our plea to refrain from conceptual hubris, we briefly discuss and contrast implications for common practices in applied educational and psychological research.

Keywords: construct validity; predictive utility; flexibility
