List of accepted submissions

Problem-Solving Behaviour in Reasoning Tests: Impact of Reasoning Ability, Item Difficulty, and Item Position

Performance on reasoning tests is a valid predictor of future success, yet what creates individual differences in problem-solving behaviour remains unclear. A more systematic investigation of the problem matrix, with fewer switches to and from the response alternatives, is linked to higher reasoning ability (e.g., Vigneau et al., 2006). Further evidence (Gonthier & Roulin, 2020) suggests that individuals switch more often as item difficulty increases. An analysis based on three eye-tracking metrics (von Gugelberg & Troche, 2025) indicated differing directions of effects and a possible impact of item position. The present study examines the effects of reasoning ability, item difficulty, and item position on three eye-tracking metrics commonly used to assess problem-solving behaviour in reasoning tests: toggle rate (TR), proportional time on the problem matrix (PM), and proportional time to first fixation on the response alternatives (PR). Complete data were obtained from 301 participants who solved two different reasoning tests. A 30-minute time limit was set for both tests; this is standard procedure for the Figural Matrices (Kyllonen et al., 2019), but not for Raven's Advanced Progressive Matrices (Raven et al., 1998). Correlations between ability scores and eye-tracking metrics in both tests were in line with previous studies (e.g., Rivollier et al., 2021). Mixed-effects models revealed varying patterns of results across the eye-tracking metrics and the two reasoning tests. It seems likely that the different eye-tracking metrics capture distinct aspects of problem-solving behaviour: while TR and PM summarize behaviour over a whole item, PR assesses only one specific behaviour (i.e., when, during the problem-solving process, the response alternatives are looked at for the first time). Interestingly, for all eye-tracking metrics and both reasoning tests, the effect of item position varied greatly across individuals when modelled as a random slope, whereas the effect of item difficulty did not. Different possible explanations for this phenomenon are discussed.
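
As a hedged illustration of the modelling approach mentioned above, the sketch below fits a linear mixed-effects model with a by-participant random slope for item position to one eye-tracking metric, using Python's statsmodels. All column names (toggle_rate, ability, difficulty, position, participant) and the data file are hypothetical stand-ins, not taken from the study.

```python
# Minimal sketch, not the authors' analysis: a mixed-effects model for one
# eye-tracking metric with a by-participant random slope for item position.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x item.
data = pd.read_csv("eyetracking_long.csv")

model = smf.mixedlm(
    "toggle_rate ~ ability + difficulty + position",  # fixed effects
    data,
    groups=data["participant"],
    re_formula="~position",  # random intercept + random slope for position
)
result = model.fit()
print(result.summary())  # inspect the variance of the position slope across people
```

A large estimated variance for the position slope, relative to a near-zero variance when difficulty is given the random slope instead, would correspond to the pattern the abstract reports.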

Construct Validity of the Indonesian WISC-V (WISC-V-ID): Evidence from the 10 Primary Subtests and Ancillary Index Analyses

The Wechsler Intelligence Scale for Children–Fifth Edition (WISC-V) has been adapted for the Indonesian population as the WISC-V-ID. The original test comprises 16 subtests, 10 of which are used to estimate five primary indices: Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS), as well as three ancillary indices: the General Ability Index (GAI), the Cognitive Proficiency Index (CPI), and the Non-Verbal Index (NVI). This study examined the construct validity of both the primary and ancillary indices of the WISC-V-ID using a series of confirmatory factor analyses. Participants were 1,508 children aged 6–16 years drawn from the Indonesian normative sample. The results indicated that both the hypothesized four- and five-factor higher-order models demonstrated excellent global fit (CFI = .976–.998; TLI = .965–.996; RMSEA = .016–.047; SRMR = .011–.025), although the correlated five-factor model provided the best representation of the data (χ² = 35.228, df = 25, p = .084; CFI = .998; TLI = .996; RMSEA = .016; SRMR = .011). For the ancillary indices, a second-order oblique model best described the structure of the GAI and CPI, which were strongly correlated. The NVI was optimally modeled by a higher-order configuration rather than a first-order model. These findings support the structural validity of the primary indices of the WISC-V-ID and provide additional evidence regarding the factorial structure of the ancillary indices, which have received limited empirical attention in the international literature.
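
For readers unfamiliar with the model comparison described above, here is a minimal sketch of how a correlated five-factor CFA of the ten primary subtests might be specified with the Python SEM package semopy. The subtest column names and data file are hypothetical placeholders; the published analysis may well have used different software, estimators, and scaling choices.

```python
# Hedged sketch of a correlated five-factor CFA (pip install semopy).
# Column names are illustrative placeholders, not the WISC-V-ID dataset.
import pandas as pd
import semopy

model_desc = """
VC =~ Similarities + Vocabulary
VS =~ BlockDesign + VisualPuzzles
FR =~ MatrixReasoning + FigureWeights
WM =~ DigitSpan + PictureSpan
PS =~ Coding + SymbolSearch
"""

data = pd.read_csv("wiscv_id_subtests.csv")  # hypothetical subtest scores
model = semopy.Model(model_desc)             # latent factors covary by default
model.fit(data)

# Global fit indices (chi-square, CFI, TLI, RMSEA, ...) for comparison
# against higher-order alternatives, as in the abstract.
print(semopy.calc_stats(model).T)
```

A competing higher-order model would add a general factor above the five latents (e.g., g =~ VC + VS + FR + WM + PS) and compare fit indices across specifications.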

Context-Aware Adaptation as Intelligence: A Software-Inspired Perspective on Cognitive Offloading

Conventional intelligence models typically assess performance based on isolated cognitive metrics such as memory, reasoning, or processing speed, often ignoring how individuals engineer their own strategies to function effectively in complex environments. In contrast, adaptive software systems are evaluated based on their ability to externalize state, restructure tasks, and respond dynamically to context.

This paper proposes that such abilities in humans, manifested through self-structured cognitive systems such as color-coded organization, spatial grouping, and symbolic labeling, reflect a deliberate engineering reflex rather than compensatory behavior. Individuals who may appear to underperform by conventional measures often exhibit high contextual intelligence, redesigning their environment and workflows to suit their cognitive style.

This framework also parallels the development of artificial intelligence systems, in which increasing maturity is marked not only by computational power but also by enhanced contextual awareness and adaptive behavior. As in humans, the intelligence of a system may be better reflected in its capacity to restructure, externalize, and align with its environment than in isolated processing capabilities.

Drawing on principles from adaptive system design, we argue that intelligence should also encompass the ability to restructure one’s context, just as intelligent systems reconfigure themselves to optimize performance. This reframing challenges traditional assessment models and provides a foundation for rethinking learning methods, intelligent interfaces, and adaptive systems that align with how people naturally optimize cognition.
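
As a loose, hypothetical illustration of the software analogy (not code from the paper), the toy class below externalizes its state to a file and switches strategy based on sensed context, mirroring the offloading and restructuring behaviours described above.

```python
# Illustrative sketch only: a toy "adaptive system" that offloads state and
# reconfigures its strategy from context. All names are hypothetical.
import json
from pathlib import Path

class AdaptiveAgent:
    def __init__(self, state_file: Path):
        self.state_file = state_file  # externalized state (offloaded memory)
        self.state = self._load()

    def _load(self) -> dict:
        if self.state_file.exists():
            return json.loads(self.state_file.read_text())
        return {"strategy": "default", "completed": []}

    def sense_context(self, context: dict) -> None:
        # Restructure behaviour when the environment changes, rather than
        # relying on fixed internal capacity.
        if context.get("load") == "high":
            self.state["strategy"] = "chunk_and_label"  # e.g. colour-coding
        else:
            self.state["strategy"] = "default"

    def act(self, task: str) -> str:
        self.state["completed"].append(task)
        # Offload to the environment instead of memorizing.
        self.state_file.write_text(json.dumps(self.state))
        return f"{task} handled via {self.state['strategy']}"

agent = AdaptiveAgent(Path("agent_state.json"))
agent.sense_context({"load": "high"})
print(agent.act("organize inbox"))
```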

Prediction and Validity: Same Sides of Different Coins

Progress in conceptualising and measuring interindividual differences in intraindividual variability (e.g., cognitive flexibility) is stifled by the uncritical assumption that the construct validity and predictive utility of measurement outcomes are identical. Construct validity is often claimed to be established via success in the statistical prediction of a criterion. However, a measurement outcome can be predictively useful without being indicative of the construct validity of the instrument used. By contrast, a measurement outcome may be indicative of construct validity yet still lack predictive utility. Predictive utility and construct validity overlap only if the criterion has 'real-life' relevance and represents a dimensionally clearly defined construct. This, however, is rarely the case. The real-life behaviours that we are interested in understanding (e.g., job performance) are usually multi-faceted, multi-componential, and multi-dimensional. Therefore, rather than using constructs such as abilities or personality traits to conceptualise these criteria, the concept of competency is used instead. The constituents of competency vary across individuals at any given time and situation. Success in predicting competency therefore cannot be interpreted as an indication of the construct validity of the instrument used. Conversely, predictions of construct-specific criteria (e.g., performance on an intelligence test) can serve as an indication of the construct validity of the instrument used. However, such perspectives run the risk of circularity (e.g., Test A correlates with Test B; therefore, they are assumed to measure the same construct or something similar). Circumventing circularity requires a theory-based explication of the construct sensitivity of the criterion itself, which cannot be delegated to a correlation coefficient. What might be called 'validation by association' is a rather weak foundation for claims of construct validity. To support our plea to refrain from conceptual hubris, we briefly discuss and contrast implications for common practices in applied educational and psychological research.
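
The core distinction above, predictive utility without construct validity, can be made concrete with a small hypothetical simulation (not from the abstract): a test score and a criterion are both driven by a shared nuisance factor, so the score predicts the criterion while correlating near zero with the construct it was intended to measure.

```python
# Toy simulation: predictive utility without construct validity.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

nuisance = rng.normal(size=n)    # e.g. test-taking speed
construct = rng.normal(size=n)   # the construct the test *intends* to measure

# The test score taps the nuisance factor, not the construct; so does the criterion.
test_score = 0.8 * nuisance + rng.normal(scale=0.6, size=n)
criterion = 0.8 * nuisance + rng.normal(scale=0.6, size=n)

print(np.corrcoef(test_score, criterion)[0, 1])  # substantial "predictive" correlation (~.64)
print(np.corrcoef(test_score, construct)[0, 1])  # ~0: no construct validity
```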

Creative Scientific Intelligence: A Cognitive Model of Human Epistemic and Structural Creativity

Human intelligence is distinguished not merely by the capacity to solve problems, but by the ability to generate new explanatory structures — to create, test, and revise ideas about the world. This paper introduces Creative Scientific Intelligence (CSI) as a theoretical and computational model of this uniquely human capability. Building on insights from cognitive psychology, epistemology, and artificial intelligence, CSI formalizes creativity as a recursive process of epistemic calibration: agents generate hypotheses, perform interventions, and revise internal models to resolve tension between expectation and observation. Structurally, CSI integrates three mechanisms central to human creative reasoning — recursive abstraction, analogical transfer, and epistemic tension regulation — implemented through multi-timescale calibration loops that parallel human learning, development, and insight formation.
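
One way to picture the calibration loop described above is a toy hypothesize-intervene-revise cycle. The sketch below is a hypothetical stand-in, not the authors' implementation: the "hidden law" is a simple linear function and the surprise signal is absolute prediction error.

```python
# Toy sketch of an epistemic calibration loop in the spirit of the abstract.
import random

def environment(x: float) -> float:
    return 2.0 * x + 1.0  # hidden causal law the agent must discover

def surprise(pred: float, obs: float) -> float:
    return abs(pred - obs)  # tension between expectation and observation

# A hypothesis is a (slope, intercept) guess about the hidden law.
hypothesis = (random.uniform(-5, 5), random.uniform(-5, 5))

for step in range(2000):
    x = random.uniform(-1, 1)                 # self-directed intervention
    pred = hypothesis[0] * x + hypothesis[1]  # expectation
    obs = environment(x)                      # observation
    if surprise(pred, obs) > 0.01:
        # Revise: propose a local perturbation, keep it if it reduces tension.
        candidate = (hypothesis[0] + random.gauss(0, 0.3),
                     hypothesis[1] + random.gauss(0, 0.3))
        if surprise(candidate[0] * x + candidate[1], obs) < surprise(pred, obs):
            hypothesis = candidate

print(f"recovered law: y = {hypothesis[0]:.2f}x + {hypothesis[1]:.2f}")
```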

Empirical demonstrations in symbolic physics and ecosystem-simulation environments show how a CSI agent can autonomously discover hidden causal laws and generate novel structural models through self-directed exploration. These results suggest that scientific creativity and cognitive intelligence share a common recursive architecture grounded in explanation-driven learning. By modeling intelligence as the dynamic capacity to construct, test, and refine internal theories, CSI provides a formal account of how understanding — and not just performance — emerges from curiosity, surprise, and self-correction.

The framework offers a unified view of creative and scientific cognition, with implications for measuring and enhancing human intelligence across educational, artistic, and scientific domains.

The ATHENA Competency Framework: An Evaluation of Its Validity According to Instructional Designers and Human Resource Development Professionals

Introduction
The ATHENA competency framework offers a multidimensional and agentic model of human performance, integrating cognitive, conative, emotional, knowledge-based, and sensorimotor resources into 60 fine-grained facets. While the framework is conceptually grounded in contemporary psychological theory, its applied usability depends on whether professionals who rely on competency frameworks can clearly understand and differentiate its components. This study provides the first empirical assessment of the semantic clarity and perceived dimensional coherence of ATHENA’s 60 facets among instructional designers and human resource development (HRD) professionals.

Methods
Seventy-five practitioners (46 instructional designers; 29 HRD professionals) evaluated the clarity, appropriateness, and expected meaning of each facet using standardized definitions. Participants also assigned each facet to one of ATHENA’s five theoretical dimensions. Ratings were collected via an online questionnaire and analyzed descriptively.

Results
Overall, practitioners judged the facet definitions as clear and meaningful, with high average ratings across the 60 facets and strong alignment with participants’ expectations. Only two facets—heuristics and functional synesthesia—showed consistently lower alignment with preconceived interpretations. Dimensional assignment matched the theoretical structure for most facets; however, 14 facets (23%) were systematically reassigned by participants, often shifted toward the cognitive domain. This suggests that while definitions were understood, the psychological rationale for some facets’ dimensional placement was not intuitively perceived.
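
A hypothetical sketch of the kind of descriptive assignment analysis reported above: for each facet, take the dimension most practitioners chose and flag disagreements with the theoretical mapping. The file and column names are illustrative only, not the study's materials.

```python
# Hedged sketch: modal dimension assignment per facet vs. theoretical mapping.
import pandas as pd

ratings = pd.read_csv("athena_ratings.csv")  # one row per participant x facet
theory = pd.read_csv("athena_theory.csv")    # facet -> theoretical_dimension

# Modal (most frequently chosen) dimension per facet across participants.
modal = (ratings.groupby("facet")["assigned_dimension"]
                .agg(lambda s: s.mode().iloc[0])
                .rename("modal_dimension")
                .reset_index())

merged = theory.merge(modal, on="facet")
merged["reassigned"] = merged["modal_dimension"] != merged["theoretical_dimension"]
print(merged["reassigned"].sum(), "of", len(merged), "facets reassigned by practitioners")
```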

Conclusions
The results support the semantic robustness of ATHENA’s facets and their overall suitability for instructional and HRD applications. The misalignment of several facets’ dimensional placement highlights the need for refined labels, clearer theoretical justification, or revised explanatory materials. These findings represent a critical validation step for ATHENA before larger-scale psychometric work and provide guidance for strengthening its integration within hybrid intelligence talent development systems.
