
Personality measurement of students using Item Response Theory models: stability of responses from Nigerian institutions


Olawale Ayoola Ogunsanmi
Temitope Babatimehin
Yejide Adepeju Ibikunle

Abstract

Item Response Theory (IRT) is utilised to detect bias in assessment tools and to address issues such as faked or manipulated responses, enhancing the reliability and stability of conclusions in personality assessment. This article examines the item parameter estimates of a scale and the effectiveness of the one-, two-, and three-parameter logistic models in analysing response stability in personality measurement across repeated administrations. Three hundred undergraduate students from three tertiary institutions in Nigeria were sampled using a multi-stage sampling procedure. Data were collected using an adapted version of the Big Five Inventory (BFI) with a reliability coefficient of 0.85. The results showed that the item parameter estimates (mean thresholds) were within the recommended benchmarks. A comparison of the three IRT models based on the log-likelihood (lnL), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC) values revealed that the two-parameter logistic model best fitted the personality data from the repeated administrations. It is recommended that, rather than relying solely on a statistical decision-making process, IRT model fit and model comparison be applied to gain insight into the functioning of items and tests.
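For readers less familiar with the comparison summarised above, the minimal Python sketch below illustrates the 1PL, 2PL, and 3PL item response functions and how AIC and BIC are computed from each model's log-likelihood. The log-likelihood values, the 44-item scale length, and the parameter counts are hypothetical illustrations, not results from the study.

```python
import numpy as np

def irf_3pl(theta, a, b, c):
    """3PL item response function:
    P(X = 1 | theta) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a = discrimination, b = difficulty/threshold, c = pseudo-guessing."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def irf_2pl(theta, a, b):
    # The 2PL fixes the lower asymptote at c = 0 (no guessing parameter).
    return irf_3pl(theta, a, b, c=0.0)

def irf_1pl(theta, b):
    # The 1PL (Rasch-type) additionally fixes a common discrimination a = 1.
    return irf_3pl(theta, a=1.0, b=b, c=0.0)

def aic(log_lik, k):
    # Akaike Information Criterion: penalises the number of free parameters k.
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian Information Criterion: the penalty also grows with sample size n.
    return k * np.log(n) - 2 * log_lik

# Hypothetical maximised log-likelihoods (lnL) for three fitted models;
# k counts free item parameters for an assumed 44-item scale, n = 300 respondents.
n_items, n = 44, 300
models = {
    "1PL": {"lnL": -7850.0, "k": n_items},      # one threshold per item
    "2PL": {"lnL": -7620.0, "k": 2 * n_items},  # threshold + discrimination
    "3PL": {"lnL": -7605.0, "k": 3 * n_items},  # + pseudo-guessing parameter
}

for name, m in models.items():
    print(f"{name}: AIC = {aic(m['lnL'], m['k']):.1f}, "
          f"BIC = {bic(m['lnL'], m['k'], n):.1f}")
# The preferred model is the one with the lowest AIC/BIC; with these
# illustrative numbers the 2PL wins on both criteria, mirroring the
# pattern reported in the abstract.
```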

