Reliability in Psychological Testing
10 flashcards
Test-Retest Reliability
Refers to the consistency of test scores over time. It's assessed by administering the same test on two different occasions and correlating the two sets of scores.
Inter-Rater Reliability
Assesses the extent to which different raters/observers give consistent estimates of the same phenomenon. It's usually assessed by computing the level of agreement between the raters.
Parallel-Forms Reliability
Refers to the consistency of the results of two tests constructed in the same way from the same content domain. It is assessed by comparing the two different forms of the test.
Internal Consistency Reliability
Relates to the consistency of results across items within a test. It is usually assessed with Cronbach's alpha or split-half reliability.
Split-Half Reliability
A measure of internal consistency where the test is divided into two equal halves and the scores on both halves are correlated.
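The usual recipe is an odd-even split followed by the Spearman-Brown correction, which steps the half-test correlation up to an estimate for the full-length test. A minimal sketch with hypothetical data (the split rule and numbers are illustrative):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(scores):
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd = [sum(row[0::2]) for row in scores]    # items 1, 3, ...
    even = [sum(row[1::2]) for row in scores]   # items 2, 4, ...
    r = pearson(odd, even)
    return 2 * r / (1 + r)                      # step up to full-test length

# Hypothetical matrix: 5 examinees (rows) x 4 items (columns).
scores = [
    [1, 2, 1, 2],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
]
print(round(split_half(scores), 2))
```

The correction matters because each half is only half as long as the real test, and shorter tests are less reliable.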
Intraclass Correlation
A coefficient that assesses the consistency of quantitative measurements of the same target. It is a more general index than the others: depending on how it is computed, it can serve as an inter-rater or a test-retest reliability estimate.
Kuder-Richardson Formulas
The Kuder-Richardson formulas, notably KR20 and KR21, are measures of internal consistency reliability for tests with binary (correct/incorrect) items.
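KR-20 follows the same logic as alpha but uses item pass rates: for each binary item, p is the proportion answering correctly and pq its variance. A minimal sketch with hypothetical 0/1 responses:

```python
def kr20(responses):
    """KR-20 for a matrix of binary (0/1) responses, one row per examinee."""
    n = len(responses)
    k = len(responses[0])
    pq = 0.0
    for item in zip(*responses):        # iterate over item columns
        p = sum(item) / n               # proportion answering correctly
        pq += p * (1 - p)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    total_var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / total_var)

# Hypothetical matrix: 4 examinees (rows) x 3 dichotomous items (columns).
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
print(kr20(responses))  # prints 0.75
```

KR-21 is a simplification that assumes all items are equally difficult, so it needs only the mean and variance of the total scores.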
Coefficient Alpha
Also known as Cronbach's alpha, it assesses the internal consistency of a test, indicating how closely related a set of items are as a group.
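The formula behind the card: alpha compares the sum of the individual item variances with the variance of the total scores; when items covary strongly, the total variance dominates and alpha approaches 1. A minimal sketch with hypothetical Likert-type data (the matrix is illustrative):

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # one column of scores per item
    item_vars = sum(statistics.pvariance(item) for item in items)
    total_var = statistics.pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical matrix: 4 respondents (rows) x 3 items (columns).
scores = [
    [2, 3, 3],
    [4, 4, 5],
    [3, 3, 4],
    [5, 5, 5],
]
print(round(cronbach_alpha(scores), 3))
```

Respondents who score high on one item tend to score high on the others here, so alpha comes out close to 1.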
Absolute Stability
Refers to the stability of scores across time regardless of the specific content or area being measured. It is typically assessed with a test-retest design.
Equivalence Reliability
Assesses the extent to which different forms of the same test are consistent, measured by correlating performance on the different forms.