It depends on who you ask. Some researchers would say yes. They argue that correlations between ratings from multiple raters provide the “correct” estimates of the reliability of job performance ratings (Schmidt & Hunter, 1996; Viswesvaran et al., 1996). However, Murphy and De Shon (2000) provide an interesting insight into why this may not […]
arnold on Oct 09, 2008
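To make concrete what “correlations between ratings from multiple raters” refers to, here is a minimal sketch (not from the post): two raters score the same employees, and the Pearson correlation between the two sets of ratings is the inter-rater estimate the debate is about. The ratings and variable names below are hypothetical.

    import numpy as np

    # Hypothetical performance ratings of the same eight employees by two raters
    rater_a = np.array([3, 4, 2, 5, 4, 3, 2, 5])
    rater_b = np.array([4, 4, 2, 5, 3, 3, 1, 4])

    # The inter-rater correlation that is treated as the reliability estimate
    r = np.corrcoef(rater_a, rater_b)[0, 1]
    print(f"inter-rater correlation: {r:.2f}")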
Let’s start with some definitions: Cronbach’s alpha is mathematically equivalent to the average of all possible split-half estimates, although that’s not how we compute it (socialresearchmethods.net). Cronbach’s alpha will generally increase when the correlations between the items increase. For this reason, the coefficient is also called the internal consistency or the internal consistency reliability of […]
arnold on Oct 05, 2008
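For readers who want to see the computation behind that definition, here is a minimal sketch of Cronbach’s alpha using the usual variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The function name, item-score matrix, and data are hypothetical.

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of scores
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: 5 respondents, 4 items on a 1-5 scale
    scores = [[3, 4, 3, 4],
              [2, 2, 3, 2],
              [5, 4, 5, 5],
              [1, 2, 1, 2],
              [4, 4, 4, 3]]
    print(round(cronbach_alpha(scores), 3))

Because the items in this toy matrix are highly intercorrelated, the resulting alpha is high, which is the “alpha increases as item intercorrelations increase” point from the post.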
Explain reliability in terms of classical test theory: Nunnally (1967) defined reliability as “the extent to which [measurements] are repeatable and that any random influence which tends to make measurements different from occasion to occasion is a source of measurement error” (p. 206). There are many factors that can prevent measurements from being repeated perfectly. Crocker […]
arnold on Sep 28, 2008
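A small simulation (a sketch under assumed normal distributions, not anything from the post) can make the classical test theory idea concrete: each observed score is a true score plus random error, X = T + E, and reliability is the share of observed-score variance that comes from the true score. The same random-error idea is what makes scores differ from occasion to occasion.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    true_score = rng.normal(50, 10, n)   # T: each person's stable true score
    err1 = rng.normal(0, 5, n)           # random influences, occasion 1
    err2 = rng.normal(0, 5, n)           # random influences, occasion 2
    x1, x2 = true_score + err1, true_score + err2   # observed scores X = T + E

    # Reliability as true-score variance over observed-score variance,
    # and as the correlation between the two occasions; both come out near 0.8.
    print(true_score.var() / x1.var())
    print(np.corrcoef(x1, x2)[0, 1])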
Reliability is the desired consistency (or reproducibility) of test scores. In practical terms, reliability is the degree to which individuals’ deviation scores, or z-scores, remain relatively consistent over repeated administrations of the same test or alternate test forms (Crocker and Algina, 1986, p. 105). Test developers must demonstrate that the scores obtained are reliable; otherwise […]
arnold on Sep 17, 2008
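As a minimal sketch of that z-score framing (hypothetical scores, not from the post): assume the same five examinees take two forms of a test. If each person keeps roughly the same standardized position on both forms, the scores are reliable, and the correlation between the z-scores is the usual reliability coefficient.

    import numpy as np

    form_a = np.array([72.0, 85.0, 60.0, 90.0, 78.0])
    form_b = np.array([70.0, 88.0, 63.0, 86.0, 80.0])

    def z(scores):
        # deviation scores divided by the standard deviation
        return (scores - scores.mean()) / scores.std(ddof=1)

    # Each examinee's z-score is nearly the same on both forms, and the
    # correlation of the z-scores equals the Pearson r of the raw scores.
    print(np.round(z(form_a), 2))
    print(np.round(z(form_b), 2))
    print(round(np.corrcoef(z(form_a), z(form_b))[0, 1], 2))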