Understanding and Testing Reliability

The Relationship of Reliability and Validity

Test reliability is requisite to test validity: a test cannot validly measure what it claims to measure unless it first measures consistently, although a test can be reliable without being valid.

Resources for Understanding and Testing Reliability

The 2000 and 2008 studies present evidence that Ohio's mandated accountability tests are not valid: the conclusions and decisions made on the basis of Ohio Proficiency Test (OPT) performance are not grounded in what the test claims to be measuring.

Test manuals and independent reviews of tests provide information on test reliability.

Difference Between Reliability and Validity

Using the split-half method, the same participants can be tested in a single sitting, without having to wait for them to ‘forget’ the questions between two administrations; it is therefore a quick and easy way to establish reliability.
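As a minimal sketch, assuming item-level scores are available in a respondents-by-items array (the data below are simulated with NumPy purely for illustration), the two half-test totals can be correlated and the Spearman-Brown formula applied to estimate full-test reliability:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: 30 participants, 20 items driven by one latent ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))
scores = ability + rng.normal(scale=0.7, size=(30, 20))

# Split the items into odd- and even-numbered halves and total each half.
half_a = scores[:, 0::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlate the two half-test totals.
r_half, _ = pearsonr(half_a, half_b)

# The Spearman-Brown formula estimates reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown estimate = {r_full:.2f}")
```

An odd/even item split is often preferred to a first-half/second-half split so that fatigue or practice effects do not accumulate in one half.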

If the results from the old and new tests correlate significantly, this provides evidence that the new test is valid (specifically, evidence of concurrent validity).

If, for example, the kind of problem-solving ability required for the two positions is different, or the reading level of the test is not suitable for clerical applicants, the test results may be valid for managers, but not for clerical employees.

Test developers have the responsibility of describing the reference groups used to develop the test.

Validity refers to what the test measures and how well the test measures that characteristic.


Reliability and Validity

Developing a valid and reliable instrument usually requires multiple iterations of piloting and testing, which can be resource intensive. Therefore, when available, I suggest using already established valid and reliable instruments, such as those published in peer-reviewed journal articles. However, even when using these instruments, you should re-check validity and reliability using the methods of your study and your own participants’ data before running additional statistical analyses. This process will confirm that the instrument performs as intended in your study and with the population you are studying, even when they appear identical to the purpose and population for which the instrument was initially developed. Below are a few additional useful readings to further inform your understanding of validity and reliability.

The Validity & Reliability of Employment Testing | Bizfluent

Reliability refers to the degree to which an instrument yields consistent results. Common measures of reliability include internal consistency, test-retest, and inter-rater reliabilities.
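For example, internal consistency is commonly summarized with Cronbach's alpha. The sketch below is a minimal illustration, assuming a respondents-by-items score matrix; the data are simulated with NumPy purely for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 30 respondents, 10 items reflecting one latent trait.
rng = np.random.default_rng(1)
trait = rng.normal(size=(30, 1))
items = trait + rng.normal(scale=0.8, size=(30, 10))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Test-retest reliability, by contrast, is typically reported as the correlation between scores from two administrations of the same instrument, and inter-rater reliability as an agreement index such as Cohen's kappa or an intraclass correlation.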

Reliability and validity for A level psychology - Psychteacher

French (1990) offers situational examples of when each method of validity may be applied.

First, as an example of criterion-related validity, take the position of millwright.

Test Reliability and Validity Defined

Often, when developing, modifying, or interpreting the validity of a given instrument, rather than viewing or testing each type of validity individually, researchers and evaluators test for evidence of several different forms of validity collectively.

Validity and Reliability - SlideShare

If the correlation is high, it can be said that the test has a high degree of validation support, and its use as a selection tool would be appropriate.
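As a minimal sketch of that computation, assuming aptitude-test scores and supervisors' job-performance ratings are available for a sample of incumbent millwrights (the data below are simulated with NumPy and SciPy purely for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: aptitude-test scores and supervisors' performance
# ratings for 25 incumbent millwrights.
rng = np.random.default_rng(2)
test_scores = rng.normal(loc=50, scale=10, size=25)
performance = 0.6 * test_scores + rng.normal(scale=8, size=25)

# The validity coefficient is the test-criterion correlation.
r, p = pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")
```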

Second, the content validation method may be used when you want to determine whether there is a relationship between the behaviors measured by a test and the behaviors involved in the job.