

Reliability refers to the degree to which an assessment tool produces stable and consistent results.
There are several ways of measuring reliability:
• Test-retest reliability. This is obtained by administering the same test twice, over a period of time, to the same group of individuals. The two sets of results are then correlated to evaluate the stability of the test over time (illustrated in the sketch after this list).
• Parallel forms reliability. This is obtained by administering different versions of an assessment tool (both versions must contain items that probe the same skills and knowledge) to the same group of individuals. The results of the two versions can then be correlated to evaluate the consistency of results across alternate versions.
• Inter-rater reliability – This is used to assess the degree to which different judges or raters agree in their assessment decisions. It is useful because observers will not always interpret their findings in the same way; they may disagree on how well certain material demonstrates the skill being assessed.
• Internal consistency reliability – This is used to evaluate the degree to which different test items that probe the same construct produce comparable results. It can be estimated as an average inter-item correlation (obtained by taking all the items on a test that probe the same construct, determining the correlation coefficient for each pair of items, and finally averaging these correlation coefficients) or as split-half reliability (obtained by ‘splitting in half’ all items of a test that are intended to probe the same area of knowledge to form two ‘sets’ of items; the test is administered to a group of individuals, the total score for each set is computed, and the split-half reliability is the correlation between the two set totals, as shown in the sketch after this list).
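As a concrete illustration of the two correlation-based checks above, here is a minimal Python sketch. The score arrays are made-up, hypothetical data used only to show the arithmetic: the test-retest coefficient is the correlation between two administrations of the same test, and the split-half coefficient is the correlation between the totals of two halves of the items.

```python
# Minimal sketch of test-retest and split-half reliability (hypothetical data).
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two score arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Test-retest reliability: the same test given twice to the same group.
scores_time1 = [12, 15, 9, 20, 17, 14]
scores_time2 = [13, 14, 10, 19, 18, 13]
print("Test-retest reliability:", pearson(scores_time1, scores_time2))

# Split-half reliability: items probing the same construct are split into two
# sets, each person's total is computed per set, and the two totals correlated.
item_scores = np.array([
    [3, 4, 2, 5, 4, 3],   # one row of item scores per person
    [2, 3, 3, 4, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [1, 2, 2, 3, 2, 1],
])
odd_total = item_scores[:, ::2].sum(axis=1)    # items 1, 3, 5
even_total = item_scores[:, 1::2].sum(axis=1)  # items 2, 4, 6
print("Split-half reliability:", pearson(odd_total, even_total))
```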
You have to make sure the study is reliable. This kind of check can also be applied to documentation reviews in historical research: if two or more authors report a certain point as fact, we may consider that data reliable and use it to make our study as accurate and reliable as possible, although this does not make their entire document or work reliable.

Validity refers to how well a test measures what it is purported to measure.

Types of validity:
• Face validity – This is not a very scientific type of validity; it simply ascertains that the measure appears to be assessing the intended construct under study;
• Construct validity – This is used to ensure that the measure actually measures what it is intended to measure, and not other variables;
• Criterion-related validity – This is used to predict future or current performance; it correlates test results with another criterion of interest (see the sketch after this list);
• Formative validity – This is applied to outcomes assessments to assess how well a measure can provide information to help improve the program under study;
• Sampling validity (or content validity) – This ensures that the measure covers the broad range of areas within the concept under study.
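To make the idea of correlating a test with a criterion concrete, here is a minimal sketch along the same lines as the reliability example. The test scores and the later performance ratings are invented for illustration only; the validity coefficient is simply the correlation between the test and the external criterion.

```python
# Minimal sketch of criterion-related validity (hypothetical data).
import numpy as np

# Test scores and a later performance measure used as the criterion of interest.
test_scores = np.array([55.0, 62.0, 48.0, 70.0, 66.0, 59.0])
later_performance = np.array([3.1, 3.6, 2.8, 4.2, 3.9, 3.3])

# The validity coefficient is the correlation between test and criterion.
validity_coefficient = float(np.corrcoef(test_scores, later_performance)[0, 1])
print("Criterion-related validity coefficient:", validity_coefficient)
```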
It is important to ensure the results of a study are valid; if they are not, they become meaningless to our project. If the results do not measure what they aim to measure, they cannot be used to answer the research question, which is the main aim of the study, nor can they be used to generalise any findings, and the study becomes a waste of time and effort.