Problems in Measurement:
Measurement in an ideal research study should be precise and unambiguous. This objective, however, is often not met in its entirety. The researcher must therefore be aware of the sources of error in measurement. The following are the possible sources of error in measurement.
- Respondent: At times the respondent may be reluctant to express strong negative feelings, or he may have very little knowledge but be unwilling to admit his ignorance. Such reluctance is likely to result in an interview of ‘guesses.’ Transient factors like fatigue, boredom, anxiety, etc. may also limit the ability of the respondent to respond accurately and fully.
- Situation: Situational factors may also come in the way of correct measurement. Any condition which places a strain on the interview can have serious effects on the interviewer-respondent rapport. For instance, if someone else is present, he can distort responses by joining in or merely by being present. If the respondent feels that anonymity is not assured, he may be reluctant to express certain feelings.
- Measurer: The interviewer can distort responses by rewording or reordering questions. His behaviour, style and looks may encourage or discourage certain replies from respondents. Careless mechanical processing may distort the findings. Errors may also creep in because of incorrect coding, faulty tabulation and/or statistical calculations, particularly in the data-analysis stage.
- Instrument: Error may arise because of a defective measuring instrument. The use of complex words beyond the comprehension of the respondent, ambiguous meanings, poor printing, inadequate space for replies, response choice omissions, etc. are a few things that make the measuring instrument defective and may result in measurement errors. Another type of instrument deficiency is poor sampling of the universe of items of concern.
The researcher must know that correct measurement depends on successfully meeting all of the problems listed above. He must, to the extent possible, try to eliminate, neutralize or otherwise deal with all the possible sources of error so that the final results are not contaminated.
A test must also be reliable. Reliability is the “self-correlation of the test.” It shows the extent to which the results obtained are consistent when the test is administered once, or more than once, on the same sample with a reasonable gap. Consistency in results obtained in a single administration is the index of internal consistency of the test, and consistency in results obtained upon testing and retesting is the index of temporal consistency. Reliability thus includes both internal consistency and temporal consistency. A test, to be called sound, must be reliable, because reliability indicates the extent to which the scores obtained in the test are free from those internal defects of standardization which are likely to produce errors of measurement.
Types of Reliability:
(i) Internal reliability
(ii) External reliability
- Internal Reliability: Internal reliability assesses the consistency of results across items within a test.
- External Reliability: External reliability refers to the extent to which a measure is consistent from one use to another.
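One widely used index of internal consistency, not named above but standard in psychometrics, is Cronbach's alpha. The sketch below computes it in plain Python; the item scores are hypothetical, purely for illustration.

```python
# Cronbach's alpha, a standard index of internal consistency:
#   alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
# where k is the number of items.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one score list per test item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical data: four items answered by five respondents
# (rows = items, columns = respondents).
items = [
    [3, 4, 3, 5, 2],
    [3, 5, 3, 4, 2],
    [2, 4, 3, 5, 3],
    [3, 4, 2, 5, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```

Values near 1 indicate that the items hang together; low or negative values suggest the items are not measuring the same trait.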
Errors in Reliability:
At times scores are not consistent, because other factors also affect reliability.
There is always the chance of a 5% error in reliability, which is regarded as acceptable.
Validity is another prerequisite for a test to be sound. Validity indicates the extent to which the test measures what it intends to measure, when compared with some outside independent criterion. In other words, it is the correlation of the test with some outside criterion. The criterion should be an independent one and should be regarded as the best index of the trait or ability being measured by the test. Generally, the validity of a test depends upon its reliability, because a test which yields inconsistent results (poor reliability) is ordinarily not expected to correlate with an outside independent criterion.
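Since validity is described above as the correlation of the test with an outside criterion, it can be sketched as a Pearson correlation between test scores and criterion scores. Both score lists below are made up for illustration; the "supervisor ratings" criterion is a hypothetical example.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: scores of ten people on a new aptitude test,
# and an outside independent criterion (e.g. supervisor ratings).
test_scores = [52, 61, 45, 70, 58, 66, 49, 74, 55, 63]
criterion   = [ 6,  7,  5,  9,  6,  8,  5,  9,  6,  7]
print(f"validity coefficient = {pearson(test_scores, criterion):.2f}")
```

A coefficient near 1 would suggest the test measures what the criterion captures; a coefficient near 0 would suggest it does not.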
TYPES OF ERRORS
(i) Random error
(ii) Systematic error
(i) Random error
Random error exists in every measurement and is often the major source of uncertainty. These errors have no particular assignable cause and can never be totally eliminated or corrected. They are caused by the many uncontrollable variables that are an inevitable part of every analysis made by human beings. These variables are impossible to identify fully, and even when some are identified, they cannot be measured because most of them are so small.
(ii) Systematic error
Systematic error is caused by instruments, machines and measuring tools; it is not due to individuals. Systematic error is acceptable in the sense that we can identify, fix and handle it.
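The distinction between the two error types can be illustrated with a small simulation; the true value and bias below are arbitrary choices, not from the text. Averaging many readings cancels random error (it is zero-mean) but leaves systematic error untouched, which is why the latter must be fixed at the instrument.

```python
import random
from statistics import mean

# Simulate repeated measurement of a true value of 100.0.
# Random error: zero-mean noise that varies unpredictably between readings.
# Systematic error: a constant instrument bias added to every reading.
random.seed(42)  # fixed seed so the sketch is repeatable
TRUE_VALUE = 100.0
BIAS = 2.5       # hypothetical miscalibration of the instrument

random_only = [TRUE_VALUE + random.gauss(0, 1) for _ in range(10_000)]
with_bias = [TRUE_VALUE + BIAS + random.gauss(0, 1) for _ in range(10_000)]

# The random error averages out; the systematic bias does not.
print(f"mean with random error only: {mean(random_only):.2f}")
print(f"mean with systematic bias:   {mean(with_bias):.2f}")
```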
WAYS OF FINDING RELIABILITY:
Following are the methods to check reliability:
- Test-retest method
- Alternate form
- Split-half method
The test-retest method is the oldest and most commonly used method of testing reliability. It assesses the external consistency of a test; examples of appropriate tests include questionnaires and psychometric tests. It measures the stability of a test over time.
A typical assessment would involve giving participants the same test on two separate occasions. Everything, from start to finish, should be the same on both administrations. The results of the first test are then correlated with those of the second. If the same or similar results are obtained, external reliability is established.
The timing of the test is important: if the interval is too brief, participants may recall information from the first test, which could bias the results. Alternatively, if the interval is too long, it is feasible that the participants could have changed in some important way, which could also bias the results.
The utility and worth of a psychological test decrease with time, so the test should be revised and updated. When tests are not revised, systematic error may arise.
In the alternate form method, two equivalent forms of the test are administered to the same group of examinees. An individual is given one form of the test and, after a period of time, a different version of the same test. The two forms of the test are then correlated to yield a coefficient of equivalence.
In the alternate form method there is no need to wait a long time between administrations.
It is, however, a demanding and risky task to construct two tests of equivalent level.
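Both the test-retest and alternate-form methods come down to correlating two sets of scores from the same examinees: a high coefficient indicates stability over time (test-retest) or equivalence of forms (coefficient of equivalence). A minimal sketch with made-up scores:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for eight examinees on two administrations of the
# same test (test-retest), or on two equivalent forms (alternate form).
first = [20, 25, 18, 30, 22, 27, 19, 24]
second = [22, 24, 17, 31, 23, 26, 20, 25]
print(f"reliability coefficient = {pearson(first, second):.2f}")
```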
The split-half method assesses the internal consistency of a test. It measures the extent to which all parts of the test contribute equally to what is being measured. The test is typically split into odd- and even-numbered items. The reason is that when constructing a test we usually arrange the items in order of increasing difficulty; if we put items (1, 2, … 10) in one half and items (11, 12, … 20) in the other, all the easy items would go to one group and all the difficult items to the second group.
When we split the test we should also match format and theme across the halves, e.g. multiple-choice questions with multiple-choice questions, or fill-in-the-blanks with fill-in-the-blanks.
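The odd/even split described above can be sketched as follows. The correlation between the two halves is conventionally stepped up with the Spearman-Brown formula, r_full = 2r / (1 + r), since each half is only half as long as the full test (the formula is standard in psychometrics, though not named above). The item scores are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: per-respondent lists of item scores, in item order."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

# Hypothetical data: six respondents, eight right/wrong (1/0) items each.
scores = [
    [1, 1, 1, 1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 1, 0],
]
print(f"split-half reliability = {split_half_reliability(scores):.2f}")
```

Splitting by odd and even positions, as the text recommends, keeps the difficulty of the two halves comparable when items are ordered from easy to hard.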