Data evaluation included identification of the research aim, the presence of a research question, the type of methodology, data collection processes, sample information, data analysis techniques and study outcomes.
Analysis, interpretation, and use of evaluation data
The approach to data analysis and interpretation of evaluation data will depend largely on the type of information collected and how it is going to be used and presented (e.g., as part of a journal article or report, or as an instant representation of feedback about an activity at a public event such as a physical chart, an online graphic, or via social media).
Evaluation data from Telling Stories have been used in numerous ways, including:
- Statistical analysis of web usage for use in journal articles, presentations, and impact statements for research purposes;
- Thematic analysis of post-launch interviews in reports, to explore longer-term views of the resource and its impact on teaching practice, and to support funding applications;
- Storyteller quotes and testimonials in education sessions, presentations, journal articles, and promotional material, to evidence the rationale for developing the resource and the motivation of storytellers to share their experiences.
Data interpretation refers to the implementation of processes through which data is reviewed for the purpose of arriving at an informed conclusion. The interpretation of data assigns a meaning to the information analyzed and determines its significance and implications.
The importance of data interpretation is evident, which is why it needs to be done properly. Data is likely to arrive from multiple sources and often enters the analysis process in no particular order. Data interpretation is also highly subjective: the nature and goal of interpretation will vary from project to project, depending largely on the type of data being analyzed. While several different processes may be implemented according to the nature of the data, the two broadest and most common categories are quantitative analysis and qualitative analysis.
The scale of measurement for the data must be decided early, as it determines which analyses and interpretations are valid. The four scales are:
- Nominal Scale: non-numeric categories that cannot be ranked or compared quantitatively. Categories are mutually exclusive and exhaustive.
- Ordinal Scale: categories that are mutually exclusive and exhaustive and that have a logical order. Quality ratings (e.g., good, very good, fair) and agreement ratings (e.g., agree, strongly agree, disagree) are examples of ordinal scales.
- Interval Scale: data grouped into ordered categories with equal distances between the categories, but with an arbitrary zero point (e.g., temperature in degrees Celsius).
- Ratio Scale: combines the features of the other three scales and adds a true zero point, so ratios between values are meaningful.
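The distinctions above can be illustrated with a minimal Python sketch using hypothetical example values for each scale (the variable names and data are illustrative, not drawn from the Telling Stories evaluation):

```python
# Hypothetical examples of each measurement scale.
nominal = ["nurse", "teacher", "storyteller"]     # categories, no order
ordinal = ["poor", "fair", "good", "very good"]   # ordered, but gaps are not equal
interval = [10.0, 20.0, 30.0]                     # equal gaps, arbitrary zero (e.g., deg C)
ratio = [0, 150, 300]                             # true zero (e.g., page views)

# Ordinal values can be ranked...
assert ordinal.index("good") > ordinal.index("fair")

# ...but only ratio-scale values support meaningful ratios:
assert ratio[2] / ratio[1] == 2   # "twice as many page views" is valid
# By contrast, 20 deg C is NOT "twice as hot" as 10 deg C,
# because the interval scale's zero point is arbitrary.
```

The key practical consequence is that the scale restricts the statistics you may compute: counts for nominal data, medians for ordinal data, means for interval data, and ratios only for ratio data.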
Qualitative Data Interpretation
Qualitative data analysis can be summed up in one word: categorical. With qualitative analysis, data is not described through numerical values or patterns, but through descriptive context (i.e., text). Typically, narrative data is gathered by employing a wide variety of person-to-person techniques. These techniques include:
Documents: much as patterns of behavior can be observed, different types of documentary sources can be coded and divided according to the type of material they contain.
Observations: detailing behavioral patterns that occur within an observation group. These patterns could be the amount of time spent in an activity, the type of activity and the method of communication employed.
Interviews: one of the best collection methods for narrative data. Enquiry responses can be grouped by theme, topic or category. The interview approach allows for highly focused data segmentation.
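Grouping interview responses by theme, as described above, can be sketched in a few lines of Python. The themes and excerpts here are hypothetical stand-ins, not actual Telling Stories data:

```python
from collections import defaultdict

# Hypothetical coded interview excerpts: (theme, excerpt) pairs
# produced by a researcher during thematic coding.
coded_excerpts = [
    ("teaching practice", "I use the stories in my seminars now."),
    ("emotional impact", "Hearing real voices was very moving."),
    ("teaching practice", "The clips prompt richer class discussion."),
]

# Group excerpts under their assigned theme.
themes = defaultdict(list)
for theme, excerpt in coded_excerpts:
    themes[theme].append(excerpt)

# Summarize how many excerpts support each theme.
for theme, excerpts in sorted(themes.items()):
    print(f"{theme}: {len(excerpts)} excerpt(s)")
```

In practice this coding step is done in qualitative analysis software or by hand; the sketch only shows the grouping logic that underlies reporting "responses grouped by theme".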
Quantitative Data Interpretation
If quantitative data interpretation could be summed up in one word (and it really can’t), that word would be “numerical.” There are few certainties in data analysis, but you can be sure that if the research you are engaging in has no numbers involved, it is not quantitative research. Quantitative analysis refers to a set of processes by which numerical data is analyzed. More often than not, it involves the use of descriptive statistics such as the mean, median and standard deviation. Let’s quickly review the most common statistical terms:
Mean: a mean represents the numerical average for a set of responses. When dealing with a data set (or multiple data sets), the mean represents a central value of the set of numbers: the sum of the values divided by the number of values in the data set. Other terms for the same concept are arithmetic mean, average and mathematical expectation.
Standard deviation: another statistical term common in quantitative analysis. The standard deviation reveals how the responses are distributed around the mean and describes the degree of consistency within the responses; together with the mean, it provides a useful summary of a data set.
Frequency distribution: a measurement of how often each response appears within a data set. When using a survey, for example, a frequency distribution can show the number of times each ordinal-scale response (e.g., agree, strongly agree, disagree) was chosen. Frequency distribution is extremely useful for determining the degree of consensus among data points.
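The three statistics above can be computed directly with Python's standard library. The survey responses below are hypothetical, coded onto a 1-5 agreement scale for the numeric calculations:

```python
import statistics
from collections import Counter

# Hypothetical ordinal survey responses and their numeric codes.
responses = ["agree", "strongly agree", "agree", "disagree", "agree", "neutral"]
codes = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}
values = [codes[r] for r in responses]

mean = statistics.mean(values)    # central value: sum of values / number of values
stdev = statistics.stdev(values)  # spread of the responses around the mean
freq = Counter(responses)         # frequency distribution of each response

print(f"mean = {mean:.2f}, standard deviation = {stdev:.2f}")
print(freq.most_common())         # e.g., 'agree' appears most often
```

Note that coding an ordinal scale numerically and taking its mean is a common but debated convenience, since ordinal categories do not guarantee equal distances between points.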