Applied Sociology: Evaluation Research 101
If you have taken a sociology class, you know that sociology has many practical applications. Some sociologists use the tools of the discipline to help organizations make decisions—this can include anything from a small nonprofit to your university and even the government.
Evaluation research can take on many forms, but put simply, its purpose is to determine whether or not a particular program, technique, or approach to addressing an issue is effective. This can be very helpful when deciding how an organization might spend its time or money. Why invest in a program that isn’t effective, or assume that something won’t work without first testing it and finding out?
Imagine, for example, that you have designed a new program intended to help more students graduate from high school. Someone who has no particular stake in the results might be the best person to evaluate this program. That’s where applied sociologists come in with their methodological tool kits to help an organization, such as a school board, make a decision about this program.
The research would consist of an experiment to test your hypothesis that students who participate in your new program will graduate from high school at higher rates than students who do not. Researchers will need to create an experimental group and a control group, drawn randomly from the same population.
This selection process is vital: If you cherry-pick the best students to join this program, then your results will be tainted because these students might be more likely to graduate from high school regardless of the intervention.
After researchers randomly assign people to the experimental and control groups, the experimental group participates in your program and the control group does not. If you are comparing two or more programs, researchers might create multiple groups, but in its simplest form, a control group experiences no particular change to what its members have been doing already.
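To make the assignment step concrete, here is a minimal sketch in Python. The roster, the function name `assign_groups`, and the fixed seed are illustrative assumptions for this example, not part of any particular study's protocol.

```python
import random

def assign_groups(students, seed=42):
    """Randomly split a roster into experimental and control groups.

    Shuffling the whole roster before splitting gives every student the
    same chance of landing in either group, which is what protects the
    comparison from cherry-picking.
    """
    pool = list(students)
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    rng.shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (experimental, control)

# Hypothetical example: 200 students drawn from the same population
roster = [f"student_{i}" for i in range(200)]
experimental, control = assign_groups(roster)
```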
Results might take a while, depending on when this program takes place. You will have to wait and see who graduates from high school and then compare the two groups. Did those in your program graduate at higher rates? If so, was this difference statistically significant, or unlikely to be the result of chance? Were the results especially significant for one subset of your group (based on gender, for instance)? Can you identify factors that might limit the magnitude of your findings—did a lot of people drop out of the program, and if so why?
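Once the graduation counts are in, a two-proportion z-test is one common way to check whether the gap between the groups is unlikely to be due to chance. The sketch below uses only the standard library; the function name and the graduation counts are hypothetical, chosen only to illustrate the calculation.

```python
import math

def two_proportion_ztest(grads_a, n_a, grads_b, n_b):
    """Two-sided z-test for a difference in graduation rates.

    Returns (z, p_value). A small p-value (conventionally < 0.05)
    suggests the difference is unlikely to be due to chance alone.
    """
    p_a, p_b = grads_a / n_a, grads_b / n_b
    pooled = (grads_a + grads_b) / (n_a + n_b)  # pooled graduation rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 82 of 100 program students graduated vs. 70 of 100 controls
z, p = two_proportion_ztest(82, 100, 70, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # here p is just under 0.05
```

A test like this only answers the chance question; the follow-up questions about subgroups and dropout would require looking at the data in finer detail.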
Applied sociologists then evaluate the findings and make recommendations based on the results, which might include suggestions for improving the program.
Why is this research important? Not only is a lot at stake for the students involved—high school graduation is a key marker both socially and economically—but also when resources are limited it is valuable to have data to help decision makers determine whether a program is worth the investment.
Ideally, most public policy decisions would be data-driven, but this is not always the case. Sometimes a program is cut even though it has been proven to be effective, and sometimes a program continues because it is popular although there is no data demonstrating that it meets the goals it claims to achieve.
Sociologists even conduct evaluation research on their own teaching methods to learn which techniques might be most effective. The journal Teaching Sociology publishes evaluations to help instructors implement best practices in the classroom. One recent article in the journal evaluated the effectiveness of including discussion groups and simulations in research methods courses. The author compared test scores, depth of understanding, attendance, and student evaluations and found relatively little difference compared with traditional lectures.
Does this mean there is no point to incorporating discussions or activities in a research methods class? Not necessarily. In fact, one could argue that although the test scores weren’t significantly higher on many measures, they weren’t dramatically lower either. The author concludes, “Perhaps one or two discussion periods per class session and eight simulations were not sufficient to produce measurable positive impacts. This study was also limited to only one course taught by one instructor at one university.” In other words, there is more work to be done evaluating this and other new teaching techniques.
Evaluation research can help us fine-tune work in small-scale settings, such as classrooms, as well as in public policies that affect millions of people.