As we mentioned, there are two types of evaluation: process evaluation, and impact evaluation. They complement each other.
A process evaluation asks, "Was the response implemented as planned? Did all of the response components work?" Or, stated more bluntly, "Did you do what you said you would do?" This is a question of accountability.
Let’s begin with a hypothetical example. Though fictitious, this example is based on an actual anti-prostitution effort in London (Matthews, 1992). We will return to this example repeatedly in this guide to illustrate numerous points.
After a careful analysis, a problem-solving team determines that to control a street prostitution problem, they will ask the city's traffic engineering department to make a major thoroughfare one-way and create several dead-end streets to thwart cruising by "johns." This will be implemented immediately after a comprehensive crackdown on the prostitutes in the target area. Arrested prostitutes, if convicted, are to be given probation under the condition that they cannot be in the target area for a year. Finally, a non-profit organization will assist women who want to leave sex work to gain the necessary skills to become legitimately employed. The vice squad, district patrol officers, prosecutor, local judges, probation office, sheriff's department, traffic engineering department, and non-profit organization have all agreed to this plan.
A process evaluation would look at whether the crackdown was implemented, how many arrests were made during the crackdown, whether the street patterns were altered as planned, how many prostitutes asked for assistance in gaining new job skills, and how many prostitutes were able to find legitimate employment. The process evaluation would also examine whether all of this occurred in the planned sequence. Here is what the process evaluation found: The crackdown did not occur until after the street alterations had been made. Only a fraction of the prostitutes operating in the area were arrested, and none of them sought job skills. Based on this, one would suspect that the plan was not fully carried out or was not carried out in the specified sequence. One might conclude that the response was a colossal failure. The fact is, however, this assessment gives us no evidence of success or failure, because a process evaluation only answers the question, “What actions were taken?” It does not answer the question, “What happened to the problem?”
To determine what happened to the problem, one needs an impact evaluation. An impact evaluation asks the questions: Did the problem decline substantially? If so, did the response cause this decline? Continuing with the example given above, let’s look at how this might work. During the analysis stage of the problem-solving process, patrol officers and vice detectives conducted a census of prostitutes operating in the target area. They also asked the traffic engineering department to install traffic counters on the major thoroughfare and critical side streets to measure traffic flow. These were used to determine how customers moved through the area. The vice squad made covert video recordings of the target area to document the methods by which prostitutes interacted with potential customers. All of this was done before a response was selected, and the information gained helped the team create the response.
After the response was implemented (though not the planned response, as we have seen), the team repeated these measures. They discovered that instead of the 23 prostitutes counted in the first census, only 10 could be found. They also found that there was a slight decline in traffic on the major thoroughfare on Friday and Saturday nights, but not at other times. However, there was a substantial decline in side-street traffic on Friday and Saturday evenings. New covert video recordings showed that prostitutes in the area had altered the way they approached vehicles and that they were acting more cautiously. In short, the team had evidence that the problem had declined from what it had been before the response.
So what caused the problem to decline? This question may not be as important as it first appears. After all, if the goal was to reduce or eliminate the problem and this was achieved, what difference does it make what the cause was? It does not matter, unless you are interested in using the same form of response in similar situations in the future. If you have no interest in using the response again, all that matters is that the goal has been achieved. Then, the resources devoted to addressing the problem can be used on a more pressing concern. But if you believe that the response can be used again, it is very important to determine whether the response was responsible for the decline of the problem.
Let’s assume that the prostitution problem-solving team believed that the response might be useful for addressing similar problems. The response, though not implemented according to plan, might have caused the decline, but it was also possible that something else caused the decline. There are two reasons that the team took this second possibility seriously. First, the actual response departed from the planned response, which had been designed to fit the problem. If the planned response had been implemented, the team would have had a plausible explanation for the decline in the problem. But the jury-rigged nature of the actual response makes it a far less plausible explanation for the decline. Second, the impact evaluation was not particularly strong. Later, we will discuss why this was a weak evaluation and what can be done to strengthen it.
Process and impact evaluations answer different questions, and their combined results are often highly informative. Table 1 summarizes the information that can be gleaned from both evaluations. As will be seen in Appendix B, the interpretation of this table depends on the type of design used for the impact evaluation.
When a response is implemented as planned (or nearly so), the conclusions are much easier to interpret (cells A and B in Table 1). When the response is not implemented as planned, we have more difficulty determining what happened and what to do next (cells C and D). Cell D is particularly troublesome because all you really know is that “we did not do it and it did not work.” Should you try to implement your original plan, or should you start over from scratch?
Outcomes that fall into cell C are worth further discussion. The decline in the problem means that you could call an end to this problem-solving process and go on to something else. If the problem has declined substantially, this might be satisfactory. If, however, the problem is still large, you do not know if the response should be continued. Alternatively, you could seek a different response, on the assumption that the response is not working well and something else is needed. Additionally, you do not know whether the response will be useful for similar problems.
A process evaluation involves comparing the planned response to what actually occurred. Information about how the response was implemented usually becomes apparent while managing a problem-solving process, but only if you look for it. If the vice squad is supposed to make a series of arrests of prostitutes in the target area, one can determine this from departmental records and discussions with members of the vice squad. Nevertheless, there will be judgment calls. For example, how many arrests are required? The plan may have called for the arrest of 75 percent of the prostitutes, but only 60 percent were arrested. It may be difficult to determine whether this is a serious violation of the response plan. Much of a process evaluation is descriptive: these things were done, in this order, by these people, using the following procedures. Nevertheless, numbers can help. In our example, data on traffic volume showed where street alterations had changed driving patterns, and the changes in driving patterns were consistent with what had been anticipated in the response plan.
In short, a process evaluation tells what happened in the response, when it happened, and to whom it happened. Though it does not tell whether the response made a difference in the problem, it is very useful for determining how to interpret impact evaluation results.
TABLE 1: INTERPRETING RESULTS OF PROCESS AND IMPACT EVALUATIONS

| Impact evaluation results | Process evaluation: implemented nearly as planned | Process evaluation: not implemented, or implemented in a radically different manner than planned |
|---|---|---|
| Problem declined | A. Evidence that the response caused a decline in the problem. | C. Suggests that other factors may have caused the decline in the problem, or that the response was accidentally effective. |
| Problem did not decline | B. Evidence that the response was ineffective and that a different response should be attempted. | D. Little was learned; perhaps better results would have been noted if the response had been implemented as planned, but this is speculative. |