Chapter 3: Focus the Evaluation Design

Vocabulary (Centers for Disease Control and Prevention, 2011)

Overview

The amount of information you can gather concerning your program is potentially limitless. Evaluations, however, are always restricted by the number of questions that can realistically be asked and answered with quality, the methods that can be employed, the feasibility of data collection, and the available resources. These issues are at the heart of Step 3 in the CDC framework: focusing the evaluation. The scope and depth of any program evaluation depend on program and stakeholder priorities; available resources, including financial resources; staff and contractor availability; and the amount of time committed to the evaluation. Program staff should work with the ESW to determine the priority and feasibility of candidate evaluation questions and to identify the uses of the results before designing the evaluation plan. In this part of the plan, you will apply the purposes of the evaluation, its uses, and the program description to narrow the evaluation questions and focus the evaluation on program improvement and decision making. In this step, you may begin to notice the iterative nature of developing the evaluation plan as you revisit aspects of Step 1 and Step 2 to inform the decisions to be made in Step 3.

Useful evaluations are not about special research interests or what is easiest to implement but about what information will be used by the program, stakeholders (including funders), and decision-makers to improve the program and make decisions. Establishing the focus of the evaluation began with the identification of the primary purposes and the primary intended users of the evaluation; it was further solidified through the selection of the ESW. The purposeful intention to use evaluation information, rather than just produce another evaluation report, starts at the very beginning, with program planning and your evaluation plan. You need to engage stakeholders' interest and prepare them for evaluation use. This step helps everyone conceptualize what the evaluation can and cannot deliver.

It is important to collaboratively focus the evaluation design with your ESW based on the identified purposes, program context, logic model, and stage of development. Additionally, issues of priority, feasibility, and efficiency need to be discussed with the ESW and those responsible for the implementation of the evaluation. Transparency is particularly important in this step. Stakeholders and users of the evaluation will need to understand why some questions were identified as high priorities while others were rejected or delayed.

Developing Evaluation Questions

In this step, it is important to solicit evaluation questions from your various stakeholder groups based on the stated purposes of the evaluation. The questions should then be considered through the lens of the logic model, the program description, and the program's stage of development. Evaluation questions should be checked against the logic model; changes may be made to either the questions or the logic model, reinforcing the iterative nature of the evaluation planning process. The stage of development discussed in the previous chapter will help narrow the evaluation questions further.

It is important to remember that a program may experience characteristics of several stages simultaneously once past the initial planning stage. Ask yourself how long your program has been in existence. If your program is in the planning stage, it is unlikely that measuring distal outcomes will be useful for informing program decision making. However, in a multi-year evaluation plan, you may begin as early as year one to plan for and develop the surveillance and evaluation systems and baseline information needed to measure those distal outcomes in the final initiative year. In another scenario, you may have a coalition that has been established for 10 years and is in the maintenance stage, yet contextual changes may require you to rethink the programmatic approach. In this situation, you may want an evaluation that addresses both planning-stage questions (Are the right people at the table, and are they truly engaged?) and maintenance-stage questions (Are we having the intended programmatic impact?). Questions can be further prioritized based on the ESW's and program's information needs as well as feasibility and efficiency considerations.

Often, if a funder requires an evaluation plan, you might notice text like the following: "Submit with application a comprehensive written evaluation plan that includes activities for both process and outcome measures." Distinguishing between process and outcome evaluation is much like mapping your program's stage of development onto the program logic model. In general, process evaluation focuses on the first three boxes of the logic model: inputs, activities, and outputs (CDC, 2008). Discussing this distinction with your ESW can further sharpen the focus of your evaluation.

EXAMPLE OF A PROCESS TO DEVELOP EVALUATION QUESTIONS

Program: SafeTeens in Lincoln County. The county health department is starting a project to form a teen safety coalition and conduct a seatbelt campaign.

The various stakeholders may have different evaluation questions they want answered, and the answers will come at different stages of the program.

Stakeholder | Evaluation Question | Stage of Program Development
Teens | How did the rate of seatbelt use change after the campaign? | Planning
Law Enforcement | How many seatbelt citations were issued before and after the campaign? | Planning and maintenance
School Teachers | How many teens are attending coalition meetings? | Maintenance
Dept. of Transportation | How did the rate of seatbelt use change after changing education materials? | Implementation
Health Dept. | How did the rate of seatbelt use change after the campaign? | Planning

 

Planners need to assess the feasibility of each question.

Evaluation Question:

How did the rate of seatbelt use change after the campaign?

Data Needed to Answer Question:

Rates of seatbelt use before and after the campaign

Required Assumptions:

Seatbelt use depends on attitudes.
Attitudes may be changed through marketing.

Resources Needed:

Observational studies of seatbelt use.
Documented rates of seatbelt use.

Limitations:

Availability of teens to conduct the observational studies.
Weather, visibility, and traffic conditions.

Evaluation Question:

How many seatbelt citations were issued before and after the campaign?

Data Needed to Answer Question:

Number of seatbelt citations issued before and after the campaign

Required Assumptions:

Records will be available.
Other factors did not affect the citation rate.

Resources Needed:

Records from law enforcement citations.
Historical rates from different periods.

Limitations:

It may be difficult to get permission to access the data.
The data may not be accurate.

Evaluation Question:

How many teens are attending coalition meetings?

Data Needed to Answer Question:

Attendance records for meetings

Required Assumptions:

Records are being kept accurately. 
Attendance reflects engagement.

Resources Needed:

Person to keep attendance.
Process to record and maintain rosters.

Limitations:

Records can be lost.
Records may not be accurate.

Evaluation Question:

How did the rate of seatbelt use change after changing education materials?

Data Needed to Answer Question:

Rates of seatbelt use before and after changing materials

Required Assumptions:

Different materials were in use during the periods when the different rates were documented.
Education changes behavior.

Resources Needed:

Historical data on rates of seatbelt use.
Data from seatbelt observation studies.

Limitations:

It may be difficult to change materials in a statewide system.
Historical records may not be accurate.

 

After assessing stakeholders' expectations and the feasibility of each question, the final evaluation questions can be written.

Evaluation Question | How does the answer to this question help achieve the evaluation purpose?
How did the rate of seatbelt use change after marketing and engaging students? | This will show if the campaign makes a difference in seatbelt usage.
How many seatbelt citations were issued by law enforcement before and after the marketing messages? | This will show if seatbelt citations make a difference in seatbelt usage.
How many teens are attending coalition meetings and engaged in safety activities? | This will show if teens can be engaged to focus on safety.

 

  

Process and Outcome Evaluation in Harmony in the Evaluation Plan

Just as a program can experience the characteristics of several stages of development at once, a single evaluation plan can, and often does, include both process and outcome evaluation questions. Excluding process evaluation questions in favor of outcome evaluation questions often sacrifices an understanding of the foundation that supports the outcomes.


Process Evaluation Focus. For a more in-depth description, see the appendix.

Outcome evaluation, as the term implies, focuses on the last three outcome boxes of the logic model: short-term, intermediate, and long-term outcomes.

Outcome Evaluation Focus. For a more in-depth description, see the appendix.

As you and the ESW take ownership of the evaluation, you will find that honing the evaluation focus will likely solidify interest in it. The selection of final evaluation questions should balance what is most useful for meeting your program's information needs with what meets your stakeholders' information needs. Having stakeholders participate in the selection of questions increases the likelihood that they will secure evaluation resources, provide access to data, and use the results, and it increases the ESW's personal ownership of the evaluation. However, given that resources are limited, the evaluation cannot answer all potential questions.

The ultimate goal is to focus the evaluation design such that it reflects the program stage of development, selected purpose of the evaluation, uses, and questions to be answered. Transparency related to the selection of evaluation questions is critical to stakeholder acceptance of evaluation results and possibly the continued support of the program.

Even with an established multi-year plan, Step 3 should be revisited with your ESW annually (or more often if needed) to determine whether the priorities and feasibility assumptions still hold for the planned evaluation activities. This highlights the dynamic nature of the evaluation plan. Ideally, your plan should be intentional and strategic by design and should generally cover multiple years for planning purposes. But the plan is not set in stone: it should be flexible, because resources and priorities change, and adaptive, because opportunities and programs change. For example, a new funding opportunity may add a short-term project to your overall program, requiring the insertion of a smaller evaluation plan specific to the newly funded project, written with the overall program evaluation goals and objectives in mind. Or resources could be cut for a particular program, requiring a reduction in the evaluation budget and forcing the planned evaluation to be scaled back or delayed. Your evaluation plan should accommodate these scenarios while still focusing on the evaluation goals and objectives of the program and the ESW.

Budget and Resources

Discussion of the budget and resources (financial and human) that can be allocated to the evaluation will likely be part of your feasibility discussion. Best Practices for Comprehensive Tobacco Control Programs (2007) recommends that at least 10% of total program resources be allocated to surveillance and program evaluation; for example, a program with a $500,000 annual budget would set aside at least $50,000 for these activities. The questions and subsequent methods selected will have a direct relationship to the financial resources available, the evaluation team's skills, and environmental constraints (for example, you might like to conduct in-person home interviews of the target population, but the neighborhood may not be one that interviewers can visit safely). Stakeholder involvement may help in advocating for the resources needed to implement the evaluation and answer priority questions. Sometimes, however, you will not have the resources necessary to answer the evaluation questions you care about most. A thorough discussion of feasibility and a recognition of real constraints will build a shared understanding of what the evaluation can and cannot deliver. The process of selecting appropriate methods to answer the priority questions and discussing feasibility and efficiency is iterative; Steps 3–5 of planning the evaluation will often be visited concurrently, in a back-and-forth progression, until the group comes to consensus.


References

Centers for Disease Control and Prevention. (2011). Developing an effective evaluation plan. https://www.cdc.gov/obesity/downloads/cdc-evaluation-workbook-508.pdf

