Formative Evaluation: Formative evaluation ensures that a program or program activity is feasible, appropriate, and acceptable before it is fully implemented. It is usually conducted when a new program or activity is being developed or when an existing one is being adapted or modified.
Impact Evaluation: Impact evaluation assesses a program’s effectiveness in achieving its ultimate goals.
Outcome Evaluation: Outcome evaluation measures a program’s effectiveness in the target population. It does this by assessing the progress in the outcomes or outcome objectives that the program is trying to achieve.
Process Evaluation: Process evaluation determines whether program activities have been implemented as intended and resulted in certain outputs.
Summative Evaluation: Summative evaluation involves making judgments about the efficacy of a program or course at its conclusion.
Evaluating a health program seems to come at the end, when we want to show how the program worked. However, we need to anticipate the end from the beginning: ideas for evaluation should be included early in planning. Public health efforts are often required to justify their effectiveness to qualify for renewed funding, which makes evaluation more important all the time.
Based on our goals and objectives, we should already know what we need to measure. Next, we need to plan how to collect this data. Too many programs are unable to perform an evaluation because they didn’t collect the needed baseline data at the beginning.
(CDC, 2012)
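To see why baseline data matters, consider a minimal sketch in Python, with entirely hypothetical numbers: the follow-up measurement alone cannot show change; only the comparison against a baseline can.

```python
# Hypothetical outcome measurements; all numbers are invented for illustration.
baseline_smoking_rate = 0.24   # measured BEFORE the program starts
followup_smoking_rate = 0.19   # measured after one year of the program

# Without the baseline, the 19% follow-up figure says nothing about change.
absolute_change = baseline_smoking_rate - followup_smoking_rate
relative_change = absolute_change / baseline_smoking_rate

print(f"Absolute reduction: {absolute_change * 100:.1f} percentage points")
print(f"Relative reduction: {relative_change:.0%} from baseline")
```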
Evaluation falls into one of two broad categories: formative and summative.
Formative evaluations start at the beginning, during program formation.
Process evaluation is one of the formative evaluations. It determines whether program activities have been implemented as intended and resulted in certain outputs.
Summative evaluations help summarize results at the end.
Outcome evaluation is one of the summative evaluations, assessing the progress toward the desired outcomes.
(CDC, n.d.-a)
The following table shows several different types of evaluation and how they can be used.
| Evaluation Type | When to use | What it shows | Why it is useful |
| --- | --- | --- | --- |
| Formative Evaluation (Evaluability Assessment, Needs Assessment) | During the development of a new program. When an existing program is being modified or is being used in a new setting or with a new population. | Whether the proposed program elements are likely to be needed, understood, and accepted by the population you want to reach. The extent to which an evaluation is possible, based on the goals and objectives. | Allows modifications to be made to the plan before full implementation begins. Maximizes the likelihood that the program will succeed. |
| Process Evaluation (Program Monitoring) | As soon as program implementation begins. During operation of an existing program. | How well the program is working. The extent to which the program is being implemented as designed. Whether the program is accessible and acceptable to its target population. | Provides an early warning for any problems that may occur. Allows programs to monitor how well their plans and activities are working. |
| Outcome Evaluation (Objectives-Based Evaluation) | After the program has made contact with at least one person or group in the target population. | The degree to which the program is having an effect on the target population's behaviors. | Tells whether the program is effective in meeting its objectives. |
| Economic Evaluation (Cost Analysis, Cost-Effectiveness Evaluation, Cost-Benefit Analysis, Cost-Utility Analysis) | At the beginning of a program. During the operation of an existing program. | What resources are being used in a program and their costs (direct and indirect) compared to outcomes. | Provides program managers and funders a way to assess costs relative to effects and to judge how effectively funds were used (see the sketch after this table). |
| Impact Evaluation | During the operation of an existing program at appropriate intervals. At the end of a program. | The degree to which the program meets its ultimate goal. | Provides evidence for use in policy and funding decisions. |
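As a concrete illustration of the economic evaluation row above, here is a minimal Python sketch using hypothetical costs and outcomes (not figures from any real program). It computes a simple cost-effectiveness ratio and a benefit-cost ratio; real economic evaluations would also handle indirect costs, discounting, and uncertainty.

```python
# Hypothetical economic evaluation of a smoking-cessation program.
# All figures are invented for illustration only.
program_cost = 250_000.0    # total program costs (direct and indirect), in dollars
quits_achieved = 400        # measured program outcome: participants who quit
benefit_per_quit = 1_800.0  # assumed averted health-care costs per quit, in dollars

# Cost-effectiveness: dollars spent per unit of outcome achieved.
cost_per_quit = program_cost / quits_achieved        # $625 per quit

# Cost-benefit: monetized benefits relative to costs (ratio > 1 favors the program).
total_benefit = quits_achieved * benefit_per_quit    # $720,000
benefit_cost_ratio = total_benefit / program_cost    # 2.88

print(f"Cost-effectiveness: ${cost_per_quit:,.0f} per participant who quit")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```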
The framework developed by the CDC offers steps to follow and standards to be achieved for an effective evaluation.
(CTB, n.d.; CDC, 2017)
[Note: This will be used in Section 6 of the Term Project]
[Image: The CDC Framework for Program Evaluation. Access the appendix for a description of the image.]
Below are the six recommended steps (CDC, 2021a):
1. Engage Stakeholders
Stakeholders must be part of the evaluation to ensure that their unique perspectives are understood. Although all stakeholders are interested in your program’s success, they may come from varied backgrounds and have different perspectives on what they want evaluated.
2. Describe the Program
Summarize the intervention being evaluated and explain what the program is trying to accomplish. Illustrate the program's core components and elements, its capacity to effect change, its stage of development, and how it fits into the larger organizational and community environment.
3. Focus the Evaluation Design
Depending on what you want to learn, some types of evaluation will be better suited than others. Funders may have specific evaluation requirements. The design of the evaluation may be one of the following:
Experimental designs use random assignment to compare the effect of an intervention between similar groups. An example is comparing a randomly assigned group of students in an after-school reading program with those not in the program.
Quasi-experimental methods make comparisons between unequal groups or within a group over time. An example is an interrupted time series in which the intervention is introduced sequentially across different individuals, groups, or contexts.
Observational or case study methods use comparisons within a group to describe and explain what happens, such as a comparative case study with multiple communities.
Each method has its own biases and limitations, so using multiple evaluation methods, known as a mixed-methods approach, can give a fuller understanding.
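To make the experimental option concrete, here is a minimal sketch using only the Python standard library and simulated data: participants are randomly assigned, and the program effect is estimated as the difference in mean outcomes between the two arms.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

# 200 hypothetical participants, identified by index.
participants = list(range(200))

# Random assignment: shuffle, then split into intervention and control arms.
random.shuffle(participants)
intervention = participants[:100]
control = participants[100:]

# Simulated outcome scores (e.g., a reading assessment). The numbers are
# invented so the intervention arm averages about 5 points higher.
intervention_scores = [random.gauss(75, 10) for _ in intervention]
control_scores = [random.gauss(70, 10) for _ in control]

# Because assignment was random, the two arms are comparable, and the
# difference in group means estimates the program's effect.
effect = mean(intervention_scores) - mean(control_scores)
print(f"Estimated program effect: {effect:.1f} points")
```

In a real evaluation the scores would come from measurement rather than simulation, and the estimate would be reported with a confidence interval or significance test.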
4. Gather Credible Evidence
Having credible evidence strengthens the evaluation results as well as the recommendations that follow from them. When more stakeholders participate, they will be more likely to accept the evaluation's conclusions and to act on its recommendations.
5. Justify Conclusions
Evidence must be carefully considered from a number of different stakeholders' perspectives to reach conclusions that are substantiated. Conclusions are justified if they are linked to the evidence and judged against values set by the stakeholders. From the conclusions reached, the stakeholders will help you form recommendations about future actions: to continue or expand the program, or to try a different approach.
6. Ensure Use and Share Lessons Learned
Ideally, lessons learned in an evaluation will be used in decision making and future actions. This requires strategically watching for opportunities to communicate and influence. It can begin in the earliest stages of the process and continue throughout the evaluation.
Dissemination is the process of communicating the lessons learned from an evaluation to the right people in a timely fashion. The goal for dissemination is to achieve full disclosure and impartial reporting.
What reports should be disseminated?
Effects of the program, according to stakeholder expectations: Find out what the key people want to know, and be sure to address the information you know they will want to hear about.
Differences in the behaviors of key individuals: Find out how your program efforts have changed the behaviors of your targets and agents of change. Have any of your strategies caused people to cut down on risky behaviors? Have any increased behaviors that protect them from risk? Are key people in the community cooperating with your efforts?
Differences in conditions in the community: Find out what has changed. Is the public aware of your coalition or group's efforts? Do they support you? What steps are they taking to help you achieve your goals? Have your efforts caused any changes in local laws or practices?
You'll probably also include specific data, annual reports, quarterly or monthly reports from the monitoring system, and anything else that is mutually agreed upon between the organization and the evaluation team.
(CDC, 2021b)
The Joint Committee on Standards for Educational Evaluation developed "The Program Evaluation Standards" to ensure evaluations are well-designed and fair. These standards offer principles to follow for interventions related to community health. They also help to guard against an imbalanced or impractical evaluation.
The 30 specific standards are grouped into four categories:
Utility
Feasibility
Propriety
Accuracy
Utility standards ensure that the evaluation is useful to all stakeholders and to future readers of the information. The seven utility standards follow:
Stakeholder Identification: People who are involved in (or will be affected by) the evaluation should be identified so that their needs can be addressed.
Evaluator Credibility: The people conducting the evaluation should be both trustworthy and competent so that the evaluation will be generally accepted as credible or believable.
Information Scope and Selection: Information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.
Values Identification: The perspectives, procedures, and rationale used to interpret the findings should be carefully described so that the bases for judgments about merit and value are clear.
Report Clarity: Evaluation reports should clearly describe the program being evaluated, including its context and the purposes, procedures, and findings of the evaluation. This will help ensure that essential information is provided and easily understood.
Report Timeliness and Dissemination: Significant midcourse findings and evaluation reports should be shared with intended users so that they can be used in a timely fashion.
Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that encourage follow through by stakeholders so that the evaluation will be used.
The feasibility standards ensure that the evaluation makes sense: that the planned steps are both viable and pragmatic. The three feasibility standards follow:
Practical Procedures: The evaluation procedures should be practical. This helps to keep disruption of everyday activities to a minimum while needed information is obtained.
Political Viability: The evaluation should be planned and conducted with anticipation of the different positions or interests of various groups. This should help in obtaining their cooperation so that possible attempts by these groups to curtail evaluation operations or to misuse the results can be avoided or counteracted.
Cost Effectiveness: The evaluation should be efficient and produce enough valuable information that the resources used can be justified.
The propriety standards ensure that the evaluation is ethical and conducted with regard for the rights and interests of those involved. The eight propriety standards follow:
Service Orientation: Evaluations should be designed to help organizations effectively serve the needs of all of the targeted participants.
Formal Agreements: The responsibilities in an evaluation (what is to be done, how, by whom, when) should be agreed to in writing so that those involved are obligated to follow all conditions of the agreement or formally renegotiate it.
Rights of Human Subjects: Evaluation should be designed and conducted to respect and protect the rights and welfare of all participants in the study.
Human Interactions: Evaluators should respect basic human dignity and worth when working with other people in an evaluation so that participants don't feel threatened or harmed.
Complete and Fair Assessment: The evaluation should be complete and fair in its examination. It should record both strengths and weaknesses of the program being evaluated. This allows strengths to be built upon and problem areas to be addressed.
Disclosure of Findings: The people working on the evaluation should ensure that all of the evaluation findings, along with the limitations of the evaluation, are accessible to everyone affected by the evaluation and any others with expressed legal rights to receive the results.
Conflicts of Interest: Conflicts of interest should be dealt with openly and honestly so that they do not compromise the evaluation processes and results.
Fiscal Responsibility: The evaluator’s use of resources should reflect prudent, ethical, and sound accountability procedures. This ensures that expenditures are accounted for and are appropriate.
The accuracy standards ensure that the evaluation findings are correct.
There are 12 accuracy standards:
Program Documentation: The program should be described and documented clearly and accurately so that what is being evaluated is clearly identified.
Context Analysis: The context in which the program exists should be thoroughly examined so that likely influences on the program can be identified.
Described Purposes and Procedures: The purposes and procedures of the evaluation should be monitored and described in enough detail that they can be identified and assessed.
Defensible Information Sources: The sources of information used in a program evaluation should be described in enough detail that the adequacy of the information can be assessed.
Valid Information: The information gathering procedures should be chosen, developed, and implemented in such a way that they will assure a valid interpretation.
Reliable Information: The information gathering procedures should be chosen, developed, and implemented so that they will assure sufficiently reliable information.
Systematic Information: The information from an evaluation should be systematically reviewed and any errors found should be corrected.
Analysis of Quantitative Information: Quantitative information—data from observations or surveys—in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
Analysis of Qualitative Information: Qualitative information—descriptive information from interviews and other sources—in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
Justified Conclusions: The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth.
Impartial Reporting: Reporting procedures should guard against the distortion caused by personal feelings and biases of people involved in the evaluation so that evaluation reports reflect the evaluation findings fairly.
Meta-evaluation: The evaluation itself should be evaluated against these and other pertinent standards so that it is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.
The six steps and 30 standards can be integrated and applied together, as illustrated in the following chart:
(CDC, n.d.-b)
| Steps in Evaluation Practice | Relevant Standards | Group/Item |
| --- | --- | --- |
| Engaging stakeholders | Stakeholder identification | Utility/A |
| | Evaluator credibility | Utility/B |
| | Formal agreements | Propriety/B |
| | Rights of human subjects | Propriety/C |
| | Human interactions | Propriety/D |
| | Conflict of interest | Propriety/G |
| | Metaevaluation | Accuracy/L |
| Describing the program | Complete and fair assessment | Propriety/E |
| | Program documentation | Accuracy/A |
| | Context analysis | Accuracy/B |
| Focusing the evaluation design | Evaluation impact | Utility/G |
| | Practical procedures | Feasibility/A |
| | Political viability | Feasibility/B |
| | Cost effectiveness | Feasibility/C |
| | Service orientation | Propriety/A |
| | Complete and fair assessment | Propriety/E |
| | Fiscal responsibility | Propriety/H |
| | Described purposes and procedures | Accuracy/C |
| Gathering credible evidence | Information scope and selection | Utility/C |
| | Defensible information sources | Accuracy/D |
| | Valid information | Accuracy/E |
| | Reliable information | Accuracy/F |
| | Systematic information | Accuracy/G |
| Justifying conclusions | Values identification | Utility/D |
| | Analysis of quantitative information | Accuracy/H |
| | Analysis of qualitative information | Accuracy/I |
| | Justified conclusions | Accuracy/J |
| Ensuring use and sharing lessons learned | Report clarity | Utility/E |
| | Report timeliness and dissemination | Utility/F |
| | Disclosure of findings | Propriety/F |
| | Impartial reporting | Accuracy/K |
Using this framework for program evaluation will help you find the best way to evaluate and use evaluation results to make your program more effective. The framework encourages an evaluation approach designed to engage all interested stakeholders in a process that welcomes their participation.
CDC. (2012, May 11). Step 3: Focus the Evaluation Design. Centers for Disease Control and Prevention. https://www.cdc.gov/evaluation/guide/step3/index.htm
CDC. (2017, May 15). A Framework for Program Evaluation. Centers for Disease Control and Prevention. https://www.cdc.gov/evaluation/framework/index.htm
CDC. (2021a, April 9). Evaluation Steps. Centers for Disease Control and Prevention. https://www.cdc.gov/evaluation/steps/index.htm
CDC. (2021b, April 9). Evaluation Standards. Centers for Disease Control and Prevention. https://www.cdc.gov/evaluation/standards/index.htm
CDC. (n.d.-a). Types of Evaluation. Centers for Disease Control and Prevention. https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf
CDC. (n.d.-b). Cross-reference of steps and relevant standards. Centers for Disease Control and Prevention. https://www.cdc.gov/evaluation/standards/stepsandrelevantstandards.pdf
CTB. (n.d.). Chapter 36, Section 1. A Framework for Program Evaluation: A Gateway to Tools. Community Tool Box. https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/framework-for-evaluation/main