Program Evaluation Framework

This post introduces the Framework for Program Evaluation in Public Health developed by the Centers for Disease Control and Prevention (CDC), together with the 30 standards for effective evaluation adapted from the Joint Committee on Standards for Educational Evaluation. A summary of the framework, related resources, and a detailed description of its steps and standards follow.

 

Summary


Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. The Framework guides public health professionals in their use of program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize essential elements of program evaluation. The framework comprises steps in program evaluation practice and standards for effective program evaluation. Adhering to the steps and standards of this framework will allow an understanding of each program's context and will improve how program evaluations are conceived and conducted.

Evaluation can be tied to routine program operations when the emphasis is on practical, ongoing evaluation that involves all program stakeholders, not just evaluation experts.  Informal evaluation strategies may be adequate for ongoing program assessment.  However, when the stakes of potential decisions or program changes increase, employing evaluation procedures that are explicit, formal, and justifiable becomes important. 

Understanding the logic, reasoning, and values of evaluation that are reflected in this framework can lead to lasting impacts, such as basing decisions on systematic judgments instead of unfounded assumptions.

Purposes
The framework was developed to:

  • Summarize and organize the essential elements of program evaluation
  • Provide a common frame of reference for conducting evaluations
  • Clarify the steps in program evaluation
  • Review standards for effective program evaluation
  • Address misconceptions about the purposes and methods of program evaluation

Scope


Throughout this report, the term "program" is used to describe the object of evaluation; it applies to any organized public health action. This definition is deliberately broad because the framework can be applied to almost any public health activity, including:

  • Direct service interventions
  • Community mobilization efforts
  • Research initiatives
  • Surveillance systems
  • Policy development activities
  • Outbreak investigations
  • Laboratory diagnostics
  • Communication campaigns
  • Infrastructure building projects
  • Training and education services
  • Administrative systems
  • Others

Additional terms defined in this report were chosen carefully to create a basic evaluation vocabulary for public health professionals.

How to Assign Value
Questions regarding values, in contrast with those regarding facts, generally involve three interrelated issues:

  • Merit (i.e., quality)
  • Worth (i.e., cost-effectiveness)
  • Significance (i.e., importance)

Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions:

  • What will be evaluated? (i.e., what is "the program" and in what context does it exist?)
  • What aspects of the program will be considered when judging program performance?
  • What standards (i.e. type or level of performance) must be reached for the program to be considered successful?
  • What evidence will be used to indicate how the program has performed?
  • What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
  • How will the lessons learned from the inquiry be used to improve public health effectiveness?

These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.

Steps and Standards
The following table summarizes the steps in program evaluation practice, with the most important subpoints for each, as well as the standards that govern effective program evaluation. Brief definitions for each concept appear in the sections that follow.

Steps in Evaluation Practice

Engage stakeholders
Those involved, those affected, primary intended users

Describe the program
Need, expected effects, activities, resources, stage, context, logic model

Focus the evaluation design
Purpose, users, uses, questions, methods, agreements

Gather credible evidence
Indicators, sources, quality, quantity, logistics

Justify conclusions
Standards, analysis/synthesis, interpretation, judgment, recommendations

Ensure use and share lessons learned
Design, preparation, feedback, follow-up, dissemination

Standards for "Effective" Evaluation

Utility
Serve the information needs of intended users

Feasibility
Be realistic, prudent, diplomatic, and frugal

Propriety
Behave legally, ethically, and with due regard for the welfare of those involved and those affected

Accuracy
Reveal and convey technically accurate information

The steps and standards are used together throughout the evaluation process. For each step, there is a subset of standards that is generally most relevant to consider; these are summarized in the table entitled "Cross Reference of Steps and Relevant Standards."

Applying the Framework

Conducting Optimal Evaluations

Public health professionals can no longer question whether to evaluate their programs; instead, the appropriate questions are:

  • What is the best way to evaluate?
  • What are we learning from evaluation?
  • How will we use the learning to make public health efforts more effective?

The framework for program evaluation helps answer these questions by guiding its users in selecting evaluation strategies that are useful, feasible, ethical, and accurate.

To use the recommended framework in a specific program context requires skill in both the science and art of program evaluation. The challenge is to devise an optimal as opposed to an ideal strategy. An optimal strategy is one that accomplishes each step in the framework in a way that accommodates the program context and meets or exceeds all relevant standards.

Assembling an Evaluation Team

Harnessing and focusing the efforts of a collaborative group is one approach to conducting an optimal evaluation. A team approach can succeed when small groups of carefully selected persons decide what the evaluation must accomplish, and they pool resources to implement the plan. Stakeholders might have varying levels of involvement on the team that correspond to their own perspectives, skills, and concerns. A leader must be designated to coordinate the team and maintain continuity throughout the process; thereafter, the steps in evaluation practice guide the selection of team members. For example,

  • Those who are diplomatic and have diverse networks can engage other stakeholders and maintain involvement.
  • To describe the program, persons are needed who understand the program's history, purpose, and practical operation in the field. In addition, those with group facilitation skills might be asked to help elicit unspoken expectations regarding the program and to expose hidden values that partners bring to the effort. Such facilitators can also help the stakeholders create logic models that describe the program and clarify its pattern of relationships between means and ends.
  • Decision makers and others who guide program direction can help focus the evaluation design on questions that address specific users and uses. They can also set logistic parameters for the evaluation's scope, timeline, and deliverables.
  • Scientists, particularly social and behavioral scientists, can bring expertise to the development of evaluation questions, methods, and evidence-gathering strategies. They can also help analyze how a program operates in its organizational or community context.
  • Trusted persons who have no particular stake in the evaluation can ensure that participants' values are treated fairly when applying standards, interpreting facts, and reaching justified conclusions.
  • Advocates, creative thinkers, and members of the power structure can help ensure that lessons are learned from the evaluation and that the new understanding influences future decision-making regarding program strategy.

All organizations, even those that are able to find evaluation team members within their own agency, should collaborate with partners and take advantage of community resources when assembling an evaluation team. This strategy increases the available resources and enhances the evaluation's credibility. Furthermore, a diverse team of engaged stakeholders has a greater probability of conducting a culturally competent evaluation (i.e., one that understands and is sensitive to the program's cultural context). Although challenging for the coordinator and the participants, the collaborative approach is practical because of the benefits it brings (e.g., reduces suspicion and fear, increases awareness and commitment, increases the possibility of achieving objectives, broadens knowledge base, teaches evaluation skills, strengthens partnerships, increases the possibility that findings will be used, and allows for differing perspectives).

Addressing Common Concerns

Common concerns regarding program evaluation are clarified by using this framework. Evaluations might not be undertaken because they are misperceived as having to be costly. However, the expense of an evaluation is relative; the cost depends on the questions being asked and the level of certainty desired for the answers. A simple, low-cost evaluation can deliver valuable results.

Rather than discounting evaluations as time-consuming and tangential to program operations, the framework encourages conducting evaluations that are timed strategically to provide necessary feedback. This makes integrating evaluation with program practice possible.

Another concern centers on the perceived technical demands of designing and conducting an evaluation. Although circumstances exist where controlled environments and elaborate analytic techniques are needed, most public health program evaluations do not require such methods. Instead, the practical approach endorsed by this framework focuses on questions that will improve the program by using context-sensitive methods and analytic techniques that summarize accurately the meaning of qualitative and quantitative information.

Finally, the prospect of evaluation troubles some program staff because they perceive evaluation methods as punitive, exclusionary, and adversarial. The framework encourages an evaluation approach that is designed to be helpful and engages all interested stakeholders in a process that welcomes their participation.

 

Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide

CDC's Office of Strategy and Innovation (OSI) produced a self-study manual organized around the steps in the Framework, entitled "Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide." This is a public domain document that can be shared without restriction.

This version was developed to provide a practical tool for applying each step in the CDC Framework. The manual presents the same content as the CDC publication but uses a more user-friendly layout, cross-cutting case examples, and in-depth instructions and worksheets.

Adapted Version for Community Stakeholders

The Center for the Advancement of Community-based Public Health (CBPH) produced an adapted version of the framework entitled, "An Evaluation Framework for Community Health Programs."  This is a public domain document that can be shared without restriction. 

This version was developed to provide a practical tool for engaging community stakeholders in program evaluation activities.  Community stakeholders are often prevented from participating because explanations of evaluation are written mainly for academic and professional readers.  This document explains evaluation by speaking directly to people who live and work in communities.  Adaptations were based on feedback gathered systematically from front-line practitioners and community members across the country. The result is a retooled version of the framework that is more accessible to community members and staff of community-based organizations. The CBPH version presents essentially the same content as the CDC publication using less technical language, more graphics, and more user-friendly layout. It also includes case examples and quotes provided by community-based practitioners.

Instructional Video and Workbook

"Practical Evaluation of Public Health Programs" (course # VC0017) is a five-hour distance-learning course organized around CDC's recommended framework for program evaluation. Developed through CDCs Public Health Training Network, the course consists of two videotapes and a workbook, which can be used by individuals for self-study or by small groups with optional enrichment activities. Continuing education credit is available for this course. For more information, visit the Public Health Training Network website or call 1-800-41-TRAIN (1-800-418-7246).

Course materials may be purchased from the Public Health Foundation by calling the toll-free number 1-877-252-1200, or by using their on-line order form. The cost is approximately $40.00.

For informational purposes, the workbook can be viewed free-of-charge over the internet.

Gateway to the Community Tool Box

The Community Tool Box (CTB) is a highly acclaimed internet resource for health promotion and community development.  It contains a wealth of practical information about how to do the work of public health and social change on a community level.  Because they consider program evaluation to be a critical part of successful community-based work, the CTB team used the basic elements of the framework to create a unique gateway to evaluation ideas and tools.

CDC Program Evaluation Framework Details

Overview

Evaluation is an Essential Organizational Practice
Program evaluation is an essential organizational practice in public health; however, it is not practiced consistently across program areas, nor is it well-integrated into the day-to-day management of most programs.

Program evaluation is also necessary to fulfill CDC's operating principles for public health, which include

  • Using science as a basis for decision-making and action;
  • Expanding the quest for social equity;
  • Performing effectively as a service agency;
  • Making efforts outcome-oriented; and
  • Being accountable

These operating principles imply several ways to improve how public health activities are planned and managed. They underscore the need for programs to develop clear plans, inclusive partnerships, and feedback systems that allow learning and ongoing improvement to occur. One way to ensure that new and existing programs honor these principles is for each program to routinely conduct practical evaluations that inform its management and improve its effectiveness.

History


During the past three decades, the practice of evaluation has evolved as a discipline with new definitions, methods, approaches, and applications to diverse subjects and settings. Despite these refinements, a basic organizational framework for program evaluation in public health practice had not been developed.

In May 1997, the CDC Director and executive staff recognized the need for such a framework and the need to combine evaluation with program management. Further, the need for evaluation studies that demonstrate the relationship between program activities and prevention effectiveness was emphasized. CDC convened an Evaluation Working Group, charged with developing a framework that summarizes and organizes the basic elements of program evaluation.

Focus
The working group translated its charge into a focus on developing products and services in two areas:

  • Defining and organizing the essential elements of program evaluation
  • Leading institutional change to promote evaluation practice at the CDC and throughout the public health system

Products
Initial efforts of the working group were dedicated to creating:

  • Recommendations for promoting program evaluation at the CDC

Continuing efforts of the working group are now dedicated to maintaining leadership and providing critical support and consultation to program staff and stakeholders who are exploring the benefits, challenges, and opportunities that program evaluation holds for improving the effectiveness of public health efforts.

Standards for Effective Program Evaluation

 

"A standard is a principle mutually agreed to by people engaged in a professional practice, that, if met, will enhance the quality and fairness of that professional practice, for example, evaluation."

                                ---
Joint Committee on Educational Evaluation

The second element of the framework is a set of 30 standards for assessing the quality of evaluation activities; these standards are organized into four groups: utility, feasibility, propriety, and accuracy.

These standards, adopted from the Joint Committee on Standards for Educational Evaluation, answer the question, "Will this evaluation be effective?" and are recommended as criteria for judging the quality of program evaluation efforts in public health. They have been approved as a standard by the American National Standards Institute (ANSI) and have been endorsed by the American Evaluation Association and 14 other professional organizations.

Public health professionals will recognize that the basic steps of the framework for program evaluation are part of their routine work. In day-to-day public health practice, stakeholders are consulted; program goals are defined; guiding questions are stated; data are collected, analyzed, and interpreted; judgments are formed; and lessons are shared. Although informal evaluation occurs through routine practice, having standards helps to assess whether a set of evaluative activities is well designed and working to its potential.

The standards also make conducting sound and fair evaluations practical. They are well-supported principles to follow when faced with having to compromise regarding evaluation options. The standards help avoid creating an imbalanced evaluation (e.g., one that is accurate and feasible but not useful, or one that would be useful and accurate but is infeasible).

Furthermore, the standards can be applied while planning an evaluation and throughout its implementation. The Joint Committee is unequivocal that "the standards are guiding principles, not mechanical rules. . . . In the end, whether a given standard has been addressed adequately in a particular situation is a matter of judgment." To facilitate use of the standards, however, the Joint Committee's report discusses each with an associated list of guidelines and common errors, as well as applied case examples.

The specific standards are as follows:

Utility


The utility standards are intended to ensure that an evaluation will serve the information needs of intended users.  These standards are as follows.

  1. Stakeholder Identification:  Persons involved in or affected by the evaluation should be identified, so that their needs can be addressed.
  2. Evaluator Credibility: The persons conducting the evaluation should be both trustworthy and competent to perform the evaluation, so that the evaluation findings achieve maximum credibility and acceptance.
  3. Information Scope and Selection:   Information collected should be broadly selected to address pertinent questions about the program and be responsive to the needs and interests of clients and other specified stakeholders.
  4. Values Identification:  The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for value judgments are clear.
  5. Report Clarity:  Evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation, so that essential information is provided and easily understood.
  6. Report Timeliness and Dissemination:   Significant interim findings and evaluation reports should be disseminated to intended users, so that they can be used in a timely fashion.
  7. Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the likelihood that the evaluation will be used is increased.

 

Feasibility


The feasibility standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal.  The standards are as follows:

  1. Practical Procedures:  The evaluation procedures should be practical, to keep disruption to a minimum while needed information is obtained.
  2. Political Viability: The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted.
  3. Cost Effectiveness:  The evaluation should be efficient and produce information of sufficient value, so that the resources expended can be justified.

Propriety


The propriety standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.  These standards are as follows:

  1. Service Orientation:  Evaluation should be designed to assist organizations to address and effectively serve the needs of the full range of targeted participants.
  2. Formal Agreements: Obligations of the formal parties to an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that these parties are obligated to adhere to all conditions of the agreement or formally to renegotiate it.
  3. Rights of Human Subjects: Evaluation should be designed and conducted to respect and protect the rights and welfare of human subjects.
  4. Human Interactions: Evaluators should respect human dignity and worth in their interactions with other persons associated with an evaluation, so that participants are not threatened or harmed.
  5. Complete and Fair Assessment:  The evaluation should be complete and fair in its examination and recording of strengths and weaknesses of the program being evaluated, so that strengths can be built upon and problem areas addressed.
  6. Disclosure of Findings:  The formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results.
  7. Conflict of Interest: Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.
  8. Fiscal Responsibility: The evaluator's allocation and expenditure of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

 

Accuracy

The accuracy standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine worth or merit of the program being evaluated.  The standards are as follows:

  1. Program Documentation: The program being evaluated should be described and documented clearly and accurately, so that the program is clearly identified.
  2. Context Analysis: The context in which the program exists should be examined in enough detail, so that its likely influences on the program can be identified.
  3. Described Purposes and Procedures: The purposes and procedures of the evaluation should be monitored and described in enough detail, so that they can be identified and assessed.
  4. Defensible Information Sources: The sources of information used in a program evaluation should be described in enough detail, so that the adequacy of the information can be assessed.
  5. Valid Information:  The information gathering procedures should be chosen or developed and then implemented so that they will assure that the interpretation arrived at is valid for the intended use.
  6. Reliable Information:  The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable for the intended use.
  7. Systematic Information: The information collected, processed, and reported in an evaluation should be systematically reviewed and any errors found should be corrected.
  8. Analysis of Quantitative Information: Quantitative information in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  9. Analysis of Qualitative Information: Qualitative information in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  10. Justified Conclusions: The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can assess them.
  11. Impartial Reporting: Reporting procedures should guard against the distortion caused by personal feelings and biases of any party to the evaluation, so that evaluation reports fairly reflect the evaluation findings.
  12. Metaevaluation: The evaluation itself should be formatively and summatively evaluated against these and other pertinent standards, so that its conduct is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

Citation: Joint Committee on Educational Evaluation, James R. Sanders (chair). The Program Evaluation Standards: How to Assess Evaluations of Educational Programs. 2nd edition. Thousand Oaks, CA: Sage Publications; 1994.

 

Steps in Program Evaluation

The framework emphasizes six connected steps that together can be used as a starting point to tailor an evaluation for a particular public health effort, at a particular point in time. Because the steps are all interdependent, they might be encountered in a nonlinear sequence; however, an order exists for fulfilling each step, because earlier steps provide the foundation for subsequent progress. Thus, decisions regarding how to execute a step should not be finalized until previous steps have been thoroughly addressed. The six steps are described in the sections below.

Understanding and adhering to these six steps will allow an understanding of each program's context (e.g., the program's history, setting, and organization) and will improve how most evaluations are conceived and conducted.

Engaging Stakeholders

The evaluation cycle begins by engaging stakeholders (i.e., the persons or organizations having an investment in what will be learned from an evaluation and what will be done with the knowledge). Public health work involves partnerships; therefore, any assessment of a public health program requires considering the value systems of the partners. Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, evaluation findings might be ignored, criticized, or resisted because they do not address the stakeholders' questions or values. After becoming involved, stakeholders help to execute the other steps. Identifying and engaging the following three groups are critical:

  • Those involved in program operations (e.g., sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff)
  • Those served or affected by the program (e.g., clients, family members, neighborhood organizations, academic institutions, elected officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing organizations)
  • Primary users of the evaluation (e.g., the specific persons who are in a position to do or decide something regarding the program.)  In practice, primary users will be a subset of all stakeholders identified. A successful evaluation will designate primary users early in its development and maintain frequent interaction with them so that the evaluation addresses their values and satisfies their unique information needs.

For additional details, see "Engaging Stakeholders".

Describe the Program

Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description should discuss the program's capacity to effect change, its stage of development, and how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects. Moreover, stakeholders might have differing ideas regarding program goals and purposes. Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate program effectiveness. Aspects to include in a program description are:

  • Need: A statement of need describes the problem or opportunity that the program addresses and implies how the program will respond.
  • Expected effects: Descriptions of expected effects convey what the program must accomplish to be considered successful.
  • Activities: Describing program activities (i.e., what the program does to effect change) permits specific steps, strategies, or actions to be arrayed in logical sequence. This demonstrates how each program activity relates to another and clarifies the program's hypothesized mechanism or theory of change.
  • Resources: Resources include the time, talent, technology, information, money, and other assets available to conduct program activities.
  • Stage of development: Public health programs mature and change over time; therefore, a program's stage of development reflects its maturity. A minimum of three stages of development must be recognized: planning, implementation, and effects. During planning, program activities are untested, and the goal of evaluation is to refine plans. During implementation, program activities are being field-tested and modified; the goal of evaluation is to characterize real, as opposed to ideal, program activities and to improve operations, perhaps by revising plans. During the last stage, enough time has passed for the program's effects to emerge; the goal of evaluation is to identify and account for both intended and unintended effects.
  • Context: Descriptions of the program's context should include the setting and environmental influences (e.g., history, geography, politics, social and economic conditions, and efforts of related or competing organizations) within which the program operates. Understanding these environmental influences is required to design a context-sensitive evaluation and will aid users in interpreting findings accurately and assessing the generalizability of the findings.
  • Logic model: A logic model, which describes the sequence of events for bringing about change, synthesizes the main program elements into a picture of how the program is supposed to work. Often, this model is displayed in a flow chart, map, or table to portray the sequence of steps leading to program results.
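
As an illustration of the logic model element just described, the following minimal sketch (in Python) represents a logic model as an ordered set of components and prints the hypothesized chain from inputs to outcomes. The example program and its components are hypothetical placeholders and are not part of the framework itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """A minimal logic model: the hypothesized chain linking resources to results."""
    program: str
    inputs: List[str] = field(default_factory=list)      # resources available to the program
    activities: List[str] = field(default_factory=list)  # what the program does to effect change
    outputs: List[str] = field(default_factory=list)     # direct products of the activities
    outcomes: List[str] = field(default_factory=list)    # expected effects

    def describe(self) -> str:
        stages = [("Inputs", self.inputs), ("Activities", self.activities),
                  ("Outputs", self.outputs), ("Outcomes", self.outcomes)]
        lines = [f"Logic model for: {self.program}"]
        lines += [f"  {name}: " + "; ".join(items) for name, items in stages]
        return "\n".join(lines)

# Hypothetical example program, for illustration only.
model = LogicModel(
    program="Community immunization outreach",
    inputs=["staff time", "vaccine supply", "clinic space"],
    activities=["home visits", "reminder calls"],
    outputs=["families contacted", "appointments scheduled"],
    outcomes=["increased immunization coverage"],
)
print(model.describe())
```

In practice, a logic model is usually negotiated with stakeholders and drawn as a flow chart; the structure above is only one simple way to record the agreed-upon sequence.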

For additional details, see "Describing the Program".

Focus the Evaluation Design

The direction and process of the evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible. Not all design options are equally well-suited to meeting the information needs of stakeholders. After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Among the items to consider when focusing an evaluation are the following:

  • Purpose: Articulating an evaluation's purpose (i.e., intent) will prevent premature decision-making regarding how the evaluation should be conducted. Characteristics of the program, particularly its stage of development and context, will influence the evaluation's purpose. Four general purposes exist for conducting evaluations in public health practice.
    1. Gain insight -- evaluations done for this purpose provide the necessary insight to clarify how program activities should be designed to bring about expected changes.
    2. Change practice -- evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities.
    3. Assess effects -- evaluations done for this purpose examine the relationship between program activities and observed consequences.
    4. Affect participants -- evaluations done for this purpose use the processes of evaluation to affect those who participate in the inquiry.  The logic and systematic reflection required of stakeholders who participate in an evaluation can be a catalyst for self-directed change. An evaluation can be initiated with the intent that the evaluation procedures themselves will generate a positive influence. 
  • Users: Users are the specific persons that will receive evaluation findings. Because intended users directly experience the consequences of inevitable design trade-offs, they should participate in choosing the evaluation focus. User involvement is required for clarifying intended uses, prioritizing questions and methods, and preventing the evaluation from becoming a misguided or irrelevant exercise.
  • Uses: Uses are the specific ways in which information generated from the evaluation will be applied. Several uses exist for program evaluation. Uses should be planned and prioritized with input from stakeholders and with regard for the program's stage of development and current context. All uses must be linked to one or more specific users.
  • Questions: Questions establish boundaries for the evaluation by stating what aspects of the program will be addressed. Negotiating and prioritizing questions among stakeholders further refines a viable focus for the evaluation. The question-development phase might also expose differing opinions regarding the best unit of analysis. Certain stakeholders might want to study how programs operate together as a system of interventions to effect change within a community. Other stakeholders might have questions concerning the performance of a single program or a local project within that program. Still others might want to concentrate on specific subcomponents or processes of a project.
  • Methods: The methods for an evaluation are drawn from scientific research options, particularly those developed in the social, behavioral, and health sciences. A basic classification of design types includes experimental, quasi-experimental, and observational designs. No design is intrinsically better than another under all circumstances. Evaluation methods should be selected to provide the appropriate information to address stakeholders' questions (i.e., methods should be matched to the primary users, uses, and questions). Methodology decisions also raise questions regarding how the evaluation will operate (e.g., to what extent program participants will be involved; how information sources will be selected; what data collection instruments will be used; who will collect the data; what data management systems will be needed; and what are the appropriate methods of analysis, synthesis, interpretation, and presentation). Because each method option has its own bias and limitations, evaluations that mix methods are generally more effective.
  • Agreements: Agreements summarize the evaluation procedures and clarify roles and responsibilities among those who will execute the plan. Agreements describe how the evaluation plan will be implemented by using available resources (e.g., money, personnel, time, and information). Agreements also state what safeguards are in place to protect human subjects and, where appropriate, what ethical (e.g., institutional review board) or administrative (e.g., paperwork reduction) approvals have been obtained. Creating an explicit agreement verifies the mutual understanding needed for a successful evaluation. It also provides a basis for modifying or renegotiating procedures if necessary.
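
Because these focus elements are easy to lose track of during planning, the minimal sketch below records design decisions for each element named in the list above and flags the ones still to be negotiated with stakeholders. It is an illustrative planning aid only, not part of the CDC framework, and the example entries are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# The six focus elements named in the list above.
FOCUS_ELEMENTS = ["purpose", "users", "uses", "questions", "methods", "agreements"]

@dataclass
class EvaluationFocus:
    """A lightweight record of evaluation-design decisions."""
    decisions: Dict[str, List[str]] = field(default_factory=dict)

    def missing_elements(self) -> List[str]:
        """Return the focus elements that have not yet been addressed."""
        return [element for element in FOCUS_ELEMENTS if not self.decisions.get(element)]

# Hypothetical, partially completed design focus.
focus = EvaluationFocus(decisions={
    "purpose": ["change practice"],
    "users": ["program manager", "funding agency"],
    "questions": ["Are reminder calls reaching the intended families?"],
})
print("Elements still to be negotiated with stakeholders:", focus.missing_elements())
```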

For additional details, see "Focusing the Evaluation Design".

Gather Credible Evidence

Persons involved in an evaluation should strive to collect information that will convey a well-rounded picture of the program and be seen as credible by the evaluation's primary users. Information (i.e., evidence) should be perceived by stakeholders as believable and relevant for answering their questions. Such decisions depend on the evaluation questions being posed and the motives for asking them. Having credible evidence strengthens evaluation judgments and the recommendations that follow from them. Although all types of data have limitations, an evaluation's overall credibility can be improved by using multiple procedures for gathering, analyzing, and interpreting data. Encouraging participation by stakeholders can also enhance perceived credibility. When stakeholders are involved in defining and gathering data that they find credible, they will be more likely to accept the evaluation's conclusions and to act on its recommendations. The following aspects of evidence gathering typically affect perceptions of credibility:

  • Indicators: Indicators define the program attributes that pertain to the evaluation's focus and questions. Because indicators translate general concepts regarding the program, its context, and its expected effects into specific measures that can be interpreted, they provide a basis for collecting evidence that is valid and reliable for the evaluation's intended uses. Indicators address criteria that will be used to judge the program; they therefore highlight aspects of the program that are meaningful for monitoring.
  • Sources: Sources of evidence in an evaluation are the persons, documents, or observations that provide information for the inquiry. More than one source might be used to gather evidence for each indicator to be measured. Selecting multiple sources provides an opportunity to include different perspectives regarding the program and thus enhances the evaluation's credibility. The criteria used for selecting sources should be stated clearly so that users and other stakeholders can interpret the evidence accurately and assess if it might be biased. In addition, some sources are narrative in form and others are numeric. The integration of qualitative and quantitative information can increase the chances that the evidence base will be balanced, thereby meeting the needs and expectations of diverse users. Finally, in certain cases, separate evaluations might be selected as sources for conducting a larger synthesis evaluation.
  • Quality: Quality refers to the appropriateness and integrity of information used in an evaluation. High-quality data are reliable, valid, and informative for their intended use. Well-defined indicators enable easier collection of quality data. Other factors affecting quality include instrument design, data-collection procedures, training of data collectors, source selection, coding, data management, and routine error checking. Obtaining quality data will entail trade-offs (e.g., breadth versus depth) that should be negotiated among stakeholders. Because all data have limitations, the intent of a practical evaluation is to strive for a level of quality that meets the stakeholders' threshold for credibility.
  • Quantity: Quantity refers to the amount of evidence gathered in an evaluation. The amount of information required should be estimated in advance, or where evolving processes are used, criteria should be set for deciding when to stop collecting data. Quantity affects the potential confidence level or precision of the evaluation's conclusions (a brief numeric sketch of this relationship follows the list below). It also partly determines whether the evaluation will have sufficient power to detect effects. All evidence collected should have a clear, anticipated use. Correspondingly, only a minimal burden should be placed on respondents for providing information.
  • Logistics: Logistics encompass the methods, timing, and physical infrastructure for gathering and handling evidence. Each technique for gathering evidence must be suited to the source(s), analysis plan, and strategy for communicating findings. Persons and organizations also have cultural preferences that dictate acceptable ways of asking questions and collecting information, including who would be perceived as an appropriate person to ask the questions. The techniques for gathering evidence in an evaluation must be aligned with the cultural conditions in each setting of the project. Data-collection procedures should also be scrutinized to ensure that the privacy and confidentiality of the information and sources are protected.
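
As a rough illustration of how the quantity of evidence affects precision, the sketch below computes the approximate 95% confidence-interval half-width for an observed proportion at several sample sizes. The indicator, the observed proportion, and the sample sizes are arbitrary placeholders chosen only for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence-interval half-width for a proportion p measured on n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical indicator: 60% of surveyed clients report receiving a reminder call.
p_observed = 0.60
for n in (50, 200, 800):
    moe = margin_of_error(p_observed, n)
    print(f"n={n:4d}: {p_observed:.0%} +/- {moe:.1%}")
```

Quadrupling the sample size roughly halves the margin of error, which is one reason the amount of evidence should be planned before data collection begins.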

For additional details, see "Gathering Credible Evidence".

Justify Conclusions

Evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders. Stakeholders must agree that conclusions are justified before they will use the evaluation results with confidence. Justifying conclusions on the basis of evidence includes the following five elements:

  • Standards: Standards reflect the values held by stakeholders and provide the basis for forming judgments concerning program performance. Using explicit standards for judgment is fundamental for effective evaluation because it distinguishes evaluation from other approaches to strategic management in which priorities are set without reference to explicit values. In practice, when stakeholders articulate and negotiate their values, these become the standards for judging whether a given program's performance will, for example, be considered successful, adequate, or unsuccessful. An array of value systems might serve as sources of standards. When operationalized, these standards establish a comparison by which the program can be judged.
  • Analysis and synthesis: Analysis and synthesis are methods for examining and summarizing an evaluations findings. They detect patterns in evidence, either by isolating important findings (analysis) or by combining sources of information to reach a larger understanding (synthesis). Mixed method evaluations require the separate analysis of each evidence element and a synthesis of all sources for examining patterns of agreement, convergence, or complexity. Deciphering facts from a body of evidence involves deciding how to organize, classify, interrelate, compare, and display information. These decisions are guided by the questions being asked, the types of data available, and by input from stakeholders and primary users.
  • Interpretation: Interpretation is the effort of figuring out what the findings mean and is part of the overall effort to make sense of the evidence gathered in an evaluation. Uncovering facts regarding a program's performance is not sufficient to draw evaluative conclusions. Evaluation evidence must be interpreted to appreciate the practical significance of what has been learned. Interpretations draw on information and perspectives that stakeholders bring to the evaluation inquiry and can be strengthened through active participation or interaction.
  • Judgment: Judgments are statements concerning the merit, worth, or significance of the program. They are formed by comparing the findings and interpretations regarding the program against one or more selected standards (a minimal illustration of this comparison follows the list below). Because multiple standards can be applied to a given program, stakeholders might reach different or even conflicting judgments. Conflicting claims regarding a program's quality, value, or importance often indicate that stakeholders are using different standards for judgment. In the context of an evaluation, such disagreement can be a catalyst for clarifying relevant values and for negotiating the appropriate bases on which the program should be judged.
  • Recommendations: Recommendations are actions for consideration resulting from the evaluation. Forming recommendations is a distinct element of program evaluation that requires information beyond what is necessary to form judgments regarding program performance. Knowing that a program is able to reduce the risk of disease does not translate necessarily into a recommendation to continue the effort, particularly when competing priorities or other effective alternatives exist. Thus, recommendations for continuing, expanding, redesigning, or terminating a program are separate from judgments regarding a program's effectiveness. Making recommendations requires information concerning the context, particularly the organizational context, in which programmatic decisions will be made.
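
To illustrate the judgment element described above, the following sketch compares hypothetical findings with hypothetical stakeholder-agreed performance standards and reports whether each standard is met. The indicators and thresholds are invented for illustration and are not drawn from the framework.

```python
# Stakeholder-agreed performance standards (hypothetical thresholds).
standards = {
    "immunization coverage": 0.90,   # at least 90% coverage considered successful
    "client satisfaction": 0.80,     # at least 80% of clients satisfied
}

# Findings produced by analysis and synthesis of the evidence (hypothetical values).
findings = {
    "immunization coverage": 0.86,
    "client satisfaction": 0.88,
}

# Judgment: compare each finding against its agreed-upon standard.
for indicator, threshold in standards.items():
    observed = findings[indicator]
    verdict = "meets standard" if observed >= threshold else "below standard"
    print(f"{indicator}: observed {observed:.0%} vs. standard {threshold:.0%} -> {verdict}")
```

The point of the sketch is that a judgment depends on both the evidence and the chosen standard; different stakeholders applying different thresholds to the same findings can reach different verdicts.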

For additional details, see "Justifying Conclusions".

Ensure Use and Share Lessons Learned

Assuming that lessons learned in the course of an evaluation will automatically translate into informed decision-making and appropriate action would be naive. Deliberate effort is needed to ensure that the evaluation processes and findings are used and disseminated appropriately. Preparing for use involves strategic thinking and continued vigilance, both of which begin in the earliest stages of stakeholder engagement and continue throughout the evaluation process. The following five elements are critical for ensuring use of an evaluation:

  • Design: Design refers to how the evaluation's questions, methods, and overall processes are constructed. As discussed in the third step of this framework, the design should be organized from the start to achieve intended uses by primary users. Having a clear design that is focused on use helps persons who will conduct the evaluation to know precisely who will do what with the findings and who will benefit from being a part of the evaluation.
  • Preparation: Preparation refers to the steps taken to rehearse eventual use of the evaluation findings. The ability to translate new knowledge into appropriate action is a skill that can be strengthened through practice. Building this skill can itself be a useful benefit of the evaluation. Rehearsing how potential findings (particularly negative findings) might affect decision-making will prepare stakeholders for eventually using the evidence. Preparing for use also gives stakeholders time to explore positive and negative implications of potential results and time to identify options for program improvement.
  • Feedback: Feedback is the communication that occurs among all parties to the evaluation. Giving and receiving feedback creates an atmosphere of trust among stakeholders; it keeps an evaluation on track by letting those involved stay informed regarding how the evaluation is proceeding.
  • Follow-up: Follow-up refers to the technical and emotional support that users need during the evaluation and after they receive evaluation findings. Because of the effort required, reaching justified conclusions in an evaluation can seem like an end in itself; however, active follow-up might be necessary to remind intended users of their planned uses. Follow-up might also be required to prevent lessons learned from becoming lost or ignored in the process of making complex or politically sensitive decisions. Facilitating use of evaluation findings also carries with it the responsibility for preventing misuse. Active follow-up can help prevent misuse by ensuring that evidence is not misinterpreted and is not applied to questions other than those that were the central focus of the evaluation.
  • Dissemination: Dissemination is the process of communicating either the procedures or the lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion. Although documentation of the evaluation is needed, a formal report is not always the best or even a necessary product. Planning effective communication also requires considering the timing, style, tone, message source, vehicle, and format of information products. Regardless of how communications are constructed, the goal for dissemination is to achieve full disclosure and impartial reporting. A checklist of items to consider when developing evaluation reports includes tailoring the report content for the audience, explaining the focus of the evaluation and its limitations, and listing both the strengths and weaknesses of the evaluation.

 

For additional details, see "Ensuring Use and Sharing Lessons Learned".

Citation: Centers for Disease Control and Prevention.  Framework for Program Evaluation in Public Health. MMWR 1999;48(No. RR-11).

 
