Recommended Resources for Evaluating Program Improvement Efforts (including the SSIP)
This document provides a list of recommended existing resources for state Part C and Part B 619 staff and technical assistance (TA) providers to use in supporting evaluation planning for program improvement efforts (including the State Systemic Improvement Plan, or SSIP). Many resources related to evaluation and evaluation planning are available; these were selected as the most relevant to and useful for early intervention and preschool special education.
This document was revised in February 2017 to reflect the changing needs of states in Phase III of the SSIP. As states begin implementing and evaluating their SSIP, they must present data that support their decisions to make mid-course corrections or to continue implementation without adjustments. This document retains highlighted resources from the original version, intended to support planning in Phase II, and adds new exercises and examples that state leaders and stakeholders can use to support data-based decision making during Phase III. It will continue to be updated as relevant resources become available.
The resources outlined below are organized by topic to guide readers to the resources that meet their needs, which may include infrastructure development, supports, and general evaluation planning (Phase II) as well as data collection, management, and reporting to justify plan adjustments (Phase III). Additional resources can be found on the U.S. Department of Education OSERS RDA resource page and in the SSIP Phase II Process Guide and the SSIP Phase III Process Guide.
Resources Specific to SSIP Evaluation
A Guide to SSIP Evaluation Planning
This guide was developed by the IDEA Data Center (IDC) Evaluation Workgroup for use by IDC technical assistance providers when assisting state staff in planning State Systemic Improvement Plan (SSIP) evaluations. It identifies steps and considerations for developing a high-quality SSIP evaluation plan based on OSEP’s SSIP Phase II Guidance and Review Tools. It is not intended to provide standalone guidance to states, although some state staff and non-IDC TA providers may find it useful to consult; see the guide itself for more information and for details on accessing TA.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Page 3: Step 3, Linking activities to outputs and outcomes
- Page 3: Step 4, Developing evaluation questions that address process and outcomes
Phase III: Data Collection and Reporting
- Page 5: Step 5, Identify data collection strategies
- Page 6: Step 8, Plan to share and use evaluation results along the way
- Page 19: Worksheet 11, Plan for data use and dissemination of analysis results
Source: Nimkoff, T., Fiore, T., and Edwards, J. (January 2016). A Guide to SSIP Evaluation Planning. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://ideadata.org/sites/default/files/media/documents/2021-01/2-A-Guide_to_SSIP_Evaluation_Planning.pdf
SSIP Evaluation Plan Guidance Tool
This guide was developed by the Office of Special Education Programs (OSEP) for states’ use in reviewing and further developing SSIP evaluation plans for Phase III submissions. The elements included in this tool are derived from OSEP’s indicator measurement tables and Phase II review tool. The questions for consideration included for each element will assist States as they communicate the results of their SSIP implementation activities to stakeholders and organize the Phase III SSIP submission due to OSEP on April 3, 2017.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Pages 1-2: Alignment with Phases I and II
Phase III: Data Collection and Reporting
- Pages 2-3: Procedures and Analysis
Source: Office of Special Education Programs (2016). SSIP Evaluation Plan Guidance Tool. US Department of Education. Washington, DC. Retrieved from https://ectacenter.org/~pdfs/grads360/12904.pdf
Comprehensive Evaluation Guidebooks and Manuals
The Program Manager's Guide to Evaluation, 2nd Edition
This manual is written for program managers and contains tips, worksheets, and samples to help readers understand each step of the evaluation process.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Pages 8-9: Outline of the basic evaluation steps
- Pages 30-41: Steps to prepare for an evaluation (Chapter 5)
- Pages 42-43: Sample logic model and logic model worksheet
- Pages 44-45: Sample and worksheet for describing implementation objectives in measurable terms
- Pages 46-47: Sample and worksheet for describing participant outcome objectives
- Pages 59-61: Sample outline for an evaluation plan
Phase III: Data Collection and Reporting
- Pages 74-75: Sample data collection plan
- Page 76: Worksheet for developing a data collection plan
- Page 86: Sample table for analyzing information on implementation objectives
- Pages 90-95: Sample outline for a final evaluation report
Source: Administration for Children and Families, Office of Planning, Research and Evaluation (ACF OPRE) (2010). The Program Manager's Guide to Evaluation, Second Edition. Retrieved from https://files.eric.ed.gov/fulltext/ED566135.pdf
We Did It Ourselves: An Evaluation Guidebook
This guidebook is written for the person who has never done an evaluation before. It provides step-by-step instructions on how to design and carry out an evaluation. It could also be used as a reference by people interested in certain phases of the evaluation process, such as writing performance indicators or designing a survey. The guidebook details how to:
- develop outcome statements, indicators, and evaluation questions;
- formulate an evaluation methodology and collect, assess, and summarize data; and
- develop and disseminate evaluation findings and recommendations.
The guidebook also contains a glossary, sample outcome and indicator statements, evaluation resources, and real-life stories of how community organizations used evaluation tools. This resource is lengthy but full of useful explanations of evaluation concepts, and its worksheets and resources can be used by states for the SSIP and by TA providers with states during sessions and workshops.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Page 12: Worksheet for writing outcome statements linked to issues
- Pages 35-39: Worksheets to develop your evaluation questions
- Page 58: Implementation questions
Phase III: Data Collection and Reporting
- Page 61: Worksheet for documenting strategies and activities
- Pages 62-66: Examples of documentation forms
- Pages 91-96: Examples and worksheets for developing an evaluation work plan
- Pages 154-155: Exercise for analyzing training attendance data
- Pages 166-171: Examples and worksheets for summarizing findings when preparing to write an evaluation report
Source: SRI International (2000). We Did It Ourselves: An Evaluation Guidebook. Supported by the Sierra Health Foundation. Retrieved from https://www.yumpu.com/en/document/view/50107005/we-did-it-ourselves-sierra-health-foundation
Taking Stock: A practical guide to evaluating your own programs
This manual is written for community-based organizations and provides a practical guide to program evaluation. It focuses on internal evaluation conducted by program staff, which will be useful for states planning to conduct their SSIP evaluation internally. The manual provides a clear overview of the evaluation process and covers the basic steps of planning for and conducting an internal program evaluation, including practical strategies for identifying quantitative and qualitative data.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Pages 15-19: What Are You Trying to Do? Defining Goals and Objectives (Chapter 4)
- Page 25: Evaluation Planning Chart
- Page 47: Chart showing link between program objectives and evaluation questions
- Pages 61-62: Example roadmap for evaluation design
Phase III: Data Collection and Reporting
- Pages 27-37: Finding the Evidence: Strategies for Data Collection (Chapter 6)
- Appendices: Example evaluation reports
Source: Bond, S.L., Boyd, S.E., Rapp, K.A., Raphael, J.B., and Sizemore, B.A. (1997). Taking Stock: A Practical Guide to Evaluating Your Own Programs. Horizon Research. Retrieved from https://www.dcjs.virginia.gov/sites/dcjs.virginia.gov/files/publications/victims/takingstock.pdf
Resources Specific to Evaluating Systems Change
A Framework for Evaluating Systems Initiatives
This resource is directed at state staff engaged in building an early childhood system and offers a framework for evaluating systems initiatives that connects the diverse elements involved in systems change. The executive summary includes general principles for evaluating systems initiatives and figures that support states in developing a theory of change and making decisions around evaluation planning.
Source: Coffman, J. (2007). A Framework for Evaluating Systems Initiatives (executive summary). BUILD Initiative. Retrieved from https://buildinitiative.org/resource-library/a-framework-for-evaluating-system-initiatives/
Measuring Implementation of Early Childhood Interventions at Multiple Systems Levels
This brief is written for early childhood researchers, program developers, and funders. It introduces the importance of measuring implementation at multiple system levels and proposes tools for doing so, including a cascading logic model that connects the outcomes and resources of different systems. The brief uses two illustrative examples:
- a state's effort to improve the quality of infant-toddler child care and
- a state's effort to improve child and family outcomes through the expansion of home visiting.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Page 15: Example of cascading logic model for quality improvement initiative
- Page 18: Example of cascading logic model for evidence-based home visiting program
Phase III: Data Collection and Reporting
- Page 16: Example of measurement for cascading logic model from page 15
- Page 19: Example of measurement for cascading logic model from page 18
Source: Berghout, A.M., Lokteff, M., and Paulsell, D. (2013). Measuring Implementation of Early Childhood Interventions at Multiple Systems Levels. Office of Planning, Research and Evaluation (OPRE). Retrieved from https://www.acf.hhs.gov/sites/default/files/documents/opre/levels_brief_final_002.pdf
Resources Specific to Writing Evaluation Reports
Developing an Effective Evaluation Report
This workbook is written for public health program managers, administrators, and evaluators to support their construction of an effective evaluation report. It encourages a shared understanding of what constitutes a final evaluation report among people with all levels of evaluation experience. The workbook outlines six steps for report development and provides worksheets and examples for each step, including exercises on outlining a report, communicating results, and including stakeholders. It is especially geared toward stakeholder participation in reporting, which will help states as they consider how to include stakeholders in their SSIP.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- N/A
Phase III: Data Collection and Reporting
- Page 22: Example evaluation plan methods grid that aligns goals, indicators, and measures for communicating results to outside groups
- Page 33: Communication plan table
- Page 40: Glossary of evaluation report elements
- Pages 49-50: Checklist for assessing evaluation questions
- Pages 51-52: Stakeholder interpretation meeting exercise
Source: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity. (2013). Developing an effective evaluation report: Setting the course for effective program evaluation. Retrieved from https://www.cdc.gov/eval/materials/developing-an-effective-evaluation-report_tag508.pdf
Resources Specific to Developing Logic Models to Plan for Evaluation
Logic Model Development Guide
This guide is written for a broad audience to clarify the what, why, and how of logic models: what logic models are and the different types; why a program should develop a logic model; and how a logic model can be used to guide implementation and plan for evaluation. The guide includes templates and checklists that states can use to apply the guidance to their specific SSIP, along with useful explanations and definitions of evaluation terminology.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Page 3 (Figure 2): How to read a logic model graphic
- Pages 9-10: Three approaches to logic models
Phase III: Data Collection and Reporting
- N/A
Source: W.K. Kellogg Foundation (updated 2004). Logic Model Development Guide. Retrieved from https://wkkf.issuelab.org/resource/logic-model-development-guide.html
Logic Models as a Platform for Program Evaluation Planning, Implementation, and Use of Findings
This presentation is geared toward helping practitioners develop useful logic models. The slides address the hallmarks of well-constructed, useful logic models and how to use a logic model for program evaluation planning, implementation, and use of findings. They break down and explain the different components and terminology associated with a logic model, and additional resources and sites are included. States can use this presentation as an introduction to logic models and as guidance on the key components of a meaningful logic model for their SSIP evaluation.
Most Relevant Sections:
Phase II: Planning and Infrastructure
- Slides 3 and 4: Components of a logic model and how to construct one
- Slide 11: Process diagram for creating a logic model
Phase III: Data Collection and Reporting
- N/A
Source: Honeycutt, S. and Kegler, M. C. (2010). Logic Models as a Platform for Program Evaluation Planning, Implementation, and Use of Findings. AEA/CDC 2010 Summer Evaluation Institute. Retrieved from http://comm.eval.org/viewdocument/?DocumentKey=79c7da3d-6978-452c-8e8b-7056d3626966
The contents of this document were developed under cooperative agreement numbers #H326R140006 (DaSy), #H326P120002 (ECTA Center), and #H326R140006 (NCSI) from the Office of Special Education Programs, U.S. Department of Education. Opinions expressed herein do not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government.
Project Officers: Meredith Miceli and Richelle Davis (DaSy), Julia Martin Eile (ECTA Center) and Perry Williams and Shedeh Hajghassemali (NCSI)