This document provides a list of recommended existing resources for state Part C and Part B 619 staff and technical assistance (TA) providers to use in supporting evaluation planning for program improvement efforts (including the State Systemic Improvement Plan, SSIP). Many resources related to evaluation and evaluation planning are available; these were selected as the most relevant to and useful for early intervention and preschool special education.
This document was revised in February 2017 to reflect the changing needs of states in Phase III of the SSIP. As states begin implementing and evaluating their SSIPs, they must present data that support their decisions to make mid-course corrections or to continue implementation without adjustments. This document retains highlighted resources from the original version, intended to support planning in Phase II, and adds new exercises and examples that may be used to support data-based decision-making by state leaders and stakeholders during Phase III. It will continue to be updated as relevant resources become available.
The resources outlined below are organized by topic to guide readers to the resource that meets their needs, which may include infrastructure development, supports, and general evaluation planning (Phase II), as well as data collection, management, and reporting to justify plan adjustments (Phase III). Additional resources can be found on the GRADS360 SSIP resource page and in the SSIP Phase II Process Guide and the SSIP Phase III Process Guide.
This guide was developed by the IDEA Data Center (IDC) Evaluation Workgroup for use by IDC technical assistance providers when assisting state staff in planning State Systemic Improvement Plan (SSIP) evaluations. It identifies steps and considerations for developing a high-quality SSIP evaluation plan based on OSEP’s SSIP Phase II Guidance and Review Tools. It is not intended to provide standalone guidance to states, but some state staff and non-IDC TA providers may find it useful to consult. See the guide for more information on this and on accessing TA.
Source: Nimkoff, T., Fiore, T., and Edwards, J. (January 2016). A Guide to SSIP Evaluation Planning. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://ideadata.org/files/resources/5697cca3140ba0ca5c8b4599/56996726150ba0d53f8b4592/a_guide_to_ssip_evaluation_planning/2016/01/15/a_guide_to_ssip_evaluation_planning.pdf
This guide was developed by the Office of Special Education Programs (OSEP) for states’ use in reviewing and further developing SSIP evaluation plans for Phase III submissions. The elements included in the tool are derived from OSEP’s indicator measurement tables and Phase II review tool. The questions for consideration included with each element will assist states as they communicate the results of their SSIP implementation activities to stakeholders and organize the Phase III SSIP submission due to OSEP on April 3, 2017.
Source: Office of Special Education Programs (2016). SSIP Evaluation Plan Guidance Tool. US Department of Education. Washington, DC. Retrieved from https://osep.grads360.org/#communities/pdc/documents/12904
This manual is written for program managers and contains tips, worksheets, and samples to help readers understand each step of the evaluation process.
Source: Administration for Children and Families, Office of Planning, Research and Evaluation (ACF OPRE) (2010). Retrieved from http://www.acf.hhs.gov/programs/opre/research/project/the-program-managers-guide-to-evaluation
This guidebook is written for the person who has never done an evaluation before. It provides step-by-step instructions on how to design and carry out an evaluation, and it can also serve as a reference for people interested in particular phases of the evaluation process, such as writing performance indicators or designing a survey.
The guidebook also contains a glossary, sample outcome and indicator statements, evaluation resources, and real-life stories of how community organizations used evaluation tools. Although lengthy, this resource is full of useful explanations of evaluation concepts and includes worksheets and materials that states can apply to the SSIP and that TA providers can use with states during sessions and workshops.
Source: SRI International (supported by the Sierra Health Foundation) (2000). Retrieved from https://www.sierrahealth.org/pages/525
This manual is written for community-based organizations and provides a practical guide to program evaluation. It focuses on internal evaluation conducted by program staff, which will be useful for states planning to conduct their SSIP evaluation internally. The manual provides a clear overview of the evaluation process and outlines the basic steps of planning for and conducting internal program evaluation, including practical strategies for identifying quantitative and qualitative data.
Source: Bond, S. L., Boyd, S. E., Rapp, K. A., Raphael, J. B., & Sizemore, B. A. (1997). Horizon Research. Retrieved from http://www.horizon-research.com/taking-stock-a-practical-guide-to-evaluating-your-own-programs/
This executive summary is directed at state staff engaged in building an early childhood system. It offers a framework for evaluating systems initiatives that connects the diverse elements involved in systems change. The executive summary includes general principles for evaluating systems initiatives and figures that support states in developing a theory of change and making decisions around evaluation planning.
Source: Coffman, J. (2007). Framework for Evaluating Systems Initiatives (executive summary). BUILD Initiative. Retrieved from http://buildinitiative.org/WhatsNew/ViewArticle/tabid/96/ArticleId/621/Framework-for-Evaluating-Systems-Initiatives.aspx
This brief is written for early childhood researchers, program developers, and funders. It introduces the importance of measuring implementation at multiple system levels and proposes tools for doing so, including a cascading logic model that makes connections between the outcomes and resources of different systems. The brief includes two illustrative examples.
Source: Berghout, A. M., Lokteff, M., & Paulsell, D. (2013). OPRE. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/measuring-implementation-of-early-childhood-interventions-at-multiple
This workbook is written for public health program managers, administrators, and evaluators to support their construction of an effective evaluation report. It encourages a shared understanding of what constitutes a final evaluation report for people at all levels of evaluation experience. The workbook outlines six steps for report development and provides worksheets and examples for each step, along with exercises on outlining a report, communicating results, and including stakeholders. It is especially geared toward stakeholder participation in reporting, which will help states as they consider how to include stakeholders in their SSIP.
Source: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity. (2013). Developing an effective evaluation report: Setting the course for effective program evaluation. Retrieved from https://www.cdc.gov/eval/materials/developing-an-effective-evaluation-report_tag508.pdf
This guide is written for a broad audience to clarify the what, why, and how of logic models: what logic models are and the different types; why a program should develop a logic model; and how a logic model can be used to guide implementation and plan for evaluation. The guide includes templates and checklists that states can use to apply the guidance to their own SSIP, and it provides useful explanations and definitions of evaluation terminology.
Source: W.K. Kellogg Foundation (Updated 2004). Retrieved from https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide
This presentation is geared toward helping practitioners develop useful logic models. The slides address the hallmarks of well-constructed, useful logic models and how to use a logic model to plan a program evaluation, implement it, and use the findings. The slides clearly break down and explain the different components and terminology associated with a logic model, and additional resources and sites are included. States can use this presentation as an introduction to logic models and as guidance on the key components of a meaningful logic model for their SSIP evaluation.
Source: Honeycutt, S., & Kegler, M. C. (2010). AEA/CDC 2010 Summer Evaluation Institute. Retrieved from http://comm.eval.org/viewdocument/?DocumentKey=79c7da3d-6978-452c-8e8b-7056d3626966
The contents of this document were developed under cooperative agreement numbers #H326R140006 (DaSy), #H326P120002 (ECTA Center), and #H326R140006 (NCSI) from the Office of Special Education Programs, U.S. Department of Education. Opinions expressed herein do not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government.
Project Officers: Meredith Miceli & Richelle Davis (DaSy), Julia Martin Eile (ECTA Center) and Perry Williams & Shedeh Hajghassemali (NCSI)