Evaluating the Improvement Plan
During Phase III, states collect, analyze, report, and use evaluation data based on the methods, tools, and timelines outlined in the Phase II evaluation plan. These data are used to track implementation progress, evaluate infrastructure improvements, measure practice change and fidelity, and monitor progress toward achieving the State-identified Measurable Result (SiMR).
Throughout evaluation of the SSIP, states engage stakeholders. Stakeholder engagement activities include involving workgroups in developing data collection tools, providing training to support stakeholder understanding of evaluation data, and soliciting stakeholder input and decision-making about interpretations of the data and potential changes to the implementation and evaluation plans.
States report these progress and outcome data, as well as revisions to their plans, to OSEP in the Phase III SSIP submissions due each April.
Considerations
- On an ongoing basis, assess the effectiveness of the evaluation plan for supporting the implementation of the SSIP, including the alignment of the evaluation plan with improvement activities and the alignment among intended outcomes, evaluation questions, performance indicators, and data collection methods/tools.
- Evaluate infrastructure improvements as well as practice change/practice fidelity.
- Make adjustments in data collection methods/tools as needed to better measure implementation and achievement of intended outcomes related to infrastructure improvements and practice change/practice fidelity.
- Analyze data to address critical evaluation questions and determine whether performance indicators are being met.
- Review and analyze the data collected, along with any barriers to data collection, to determine whether data quality issues exist.
- Review and adjust, as needed, the resources available to carry out the evaluation plan.
- Review and adjust, as needed, the individuals involved in each stage of the evaluation plan (data collection activities, data analyses, etc.), including engaging stakeholders.
- Continue to use data to support and guide improvement strategies and implementation processes.
- Ensure that communication protocols and feedback loops are effectively utilized to support communication across all levels of the system.
- Disseminate implementation and evaluation data to all stakeholder groups and intentionally utilize stakeholder feedback to inform adjustments to the plan.
Tools and Resources: Evaluation Planning
- This guide was developed by the Office of Special Education Programs (OSEP) for states' use in reviewing and further developing SSIP evaluation plans for Phase III submissions. The elements included in this tool are derived from OSEP's indicator measurement tables and Phase II review tool. The questions for consideration included for each element will assist states as they communicate the results of their SSIP implementation activities to stakeholders.
Source: Office of Special Education Programs. (2016). SSIP Evaluation Plan Guidance Tool. Washington, DC: U.S. Department of Education. Retrieved from https://ectacenter.org/~pdfs/grads360/12904.pdf
- This guide was developed by the IDEA Data Center (IDC) Evaluation Workgroup for use by IDC technical assistance providers when assisting state staff in planning State Systemic Improvement Plan (SSIP) evaluations. It identifies steps and considerations for developing a high-quality SSIP evaluation plan. The most relevant sections include:
Phase II: Planning and Infrastructure
- Page 3: Step 3, Link activities to outputs and outcomes
- Page 3: Step 4, Develop evaluation questions
Phase III: Data Collection and Reporting
- Page 5: Step 6, Identify data collection strategies
- Page 6: Step 8, Plan to share and use evaluation results along the way
- Page 19: Worksheet 11, Plan for data use and dissemination by analysis results
Source: Nimkoff, T., Fiore, T., and Edwards, J. (2016, January). A Guide to SSIP Evaluation Planning. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://www.ideadata.org/sites/default/files/media/documents/2017-09/a_guide_to_ssip_evaluation_planning.pdf
- This 2017 document is designed for states that are currently working to refine and refocus their short- and/or long-term intended SSIP outcomes. It includes a worksheet to support states in identifying the intended outcomes that are most critical to the success of their SSIP and in refining the language of those outcomes to best align with the theory of action.
Source: Early Childhood Technical Assistance Center, Center for IDEA Early Childhood Data Systems, and National Center for Systemic Improvement. (2017, April). Refining your evaluation: Refining intended SSIP outcomes. Retrieved from https://ectacenter.org/~pdfs/topics/ssip/Refining_Intended_SSIP_Outcomes.pdf
- This 2017 document is designed to assist states in developing or refining their SSIP performance indicators. It includes a worksheet with a series of questions based on S.M.A.R.T. criteria (Specific, Measurable, Achievable, Relevant, and Timely) that will help states write performance indicators that provide the information needed for the SSIP and articulate a rationale for making changes to existing performance indicators and their corresponding intended outcome statements.
Source: Early Childhood Technical Assistance Center, Center for IDEA Early Childhood Data Systems, and National Center for Systemic Improvement. (2017, April). Refining your evaluation: Refining S.M.A.R.T. performance indicators. Retrieved from https://ectacenter.org/~pdfs/topics/ssip/Refining_SMART_Performance_Indicators.pdf
- This 2017 reference tool outlines a series of steps required to collect the high-quality data needed to evaluate SSIP implementation and outcomes. These steps can help states collect high-quality data whether they are still planning for data collection or already engaged in it.
Source: Early Childhood Technical Assistance Center, Center for IDEA Early Childhood Data Systems, and National Center for Systemic Improvement. (2017, April). Refining your evaluation: Data pathway - from source to use. Retrieved from https://ectacenter.org/~pdfs/topics/ssip/Data_Pathway.pdf
- This interactive self-assessment tool guides state staff in gauging progress on key components necessary for fully executing their SSIP evaluation plan and in identifying action steps needed to realize the greatest benefit from SSIP evaluation efforts. State staff can download and complete the tool's checklist through individual self-reflection or as part of a group discussion.
Source: Nimkoff, T., Shaver, D., Schroeder, K., and Fiore, T. Operationalizing Your SSIP Evaluation: A Self-Assessment Tool (Version 1.0). IDEA Data Center. Rockville, MD: Westat. Retrieved from https://www.ideadata.org/sites/default/files/media/documents/2017-09/idc_opssip_eval_tool.pdf
Tools and Resources: Data Collection
Measuring Implementation
- Written for early childhood researchers, program developers, and funders, this brief introduces the importance of measuring implementation at multiple system levels and proposes tools for doing so, including a cascading logic model that makes connections between the outcomes and resources of different systems. The brief uses two illustrative examples: a state's effort to improve the quality of infant-toddler child care and a state's effort to improve child and family outcomes through the expansion of home visiting.
Source: Paulsell, D., Austin, A. M. B., and Lokteff, M. (2013). Measuring implementation of early childhood interventions at multiple system levels (OPRE Research Brief OPRE 2013-16). Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/sites/default/files/documents/opre/levels_brief_final_002.pdf
- This presentation from the 2015 SSIP Interactive Institutes focuses on how to use high-quality data to support effective implementation. It highlights the informed use of data for decision-making and improvement and the conditions under which high-quality data can make the most difference.
Source: The IDEA Data Center in collaboration with the Center for IDEA Early Childhood Data Systems, the Early Childhood Technical Assistance Center, the National Center for Systemic Improvement, and the National Technical Assistance Center on Transition. (2015). Retrieved from https://ideadata.org/sites/default/files/media/documents/2017-09/allslides-keynote_day1_sisep_nirn.pdf
- This 2013 brief, published by OPRE, the Office of Planning, Research and Evaluation of the Administration for Children and Families, suggests that greater efforts are needed to incorporate quality measures into the implementation evaluation process. It offers examples of how quality and quantity can be assessed and examined in relation to early care and education program outcomes.
Source: Downer, J. and Yazejian, N. (2013). Measuring the quality and quantity of implementation in early childhood interventions (OPRE Research Brief OPRE 2013-12). Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/opre/report/measuring-quality-and-quantity-implementation-early-childhood-interventions
- A variety of tools are provided on the National Implementation Research Network's Active Implementation Hub, designed to assist states in assessing the nature and quality of their use of coaching and various implementation drivers. Relevant collections include:
- Resource Library: Evaluation and Planning Tools, Drivers
- Resource Library: Cycles, Improvement
Source: The National Implementation Research Network's Active Implementation Hub. Retrieved from https://implementation.fpg.unc.edu/resources/
Evaluating Systems Change
- Directed at state staff engaged in building an early childhood system, this resource offers a framework for evaluating systems initiatives that connects the diverse elements involved in systems change. It includes general principles for evaluating systems initiatives and figures that support states in developing a theory of change and making decisions around evaluation planning.
Source: Coffman, J. (2007). Framework for Evaluating Systems Initiatives. BUILD Initiative. Retrieved from https://buildinitiative.org/wp-content/uploads/2021/08/Framework-for-Evaluating-Systems-Initiatives.pdf
- For capturing changes in the quality of state infrastructure system component(s), or how they are progressing relative to a quality standard, consider the System Framework Self-Assessment Tool and Comparison Tool. The System Framework can be completed for just one component, such as Fiscal, or for multiple components if a state is working on improving more than one.
Source: The Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA). (2015). Retrieved from https://ectacenter.org/sysframe/selfassessment.asp and http://dasycenter.org/self-assessment-comparison-tool/
- For states focusing on improving their child outcomes measurement system (data quality and/or use) in particular, the State Child Outcomes Measurement System (COMS) Framework and Self-Assessment is tailored for that purpose. Either the entire self-assessment can be completed or specific sections most relevant to state improvement activities can be selected.
Source: The Early Childhood Outcomes Center (ECO). (2011). Scale for Assessing State Implementation of a Child Outcomes Measurement System. Retrieved from https://ectacenter.org/eco/pages/childoutcomes.asp#S-COMS
- States planning to measure the current status of and/or change in their child outcomes measurement system at the local level may want to consider having local programs complete the Local Child Outcomes Measurement System Self-Assessment.
Source: The Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA). (2017). Retrieved from https://ectacenter.org/eco/pages/childoutcomes.asp#L-COMS
- The COS Team Collaboration (COS-TC) Quality Practices Checklist is designed for states that use the Child Outcomes Summary (COS) process and would like to assess the extent to which local teams are using quality practices to complete the COS ratings. The checklist could easily be converted to an online survey as well.
Source: The Early Childhood Technical Assistance Center (ECTA) and the Center for IDEA Early Childhood Data Systems (DaSy). Retrieved from https://ectacenter.org/eco/pages/costeam.asp
- For states focusing on improving their family outcomes measurement system, the State Family Outcomes Measurement System Framework and Self-Assessment is tailored for that purpose. It is designed to be completed in total or by using specific sections that are most relevant to state improvement activities.
Source: The Early Childhood Outcomes Center (ECO). Retrieved from https://ectacenter.org/eco/pages/familyoutcomes.asp#S-FOMS
- The State Leadership Team Benchmarks of Quality is used by a collaborative State Leadership Team (SLT) to assess progress and plan future actions so that Recommended Practices (RPs) are available for providers and families statewide. This document focuses on evaluating the infrastructure elements that are in place to support implementation of evidence-based practices. The Benchmarks are grounded in the science of implementation, which bridges the gap between evidence-based practices and real-life application.
Source: Smith, B.J., Fox, L., Strain, P., Binder, D.P., Bovey, T., Jones, A., McCullough, K., Veguilla, M., Dunlap, G., Blase, K., Trivette, C.M., Shapland, D., and Danaher, J. (2018). State Leadership Team Benchmarks of Quality: Implementing Evidence-Based Practices Statewide. Retrieved from https://ectacenter.org/~pdfs/sig/2_3_benchmarks_slt.pdf
- This resource presents a qualitative rubric and describes a method for using the rubric to measure progress in relationship building within a group. The tool can be used over time to give teams data on the depth of the team's interaction.
Source: The IDEA Partnership. (2014). Retrieved from https://www.ilispa.org/assets/LbCRubrics.pdf
Measuring Practice Change and Fidelity
- The State Leadership Team Benchmarks of Quality is used by a collaborative State Leadership Team (SLT) to assess progress and plan future actions so that Recommended Practices (RPs) are available for providers and families statewide. This document focuses on evaluating the infrastructure elements that are in place to support implementation of evidence-based practices. The Benchmarks are grounded in the science of implementation, which bridges the gap between evidence-based practices and real-life application.
Source: Smith, B.J., Fox, L., Strain, P., Binder, D.P., Bovey, T., Jones, A., McCullough, K., Veguilla, M., Dunlap, G., Blase, K., Trivette, C.M., Shapland, D., and Danaher, J. (2018). State Leadership Team Benchmarks of Quality: Implementing Evidence-Based Practices Statewide. Retrieved from https://ectacenter.org/~pdfs/sig/2_3_benchmarks_slt.pdf
- Adapted with permission from the Early Childhood Program-Wide PBS Benchmarks of Quality by Lise Fox, Mary Louise Hemmeter, and Susan Jack (2010), this tool provides benchmarks for Local Implementation Teams (LITs) to measure key elements of quality for classroom-based early childhood programs.
Source: Binder, D.P. and Fox, L. (2018). Benchmarks of Quality for Classroom-Based Programs (Implementing Recommended Practices Edition). Retrieved from https://ectacenter.org/~pdfs/sig/4_9_benchmarks_classroom.pdf
- A companion to the classroom-based benchmarks, this tool provides benchmarks for Local Implementation Teams (LITs) to measure key elements of quality for home-visiting programs.
Source: Trivette, C.M. and Jones, A. (2018). Benchmarks of Quality for Home-Visiting Programs (Implementing Recommended Practices Edition). Retrieved from https://ectacenter.org/~pdfs/sig/4_10_benchmarks_homevisiting.pdf
- Implementing a fidelity assessment often poses a number of challenges for implementation teams. Featured on the National Implementation Research Network's Active Implementation Hub, this activity provides an initial four-step approach for identifying, categorizing, and discussing challenges, then completing an action plan to develop a fidelity assessment system.
Source: The Active Implementation Hub, AI Modules and AI Lessons are developed by the State Implementation and Scaling-up of Evidence-based Practices Center (SISEP) and The National Implementation Research Network (NIRN) located at The University of North Carolina at Chapel Hill’s FPG Child Development Institute. Copyright 2015. Retrieved from https://implementation.fpg.unc.edu/resource/activity-designing-a-fidelity-assessment/
- This activity, also featured on the National Implementation Research Network's Active Implementation Hub, is for teams who have identified the core components of their intervention or innovation and have clearly defined and operationalized them. Individuals or teams can use the included Fidelity Assessment Brainstorming Worksheet to develop a fidelity assessment.
Source: The Active Implementation Hub, AI Modules and AI Lessons are developed by the State Implementation and Scaling-up of Evidence-based Practices Center (SISEP) and The National Implementation Research Network (NIRN) located at The University of North Carolina at Chapel Hill’s FPG Child Development Institute. Copyright 2015. Retrieved from https://implementation.fpg.unc.edu/resource/activity-developing-a-fidelity-assessment/
- This presentation discusses existing resources and tools to help evaluate implementation of practices. It also shares concrete examples of how two states are working to gather information on implementation of practices.
Source: Schachner, A., Vinh, M., Casey, M., Romary, D., and Chvojichek, R. (2016, August). Presentation at the Improving Data, Improving Outcomes Conference. Retrieved from https://ectacenter.org/~pdfs/meetings/ecidea16/EBP_Session_IDIO_Conference_FINAL_8-10-2016.pptx.pdf
Tools and Resources: Data Analysis
- This 2015 document was developed to help technical assistance (TA) providers and state staff define and limit the scope of data analysis for program improvement efforts, including the State Systemic Improvement Plan (SSIP); develop a plan for data analysis; document alternative hypotheses and additional analyses as they are generated; and summarize findings and document results.
Source: The Center for IDEA Early Childhood Data Systems and the Early Childhood Technical Assistance Center. (2015). Planning, conducting, and documenting data analysis for program improvement. Menlo Park, CA: SRI International. Retrieved from https://dasycenter.sri.com/downloads/DaSy_papers/DaSy_SSIP_DataAnalysisPlanning_20150323_FINAL_Acc.pdf
- The purpose of broad data analysis is to look at how children in the state are performing relative to national data, across years, and across programs within the state. This template was developed to assist in conducting an initial analysis with the data states currently use for APR reporting. An example of one state's data is included to illustrate how the template can be used.
Source: The Early Childhood Technical Assistance Center and the Center for IDEA Early Childhood Data Systems. (2014). SSIP Child Outcomes Broad Data Analysis Template. Retrieved from https://ectacenter.org/eco/assets/docs/SSIP_child_outcomes_broad_data_analysis_template_FINAL.docx
- The SSIP Child Outcomes Subgroup Analysis Template (March 2014) provides states with table shells for subgroup analyses that have proven useful in understanding predictors of child outcomes. The shells are suggestions and should be tailored to fit the appropriate categories for each state.
Source: The Early Childhood Technical Assistance Center and the Center for IDEA Early Childhood Data Systems. (2014). SSIP Child Outcomes Subgroup Analysis Template. Retrieved from https://ectacenter.org/eco/assets/docs/subgroupdataanalysistemplate.docx
- This 2017 reference tool outlines a series of steps required to collect the high-quality data needed to evaluate SSIP implementation and outcomes. These steps can help states collect high-quality data whether they are still planning for data collection or already engaged in it.
Source: Early Childhood Technical Assistance Center, Center for IDEA Early Childhood Data Systems, and National Center for Systemic Improvement. (2017, April). Refining your evaluation: Data pathway - from source to use. Retrieved from https://ectacenter.org/~pdfs/topics/ssip/Data_Pathway.pdf
- These calculators compute the 90% confidence interval around state and local summary statement values. Confidence intervals can be used to understand the precision of summary statement values; however, summary statement values with very large confidence intervals (more than ±5%) should be interpreted with caution (see the sketch after this entry).
- Child Outcomes Year-to-Year Meaningful Differences Calculator for States: Look at the statistical significance of change in state child outcomes summary statements from year to year and compare local performance to the state's performance (updated September 2017).
- Child Outcomes Year-to-Year Meaningful Differences Calculator for Local Programs: Look at the statistical significance of change in local programs' child outcomes summary statements from year to year (updated December 2015).
Source: Early Childhood Technical Assistance Center and Center for IDEA Early Childhood Data Systems. Retrieved from https://ectacenter.org/eco/pages/childoutcomes-calc.asp#meaningful
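The statistics behind these calculators are not reproduced here, but the core idea is standard: a summary statement value is a proportion, so its 90% confidence interval can be approximated from the number of children it is based on. The Python sketch below illustrates that arithmetic using the normal approximation; the function name and example numbers are ours, not the calculators'.

```python
import math

def summary_statement_ci(pct, n, z=1.645):
    """Approximate 90% confidence interval (z = 1.645) for a summary
    statement value `pct` (in percent) based on `n` children, using
    the normal approximation to a binomial proportion."""
    p = pct / 100.0
    half_width = z * math.sqrt(p * (1 - p) / n) * 100.0
    return pct - half_width, pct + half_width

low, high = summary_statement_ci(68.4, n=150)
print(f"68.4% (n=150): 90% CI = ({low:.1f}%, {high:.1f}%)")
# Half-widths above ±5 percentage points warrant the caution noted above.
```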
Tools and Resources: Reporting and Communicating Evaluation Results
- This workbook is written for public health program managers, administrators, and evaluators to support their construction of effective evaluation reports. It encourages a shared understanding of what constitutes a final evaluation report for people with all levels of evaluation experience. The workbook outlines six steps for report development and provides worksheets and examples for each step. It is especially geared toward stakeholder participation in reporting, which will help states as they consider how to include stakeholders in their SSIP. Steps 4 through 6 (pages 20–37) are especially relevant to reporting, communicating, and disseminating results.
Source: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity. (2013). Developing an effective evaluation report: Setting the course for effective program evaluation. Retrieved from https://www.cdc.gov/eval/materials/developing-an-effective-evaluation-report_tag508.pdf
- This document is designed to illustrate how a state might summarize and report data gathered through the System Framework self-assessment process to document infrastructure improvements in their Phase III SSIP or other program improvement efforts. A template for reporting progress is provided along with an example of hypothetical state data.
Source: Early Childhood Technical Assistance Center, and Center for IDEA Early Childhood Data Systems. (2017). Summarizing and Reporting ECTA System Framework Self-Assessment Data to Demonstrate Infrastructure Improvements for SSIP Phase III. Retrieved from https://ectacenter.org/~pdfs/sysframe/Reporting_Infrastructure_Improvements_2017-03-03.pdf
- This guide was developed by the IDEA Data Center (IDC) for use by technical assistance providers when assisting state staff in planning State Systemic Improvement Plan (SSIP) evaluations. Particularly relevant to reporting are Page 6: Step 8, Plan to share and use evaluation results along the way; and Page 19: Worksheet 11, Plan for data use and dissemination by analysis results.
Source: Nimkoff, T., Fiore, T., and Edwards, J. (2016, January). A Guide to SSIP Evaluation Planning. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://ideadata.org/sites/default/files/media/documents/2017-09/a_guide_to_ssip_evaluation_planning.pdf
- This protocol provides a simple structure that state and local teams can use to guide conversations around evaluation data during meetings. The protocol is based on the premise that a key part of the data analysis process involves talking about the results of evaluation activities and making meaning of the results together. It details steps to follow before, during, and after meetings to support data-informed decision-making.
Source: Nimkoff, T., Shaver, D., and Schroeder, K. (2018, January). Data Meeting Protocol. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://www.ideadata.org/sites/default/files/media/documents/2018-01/51457-IDC_MtngProtclTool-V4.pdf
- This 2018 brief offers guidance on the varied uses of and audiences for infographics, best practices for creating infographics, and examples of state SSIP infographics. It also offers an exemplar SSIP infographic and instructions for accessing the template through a TA provider.
Source: Center for IDEA Early Childhood Data Systems (DaSy), National Center for Systemic Improvement (NCSI), and Early Childhood Technical Assistance Center. (2018, January). Retrieved from https://dasycenter.sri.com/downloads/DaSy_papers/SSIPY511_InfographicGuide.pdf
Tools and Resources: Improving Data Quality
- This table describes strategies for using data analysis to improve the quality of state data by looking for patterns that indicate potential issues for further investigation (a simple illustration of one such check appears after this entry). The pattern-checking table was revised in July 2012, and work is underway to expand the document with information about how to use the table and example visual displays of patterns.
Source: Early Childhood Outcomes Center (ECO). (2012, July). Checking Outcome Data for Quality: Looking for Patterns. Retrieved from https://ectacenter.org/eco/assets/pdfs/Pattern_Checking_Table.pdf
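As one illustration of the kind of pattern such a table targets, the hypothetical check below flags programs where a single rating value dominates the records, a distribution that can signal a data quality issue. The 80% threshold, the function, and the example data are invented for this sketch, not drawn from the ECO table.

```python
from collections import Counter

def check_rating_concentration(ratings, max_share=0.8):
    """Return (value, share) when one rating value exceeds `max_share`
    of a program's records; otherwise return None."""
    value, count = Counter(ratings).most_common(1)[0]
    share = count / len(ratings)
    return (value, share) if share > max_share else None

# Hypothetical program where nearly every child received the same rating.
program_ratings = [5] * 46 + [4, 6, 5, 3]
flag = check_rating_concentration(program_ratings)
if flag:
    print(f"Potential data quality issue: rating {flag[0]} is {flag[1]:.0%} of records")
```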
- The ECTA Professional Development Resources page includes a number of training presentations and activities that relate to ensuring the quality of data. The Looking at Data presentation file covers child outcomes data analysis and use and is designed for intermediate or advanced learners, particularly those who are responsible for or interested in data management, interpretation, and reporting.
Source: Early Childhood Technical Assistance Center (ECTA). Retrieved from https://ectacenter.org/eco/pages/cospd.asp#childdevelopment (web page) and https://ectacenter.org/eco/assets/ppt/LookingAtData_revised.ppt (presentation file)
- This calculator allows states to determine the statistical significance of change in state family outcomes data from year to year and compare local performance to the state's performance. The calculator computes the 90% confidence interval around values. Confidence intervals can be used to understand the precision of values; however, values with very large confidence intervals (more than ±5%) should be interpreted with caution. (Updated December 2016)
Source: The Early Childhood Outcomes Center (ECO). Retrieved from https://ectacenter.org/eco/assets/xls/MeaningfulDifferencesCalculator_FamilyOutcomes.xlsx
- This calculator computes response rates for state family survey data and determines whether the surveys received are representative of the target population. The calculator uses a statistical formula to determine whether two percentages (i.e., the % of surveys received versus the % of families in the target population) should be considered different from each other. Enter the values by subgroup, and the calculator will compute the statistical significance of the difference between the two percentages and highlight significant differences (one common formula for such a comparison is sketched after this entry). Instructions for entering data into the calculator appear at the top of each tab. (Updated December 2015)
Source: The Early Childhood Technical Assistance Center (ECTA). Retrieved from https://ectacenter.org/eco/assets/xls/Representativeness_calculator.xlsx
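The calculator's exact formula is not spelled out on this page, but a standard way to compare two percentages like these is a two-proportion z-test. The sketch below is illustrative only; the subgroup counts are hypothetical.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two proportions
    (p1 and p2 as fractions; n1 and n2 are the group sizes)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical subgroup: 12% of 400 returned surveys versus
# 18% of a target population of 2,500 families.
z = two_proportion_z(0.12, 400, 0.18, 2500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a significant difference at the .05 level
```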
- This graphing template compares state C4 family outcomes data to national data in the three sub-indicator areas. In addition, it makes comparisons to subgroups of states that use the same survey and scoring approach: the FOS with recommended scoring, the FOS-Revised with recommended scoring, and the NCSEAM with Rasch scoring. States that use other surveys or scoring can graph their data using the comparison to national data. National data in the calculator are for FFY 2015, submitted by states in February 2017. (Updated October 2017)
Source: The Early Childhood Outcomes Center (ECO). Retrieved from https://ectacenter.org/~xls/eco/FamilyOutcomes-State_approaches_calculator_FFY2015.xlsx
- This page contains links to several state websites with information regarding child and/or family outcome measurement system development, training, and resources. Links are included for Part C and/or section 619 for each state included.
Source: The Early Childhood Technical Assistance Center (ECTA). Retrieved from https://ectacenter.org/eco/pages/states_websites.asp
- This webpage summarizes information related to including child outcomes data collected via the Child Outcomes Summary (COS) process in an electronic IFSP form. Including COS data in the IFSP can improve the quality of child outcomes data and add value for IFSP team and programmatic decision-making. The webpage includes key considerations, state examples, and resources.
Source: The Center for IDEA Early Childhood Data Systems (DaSy). Retrieved from https://dasycenter.org/ifsp-toolkit/integrated-ifspchild-outcomes-summary-cos-2/
- These three technical assistance products are designed for the state personnel responsible for IDEA Section 618 and/or 616 data. They include a brief that introduces the principles of outlier analysis, a tutorial on completing an outlier analysis, and a tool state staff can use to conduct outlier analyses with their local data (a simple illustration of the idea appears after this entry). All of these products may be used by IDEA Part B and C state staff working with LEAs and LLAs to analyze their local data, and any state staff who examine and analyze IDEA Section 618 and/or 616 data would also benefit from them.
- Outlier Analysis Tool
- IDEA Data Quality: Outlier Analyses Brief
- Outlier Analysis: Step-by-Step Guide
Source: The IDEA Data Center (IDC). Retrieved from https://www.ideadata.org/resources/resource/1508/idea-data-quality-outlier-analyses-tools
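To give a flavor of what these products cover, the sketch below flags local values that sit unusually far from the state distribution using z-scores, one common approach to outlier analysis. IDC's tools may use different statistics, and the district data here are invented.

```python
import statistics

def flag_outliers(values, threshold=2.0):
    """Return names whose value lies more than `threshold` standard
    deviations from the mean across all local programs."""
    mean = statistics.mean(values.values())
    sd = statistics.stdev(values.values())
    return [name for name, v in values.items() if abs(v - mean) > threshold * sd]

# Hypothetical district-level percentages for one exiting category.
rates = {"District A": 41.0, "District B": 44.5, "District C": 39.8,
         "District D": 72.0, "District E": 43.1, "District F": 40.6}
print(flag_outliers(rates))  # ['District D']
```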
- The Part C Exiting Toolkit gives users access to five downloadable forms that assist in documenting their Part C exiting process and provides checklists they can use to ensure high-quality data. The toolkit also contains the Part C Exiting Counts app, a useful tool for understanding the 10 federal Part C exiting categories, as well as links to documents that use Part C exiting data and to other related resources.
Source: The IDEA Data Center (IDC). Retrieved from https://www.ideadata.org/sites/default/files/media/documents/2018-01/Exiting_PART_C_Interactive_Toolkit.pdf
Tools and Resources: Using Data for Program Decision-Making and Improvement
- This protocol provides a simple structure that state and local teams can use to guide conversations around evaluation data during meetings. The protocol is based on the premise that a key part of the data analysis process involves talking about the results of evaluation activities and making meaning of the results together. It details steps to follow before, during, and after meetings to support data-informed decision-making.
Source: Nimkoff, T., Shaver, D., and Schroeder, K. (2018, January). Data Meeting Protocol. IDEA Data Center. Rockville, MD: Westat. Retrieved from https://www.ideadata.org/sites/default/files/media/documents/2018-01/51457-IDC_MtngProtclTool-V4.pdf
- This document presents some key questions to consider at important points in the process of using data for program improvement. The questions are intended to support group discussion and decision-making and to serve as examples of the types of questions to be considered. In most cases, the process of using data is an iterative one, proceeding in a series of steps that sometimes inform earlier steps. This document is best used in combination with other resources or as a point of reference for a group working with technical assistance providers or others who have experience analyzing and interpreting data. Although this document was designed for use with child outcomes data, it has general applicability for examining program evaluation. The most relevant sections include:
- Step 1 (page 1): Work with stakeholders to decide where to target your effort. What are your crucial policy and programmatic questions? What do you most want to know to make decisions about services and to improve the program?
- Step 10 (page 1): Discuss appropriate actions based on your inference about the data (action). Plan a series of steps expected to improve the program and ultimately change the data.
- Step 11: Implement the action plan, including tracking timelines and plans for running follow-up analyses that track changes. Repeating Steps 3–10 at a later time is critical to see system improvements and the effect on outcomes.
Source: Barton, L., Taylor, C., Kasprzak, C., Hebbeler, K., Spiker, D., and Kahn, L. (2013). Analyzing child outcomes data for program improvement: A guidance table. Menlo Park, CA: SRI International, The Early Childhood Outcomes Center. Retrieved from https://ectacenter.org/~pdfs/eco/AnalyzingChildOutcomesData-GuidanceTable.pdf
- This interactive website from the DaSy Center includes resources, guidance, and templates for budget and fiscal analyses. Fiscal analysis can help determine if or how much funding exists to support "improvement strategies such as personnel development, monitoring, data systems, technical assistance, and personnel development activities."
Source: The Center for IDEA Early Childhood Data Systems (DaSy). Retrieved from http://olms.cte.jhu.edu/olms2/DaSyFinance
- This report provides an overview of the critical role of fiscal data in state Part C systems. The information is intended to help state Part C lead agency staff better understand strategic fiscal policy questions, the fiscal data elements needed to address those questions, and the benefits of using these data. Fiscal data provide powerful information for decision-making, program management, and policy-making. The use of fiscal data, especially when paired with child and family demographic data and data about services, can help state Part C staff and stakeholders better understand the dynamics that influence the Part C program and its financing.
Source: Greer, M., Kilpatrick, J., Nelson, R., and Reid, K. (2014). Understanding and using fiscal data: A guide for Part C state staff. Menlo Park, CA: SRI International. Retrieved from https://dasycenter.sri.com/downloads/DaSy_papers/Understanding_and_Using_Fiscal%20Data_111914_final_r1_access.pdf
- This 2015 document was developed to help technical assistance (TA) providers and state staff define and limit the scope of data analysis for program improvement efforts, including the State Systemic Improvement Plan (SSIP); develop a plan for data analysis; document alternative hypotheses and additional analyses as they are generated; and summarize findings and document results.
Source: The Center for IDEA Early Childhood Data Systems and the Early Childhood Technical Assistance Center. (2015). Planning, conducting, and documenting data analysis for program improvement. Menlo Park, CA: SRI International. Retrieved from https://dasycenter.sri.com/downloads/DaSy_papers/DaSy_SSIP_DataAnalysisPlanning_20150323_FINAL_Acc.pdf
- This quick reference guide was designed to assist states in understanding what information needs to be available in order for stakeholders to assist in selecting potential improvement strategies that will increase the capacity of LEAs, EIS programs, and practitioners to improve results for infants, toddlers and their families or for young children with disabilities. This guide can be used for program improvement as well as for developing the SSIP.
Source: The Early Childhood Technical Assistance Center, the Center for IDEA Early Childhood Data Systems, and the Regional Resource Center Program. Retrieved from https://ectacenter.org/~pdfs/topics/ssip/ssip_strategies_for_improvement.pdf
- This manual is written for program managers and contains tips, worksheets, and samples to help readers understand each step of the evaluation process. The most relevant sections include:
- Pages 8–9: Outline of the basic evaluation steps
- Chapter 5 (pages 30–41): Steps to prepare for an evaluation
- Pages 42–43: Sample logic model and logic model worksheet
- Pages 44–45: Sample and worksheet for describing implementation objectives in measurable terms
- Pages 46–47: Sample and worksheet for describing participant outcome objectives
- Pages 59–61: Sample outline for an evaluation plan
- Pages 74–75: Sample data collection plan
- Page 76: Worksheet for developing a data collection plan
Source: Administration for Children and Families, Office of Planning, Research and Evaluation. (2010). The Program Manager's Guide to Evaluation (2nd ed.). Retrieved from https://files.eric.ed.gov/fulltext/ED566135.pdf
- This manual is written for community-based organizations and provides a practical guide to program evaluation. It focuses on internal evaluation conducted by program staff, which will be useful for states planning to conduct their SSIP evaluation internally. The manual provides a helpful overview of the evaluation process and includes the basic steps of planning for and conducting internal program evaluation, including practical strategies for identifying quantitative and qualitative data. The most relevant sections include:
- Chapter 4 (pages 15–19): What Are You Trying to Do? Defining Goals and Objectives
- Page 25: Evaluation Planning Chart
- Chapter 6 (pages 27–37): Finding the Evidence: Strategies for Data Collection
- Page 47: Chart of program objectives to evaluation questions
- Pages 61–62: Roadmap for evaluation design
- Appendices: Example evaluation reports
Source: Bond, S. L., Boyd, S. E., and Rapp, K. A. (1997). Taking stock: A practical guide to evaluating your own programs. Chapel Hill, NC: Horizon Research, Inc. Retrieved from https://www.dcjs.virginia.gov/sites/dcjs.virginia.gov/files/publications/victims/takingstock.pdf