Step 1 - Getting the Help You Need
Step 2 - Selecting an Assessment Instrument
Step 3 - Aligning Curriculum and Assessment for Continuous Improvement
Step 4 - Testing Your System
Step 5 - Ongoing Implementation and Problem Solving
Three years ago the Ohio legislature mandated that all state-funded Head
Start programs measure the progress of the children they serve using a common
assessment instrument. The responsibility for designing the measurement
system fell to the Office of Early Childhood in the Ohio Department of Education.
In Ohio, we are serving over 80 percent of the children who qualify for
Head Start services. We are able to serve this many children due to gubernatorial
and legislative support to expand state Head Start funding from less than
$14 million per year in 1990 to over $100 million this year. Along with
this funding came an underlying concern about accountability. Legislators
began asking: How well are the children doing in the programs we fund?
Did Head Start give children a head start? Did the children enter kindergarten
better off than they would have been if they had not been enrolled? For
a long time, we avoided addressing the "results" questions.
We should have listened more carefully and reacted more quickly than we
did. In 1996, one of the research arms of the Ohio State Legislature conducted
a study that found little positive evidence of the impact of Head Start
on children's literacy and social competency skills. We had very
little information to refute the findings.
The handwriting was on the wall. We had to put a system in place that
would provide the data to demonstrate the impact of Head Start on children.
At the same time, we determined that we were also going to include our
public preschool programs and preschool special education services in
our outcomes system. This meant that, eventually, we would be collecting
data for approximately 80,000 children in over five hundred local programs.
The system we developed is based on the Measurement and Planning System
(MAPS) child assessment section of the Galileo software application.
Teachers collect observational data on children's work in the areas of
language and literacy, early math, social development, self-help, and
nature and science, and enter the information into a computer. Data are
collected at the beginning of the year to document skills the children
have at entry. Teachers update the information as the year progresses
and then enter data at the end of the year. This system gives teachers,
parents, administrators, and legislators a comprehensive picture of the
progress children are making over time in their early childhood program.
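For readers who want a concrete picture of the data involved, the sketch below (in Python) models the kind of running record a teacher might keep: one observational rating at a time, in each of the five areas, updated from program entry through the end of the year. The class names, fields, and scoring scale are our own illustration, not Galileo's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

# The five areas assessed in Ohio's system.
DOMAINS = [
    "language and literacy",
    "early math",
    "social development",
    "self-help",
    "nature and science",
]

@dataclass
class Observation:
    """One observational rating of a child in one area."""
    domain: str
    score: float        # rating on the assessment scale (illustrative)
    observed_on: date
    note: str = ""      # anecdotal evidence behind the rating

@dataclass
class ChildRecord:
    """Running record for one child across the program year."""
    child_id: str
    classroom_id: str
    observations: list[Observation] = field(default_factory=list)

    def latest(self, domain: str) -> "Observation | None":
        """Most recent rating in an area; teachers update this as the year progresses."""
        in_domain = [o for o in self.observations if o.domain == domain]
        return max(in_domain, key=lambda o: o.observed_on, default=None)
```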
The purpose of this article is to describe the five key steps we took
in Ohio to set up this assessment system. We believe our experience can
help local Head Start grantees as they plan for and implement requirements
from the Head Start Bureau to gather, analyze, and use information on
child outcomes in new ways.
Step 1 - Getting the Help You Need
It would be a big mistake to enter into the task of starting an outcomes
measurement system thinking that you have all of the answers. Asking for
help maximizes the chances of success and minimizes the risk of missteps.
So deciding whom to ask, how to ask, and what to ask is an important part
of the process.
Some of the best help we got came from reading books and articles on assessment.
A key message from our reading was to be clear on the purposes for assessment
and to be sure the system serves those purposes. Thus, before we did anything
else, we decided our two central purposes were to report on the overall
levels of progress of children in Head Start in Ohio, and to provide assessment
information that would be useful to teachers. That is, we wanted our new
statewide system to reinforce what good teachers were already doing on
a daily basis: observing and assessing children's progress to
help make instructional decisions. Finding out what a child knows and
is able to do helps teachers plan new experiences to advance learning.
We wanted the assessment to fit into these daily routines, to provide
a common approach to documenting progress of children, and to assist teachers
in promoting progress.
A second major source of help in our planning was drawing on several
stakeholder groups to help design our system. We convened a series of
discussion sessions with groups including legislators, advocates, program
directors, staff, parents, and state department staff, particularly
those with expertise in assessment and information technology. Each group
was asked the same question, "What outcomes do you expect from a
quality early childhood program?"
Next, we held a synthesis meeting with representatives from each group
of stakeholders to get a consensus on the final child outcomes and to
begin developing a continuous improvement system for measuring and using
outcome data. The synthesis meeting was scheduled for two days, but well
into the second day, we were still not agreeing on much. We were sensing resistance
or reluctance from some Head Start staff members. Finally, one Head Start
Director, Mary Hodge from Toledo, stood up, faced the group, and asked,
"Why is this so difficult? We are talking about our bragging rights!
Our early childhood programs work and we are deciding which of these outcomes
we want to brag about! These are indicators of our successes!" Thus,
the project was and will forever be called the Indicators of Success Project
(although our personal preference was to call it the Early Childhood Bragging
Rights Project).
In addition to reviewing literature on assessment and conducting our public
engagement strategy, we also sought as much technical assistance as we
could find. For instance, we attended a meeting hosted by the National
Early Childhood Technical Assistance System with other states that were
wrestling with early childhood program outcome measures. This meeting
helped us in conceptualizing an outcomes-based continuous improvement
system for programs in our state.
Our advice is to get help from the beginning of your planning and decision-making
process. We learned valuable things from reading, widespread involvement
of stakeholder groups, and technical assistance services. The people and
resources you identify may be different from those we used. Starting with
the purposes of your child outcomes effort, we urge you to seek out help
from experts and involve the people who will implement and use your child
outcomes system.
Step 2 - Selecting an Assessment Instrument
Once we had agreement on the purposes of our assessment initiative and
the content areas of child outcomes, we began to work on selecting an
assessment instrument for programs to use on a statewide basis. We worked
with our stakeholder groups and experts to develop a set of criteria for
choosing an assessment instrument. The full set of criteria looked like this:

Purpose of Assessment

- Provide information to stakeholders about expectations
- Be useful to teachers for making instructional decisions
- Be useful to administrators for improving programs
- Identify children who may require special interventions
- Track child progress toward fourth-grade curriculum outcomes
- Provide information for program accountability

Early Childhood Values

- Collect data by observing children in a natural setting
- Be usable with children from birth to age eight, at all ability levels
- Categorize observations in content areas and developmental domains
- Provide data on individual children that can be aggregated at the classroom, program, and state levels
- Provide descriptive statistics and gain scores (a simple sketch of this kind of computation follows this list)
- Be available in computerized and paper formats
- Be compatible with the State of Ohio Education Management Information System
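Two of these criteria, gain scores and multi-level aggregation, are simple enough to sketch. The fragment below, with invented scores and an invented data layout, shows one way fall-to-spring gains can be rolled up from children to classrooms, programs, and the state; it illustrates the idea, not how the MAPS software actually computes its reports.

```python
from statistics import mean

# Invented (fall, spring) score pairs per child, keyed by (program, classroom).
scores = {
    ("program_a", "room_1"): [(12, 20), (8, 15), (10, 18)],
    ("program_a", "room_2"): [(14, 22), (9, 14)],
    ("program_b", "room_1"): [(11, 16), (13, 21), (7, 12)],
}

def gains(pairs):
    """Gain score = spring rating minus fall rating, per child."""
    return [spring - fall for fall, spring in pairs]

# Classroom level: average gain within each room.
by_classroom = {key: mean(gains(pairs)) for key, pairs in scores.items()}

# Program level: pool every child in the program before averaging.
pooled = {}
for (program, _room), pairs in scores.items():
    pooled.setdefault(program, []).extend(gains(pairs))
by_program = {program: mean(g) for program, g in pooled.items()}

# State level: pool all children statewide.
state_gain = mean(g for pairs in scores.values() for g in gains(pairs))

print(by_classroom)
print(by_program)
print(round(state_gain, 2))
```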
After an extensive review, we found the MAPS section of the Galileo software
application most appropriate, given our criteria. Many local Head Start
agencies are currently reviewing their assessment instruments and evaluating
other options, based on the new Head Start Child
Outcomes Framework. As you look around to decide how to measure outcomes,
we recommend developing a set of criteria, based on input from staff and
other knowledgeable people, to guide your decision-making.
Step 3 - Aligning Curriculum and Assessment for Continuous Improvement
Once we had selected the MAPS assessment system, we turned our attention
to connecting the assessment effort with curriculum in local Head Start
and preschool programs. We began by "Ohio-izing" the MAPS assessment
scales so that they directly measure the Ohio Department of Education's
goals and expectations for preschool curricula. Then staff members from
the Office of Early Childhood Education traveled around the state working
with programs to align their curricula with the expectations.
Local programs worked to compare their curricula with the MAPS assessment
framework. They reviewed their formal packaged curricula, and, in a number
of programs, also convened work groups to analyze teachers' actual
lesson plans to determine whether what goes on in the classrooms reflects
the comprehensive scope of the outcomes they were hoping to achieve. Essentially,
they were asking, "Are we providing the learning experiences to help
our children reach the outcomes set forth for Head Start programs in Ohio?"
Two issues surfaced. First, some curricula did not adequately address
all of the areas of child outcomes included in the MAPS assessment framework.
Mathematics learning was the area most commonly found to be inadequately
addressed in local curricula. The second issue was the reverse:
some curricula addressed outcomes that were not measured by the Ohio outcomes
system. For example, one of our programs, Miami Valley Child Development
Centers, has made a significant investment to implement the High Scope
curriculum. Education staff carefully compared the High Scope curriculum
with the outcomes measured in the MAPS assessment system. They found that
the outcomes in science measured by the MAPS tool were more comprehensive
than those in the High Scope curriculum. However, in the art, music and
movement content areas, they found that the outcomes in the High Scope
system were more comprehensive. (Art, music, and movement are not content
areas for which the State of Ohio requires outcome measures.) Miami Valley
decided to collect both the data required by the State of Ohio and information
about the areas of art, music, and movement that they believe are important
goals for children in their program.
Spending time to analyze information on child outcomes is worthwhile
if it helps programs answer key questions like, "How are our children
doing?" "Is what we are doing working?" and "How can
we be even more effective in preparing children for school?" Making
sure your curriculum and assessment systems are lined up with a common
set of goals for children is crucial to answering these questions in useful
ways. Then you can use information on children's progress to plan for
continuous program improvement. It is also important to be sure that your
curricula reflect the values and priorities of your staff, program managers,
families, and community.
Step 4 - Testing Your System
We knew that we had a lot to learn about how this assessment system would
work for all of our programs. We decided to field test the system to be
sure we were getting what we needed and that the system was not overly
burdensome on our programs. Fifteen Head Start programs, three public
school preschool programs, and three preschool special education programs volunteered
to participate in the field test.
The field test allowed us to try out approaches with a manageable number
of motivated programs. They were our practice group for funding, technical
assistance, training, implementation, and reporting. We convened another
broad-based advisory committee of forty stakeholders to meet quarterly
to receive updates and advise us on the policies and procedures. To evaluate
our field test, we conducted focus groups, interviews, and surveys of
teachers, education administrators, directors, and parents. Results indicated
that the teachers and administrators were satisfied with the system. They
believed the system was usable, the items on the scales were meaningful,
and the system provided useful reports. Parents also said
that the reports were useful and understandable.
We found some areas that needed improvement:
- Teachers and assistant teachers reported that they needed more training.
- Teachers found it difficult to use this system to document progress for children with very involved disabilities. For example, teachers of children diagnosed with autism spectrum disorder complained that their children would be unlikely to show any improvement on the language scale assessment items in an entire year.
- Teachers were concerned about the amount of time they spent on assessment. They reported spending an average of 2.5 hours per week, although the amount of time decreased as they became familiar with the system.
To address the areas of need, revisions have been made in the frequency
and type of training offered. In addition, a committee to support children
with complex disabilities came together to design a system that will
provide teachers with more finely differentiated information about the
progress of their children.
We believe it is valuable for local agencies to test changes in ongoing
child assessment, and in their procedures for analyzing information on
children's progress and accomplishments. Implementing new efforts in a
limited set of classrooms and centers can uncover problems, help fine-tune
your system, and contribute to more successful implementation.
Step 5 - Ongoing Implementation and Problem Solving
This year, approximately 30,000 children are being assessed in the Indicators
of Success Project. We have learned a lot over the last three years. We
are continuing to work hard to build a system that will work for teachers
and children during this period of rapid expansion and implementation.
Our priorities focus on training, equipment, and continued communication.
We need to be sure that staff have the computers and training they need
to use the system. To keep communication open, we set up committees on
curriculum, the MAPS assessment scale, supporting children with complex
disabilities, training, and technology.
One simple lesson we learned from implementing this project is that data
on child outcomes at the beginning of each year are the most important
and useful information to guide program improvement efforts. These data
can help programs make better decisions in allocating resources, staff
development, and technical assistance to improve the progress of children.
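As an illustration of that lesson, the fragment below uses invented fall baseline averages and an invented cutoff to show how entry data might flag the classrooms to reach first with extra training or technical assistance; the numbers and threshold are ours, not Ohio policy values.

```python
# Invented mean fall ratings by classroom; the cutoff is illustrative only.
fall_means = {"room_1": 9.5, "room_2": 14.2, "room_3": 7.8, "room_4": 12.0}
SUPPORT_CUTOFF = 10.0

# Classrooms whose entering children rate lowest become the first
# candidates for targeted staff development and technical assistance.
needs_support = sorted(
    (room for room, score in fall_means.items() if score < SUPPORT_CUTOFF),
    key=fall_means.get,
)
print(needs_support)  # ['room_3', 'room_1']
```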
Another important lesson we learned is that when a system begins to hold
programs accountable, teachers feel pressure to reach the specified outcomes.
One program director reported that she observed her teachers walking around
the room with clipboards documenting what was going on rather than facilitating
learning. Another director told us that teachers were using "drill
and kill" teaching strategies because they felt pressure to prepare
children for state assessment efforts. We have communicated with teachers
around the state, helping them understand that good instruction will be
the deciding factor in improving child outcomes, not good paperwork.
To clarify our support for developmentally appropriate practices, we created
a User's Manual with many, many examples of how to observe and foster
progress on outcomes in a developmentally appropriate way.
The implementation of the Ohio child outcomes assessment system has depended
on relationships. Thousands of individuals have walked these five steps
with us. We have continued to develop close partnerships that hold us
responsible for the goals we all have for children. We have gained consensus
on what outcomes to assess and how to measure them. And we have begun
to be better able to communicate the very real impact our programs have
on the lives of children and families.
When will our indicators begin to indicate success? Our hard work to
document progress is already paying off. One large urban district has
been able to document significant progress on some meaningful indicators.
In the fall, they reported that 24 percent of their Head Start children
could name ten or fewer letters; in the spring, 62 percent could. In the
fall, 8 percent of their children could name eleven or more letters; in
the spring, 40 percent could. And in the fall, only 1 percent of the
children could write using some complete words; by the spring, 15 percent
could do so.
Probably the comment that makes us the most proud came from a parent
working on a committee to adapt our system to fit children with complex
disabilities. She said that she believed that what we are doing in this
area is very important because the system is strengths-focused. Her son
has a significant brain injury and during the development of his Individual
Education Plan, the school psychologist had written "Not applicable"
in the section used to list a child's strengths. She said, "You
are giving them something to write in that section."
We are far from finished with the design of this system. In fact, we
do not intend to reach a point of completion. As we use data on children's
progress to guide further improvements in programs and classrooms, we
will continue to strive towards higher and more meaningful goals.
Mary Lou Rush is the Interim Director at the Ohio Department of Education,
Office of Early Childhood. T: 614-466-0224.
Dawn Denno is a consultant for the Ohio Department of Education, Office
of Early Childhood. T: 513-874-1771.
Edith Greer is an Assistant Director at the Ohio Department of Education,
Office of Early Childhood. T: 330-364-5567.
Ann Gradisher is an Assistant Director at the Ohio Department of Education,
Office of Early Childhood. T: 330-220-6410.