Annual Meetings

School Indicators & Profiles SIG

A service to members of the American Educational Research Association
A Longitudinal View of Change in School Effectiveness Status

John A. Freeman, The University of Alabama
Charles Teddlie, Louisiana State University
Eugene Kennedy, Louisiana State University


There is a growing awareness among school effectiveness researchers of the need for a shift in conceptual framework: from one that emphasizes the stability of a school's effectiveness over time to one that emphasizes the extent of change in that effectiveness (Gray et al., 1995). In 1995, John Gray and his colleagues conducted a multilevel analysis of three cohorts in 30 English secondary schools that illustrated the degree of change in effectiveness. The findings of that study indicated, among other things, that "only a small proportion of the schools in any particular locality will be improving or deteriorating in terms of their effectiveness in ways which are substantively significant" (p. 111). They estimated that between a fifth and a fourth of the schools would change their effectiveness status.

The Gray study also stipulated that the extent to which changes in effectiveness depend on the outcome measures employed needs to be explored. It was their contention that differences in the methods of constructing measures may affect the extent of change in a school's effectiveness; for example, the change may be concentrated in a particular subject area used in creating the measure of effectiveness.

As part of a study conducted in Louisiana (Freeman, 1997), an attempt was made to categorize all elementary schools in the state, according to prescribed criteria, as "improving," "stable," or "declining." The methodology for that study included ordinary least squares regression analysis, using a composite score created from CRT and NRT data for each school in each of three years. This composite score, along with each school's socioeconomic status and community type, was entered into the regression model, yielding a residual score for each year. The effectiveness status of each school was based upon these residual scores. Although the methodology differed from that of Gray et al. (1995), and there were differences in sampling and school configuration, the schools were entered into a matrix that divided them into quartiles in the first and third years of the study, similar to the Gray et al. (1995) study. Interestingly, the results were almost identical to those found by Gray et al. (1995).
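The residual-based categorization described above can be sketched as follows. This is a minimal illustration using entirely hypothetical data and only one contextual predictor (SES) rather than the study's two (SES and community type); the actual model and cut points were those of Freeman (1997), not this simplification.

```python
import random
import statistics

random.seed(1)

# Hypothetical school-level data: composite achievement and SES
# (percent free lunch), simplified to a single predictor for brevity.
ses = [random.uniform(0, 100) for _ in range(200)]
composite = [55 - 0.2 * s + random.gauss(0, 5) for s in ses]

# Simple OLS fit: expected achievement given SES.
mx, my = statistics.mean(ses), statistics.mean(composite)
slope = (sum((x - mx) * (y - my) for x, y in zip(ses, composite))
         / sum((x - mx) ** 2 for x in ses))
intercept = my - slope * mx

# Residual = actual minus expected achievement, the basis for status.
residuals = [y - (intercept + slope * x) for x, y in zip(ses, composite)]

# Quartile cut points on the residuals define effectiveness bands.
ordered = sorted(residuals)
q1, q3 = ordered[len(ordered) // 4], ordered[3 * len(ordered) // 4]
status = ["top quarter" if r >= q3 else
          "bottom quarter" if r <= q1 else "middle half"
          for r in residuals]
```

Schools with residuals in the top quartile achieve well above what their context predicts; the bottom quartile, well below.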

Based upon these preliminary findings, the primary objective of the present study was to partially replicate the Gray et al. (1995) study and compare the results in relation to the extent of change in effectiveness. This was accomplished by re-analyzing the data from the Freeman (1997) study with HLM (Bryk & Raudenbush, 1992).

Perspective of the Study

The implications of this study reach into the heart of school effectiveness research, which has focused on "what is" an effective school, and mark a turning point toward the study of "how did the school become effective?" The study of the process of school change will add to the body of knowledge that seeks to unite the often disparate research traditions known as school effectiveness and school improvement, by helping researchers understand the process of change in schools as a contingency-based concept (Slater & Teddlie, 1992) rather than one in which certain "ingredients" (Edmonds, 1979) for school effectiveness can be combined into a recipe for school improvement (Purkey & Smith, 1983). By using methodologies from school effectiveness research to measure change in school effectiveness, it is hoped that the path toward uniting the two areas will be facilitated.


The present study has evolved from a study that used a three-phased approach to the identification and subsequent examination of the phenomenon of "naturally occurring" school improvement, as identified by Teddlie and Stringfield (1993). The first two phases were designed to identify schools that were undergoing "naturally occurring" school improvement, while the third phase involved on-site data collection from eight schools identified as meeting the criteria for naturally occurring school improvement, as established in the first two phases.

In keeping with prior research at the Louisiana Department of Education (Crone, Franklin, Caldas, Ducote & Killebrew, 1992), SIPSCORES were created by transforming individual student-level CRT and NRT component scores to z scores and then calculating an average school-level score for all students tested in each of the last three years. Once SIPSCORES were created, regression analysis methods were used to determine the expected achievement of each school over a three-year period, based on contextual variables (socioeconomic status and community type). From these results, baseline data were established for schools in Louisiana (with pre-established configurations), identifying each school in the state as "improving," "stable," or "declining."
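The SIPSCORE construction (z-score each test component across all students, then average to the school level) can be sketched as below. The student records, school labels, and two-component composite are hypothetical stand-ins; the actual transformation details follow Crone et al. (1992).

```python
import statistics

# Hypothetical student records: (school_id, crt_score, nrt_score).
students = [
    ("A", 310, 45), ("A", 295, 52), ("A", 330, 60),
    ("B", 280, 38), ("B", 300, 41),
    ("C", 350, 70), ("C", 340, 66), ("C", 360, 75),
]

def zscores(values):
    """Standardize a component across all students statewide."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

crt_z = zscores([s[1] for s in students])
nrt_z = zscores([s[2] for s in students])

# A student's composite is the mean of the standardized components.
composites = [(c + n) / 2 for c, n in zip(crt_z, nrt_z)]

# SIPSCORE: school-level mean of its students' composites.
by_school = {}
for (school, *_), comp in zip(students, composites):
    by_school.setdefault(school, []).append(comp)
sipscores = {k: sum(v) / len(v) for k, v in by_school.items()}
```

Because each component is standardized statewide, a school's SIPSCORE directly expresses how far its students sit above or below the state mean.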

In an effort to provide a greater degree of validity to the findings, the data for all 634 schools examined were expanded to provide student-level information for the purpose of conducting HLM analyses. The results of these analyses were then treated with methods that mirrored the Gray et al. (1995) study.
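The core idea behind moving to student-level HLM is that achievement variance is partitioned into within-school and between-school components. A minimal sketch of that partition, using hypothetical nested scores (this is the variance decomposition underlying multilevel models, not a full HLM fit):

```python
# Hypothetical student scores nested in three schools.
scores = {
    "A": [52, 55, 58, 61],
    "B": [40, 43, 45],
    "C": [70, 68, 74, 72, 71],
}

all_scores = [v for vals in scores.values() for v in vals]
grand_mean = sum(all_scores) / len(all_scores)
school_means = {k: sum(v) / len(v) for k, v in scores.items()}

# Between-school variance: spread of school means around the grand mean.
between = sum(len(v) * (school_means[k] - grand_mean) ** 2
              for k, v in scores.items()) / len(all_scores)

# Within-school variance: spread of students around their school mean.
within = sum((x - school_means[k]) ** 2
             for k, v in scores.items() for x in v) / len(all_scores)

# Intraclass correlation: the share of variance lying between schools.
icc = between / (between + within)
```

The larger the intraclass correlation, the more a student's outcome depends on which school is attended, and the more a school-level "effectiveness" residual means.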

Data Sources

The data sources used for this study consisted of statewide standardized test scores, as follows:

1. Criterion-referenced Tests (CRTs) - Scaled student scores for the language arts and mathematics portions of the Louisiana Educational Assessment Program (LEAP) tests administered to all third and fifth grade students in the state of Louisiana during the 1991-92, 1992-93 and 1993-94 school years.
2. Norm-referenced Tests (NRTs) - Scaled student scores for the total battery component of the California Achievement Tests (CAT) which was administered to all fourth and sixth grade students in the state of Louisiana during the 1991-92, 1992-93 and 1993-94 school years.
The listed test scores were obtained for every school in the state; however, only those schools that fit the predetermined criteria for the study (elementary schools containing a third grade, with no grade higher than sixth) were used in the creation of SIPSCORES. All test data were obtained with permission from the Louisiana Department of Education, Office of Research and Development, Bureau of Pupil Accountability.

Other data used in the study consisted of the following:

3. Percentage of students enrolled in the free lunch program (used as the variable for determining SES) for each school in the state.
4. Community-type and school configuration data for every school in the state.

Data used to determine these two variables were obtained with permission from the Louisiana Department of Education, Office of Research and Development, Bureau of School Accountability.


It should be emphasized that the present study is not a complete replication of the Gray et al. (1995) study. There are a number of methodological dissimilarities between the two studies, such as sample size (30 vs. 634) and school configuration (secondary vs. elementary). Despite these differences, however, the results in terms of the percentages of schools that changed their effectiveness status are very similar when comparing like cells, as illustrated by the two tables below.

Changes in Schools' Effectiveness Over Time (Gray et al., 1995)

                        Position in 1992
Position in 1990     Top Quarter    Middle Half    Bottom Quarter
Top Quarter          (1) 15%        (2) 6%         (3) 0%
Middle Half          (4) 9%         (5) 35%        (6) 9%
Bottom Quarter       (7) 0%         (8) 9%         (9) 18%

Note. Table cell numbers are in parentheses. Cell percentages are indicated.
Changes in Schools' Effectiveness Over Time (Present Study)

                        Position in 1994
Position in 1992     Top Quarter    Middle Half    Bottom Quarter
Top Quarter          (1) 15.62%     (2) 8.04%      (3) 1.42%
Middle Half          (4) 8.83%      (5) 32.81%     (6) 8.20%
Bottom Quarter       (7) 0.63%      (8) 9.15%      (9) 15.30%

Note. Table cell numbers are in parentheses. Cell percentages are indicated.
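Transition matrices like those above are produced by cross-tabulating each school's quartile position in the first and final years and expressing each cell as a percentage of all schools. A minimal sketch with hypothetical status labels (the real study used the residual-based quartiles, not these toy lists):

```python
# Hypothetical effectiveness positions for 12 schools in two years.
year1 = ["top", "top", "middle", "middle", "middle", "middle",
         "bottom", "bottom", "top", "middle", "middle", "bottom"]
year3 = ["top", "middle", "middle", "middle", "bottom", "middle",
         "bottom", "middle", "top", "top", "middle", "bottom"]

bands = ["top", "middle", "bottom"]

# Cross-tabulate (year1 band, year3 band) pairs.
counts = {(r, c): 0 for r in bands for c in bands}
for a, b in zip(year1, year3):
    counts[(a, b)] += 1

# Convert counts to percentage-of-all-schools cells.
n = len(year1)
matrix = {cell: 100 * cnt / n for cell, cnt in counts.items()}

# Diagonal cells are schools that kept their status; the rest changed.
stable_pct = sum(matrix[(b, b)] for b in bands)
changed_pct = 100 - stable_pct
```

In both studies, the diagonal cells dominate, which is what confines substantive change in effectiveness status to roughly a fifth to a fourth of the schools.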

Significance of the Study

The significance of the study lies in the fact that its results provide additional support for the findings of the Gray et al. (1995) study in terms of how many schools in a particular grouping can be expected to improve or deteriorate. The present study also lends support to the notion that different methods of measuring effectiveness may produce similar results in terms of the total effectiveness of a school, contrary to the concern expressed in the Gray study.

The present study is also significant in that it contributes to the effort to unite the research areas of school effectiveness and school improvement. No longer is it sufficient to ask simply, "What is an effective school?" It is imperative that researchers begin to ask what causes particular schools to improve or deteriorate relative to other schools. A valid method of identifying change in school effectiveness will aid in the pursuit of improving all schools.


References

Bryk, A., & Raudenbush, S. (1992). Hierarchical linear models. Newbury Park, CA: Sage.

Crone, L. J., Franklin, B. J., Caldas, S. J., Ducote, J. M., & Killebrew, C. (1992). The use of norm-referenced and/or criterion referenced tests as indicators of school effectiveness. Louisiana Educational Research Journal, 56-77.

Edmonds, R. (1979). Effective schools for the urban poor. Educational Leadership, 37(1), 15-24.

Freeman, J. (1997). A methodological examination of naturally occurring school improvement in Louisiana schools. Unpublished doctoral dissertation, Louisiana State University.

Gray, J., Jesson, D., Goldstein, H., Hedger, K., & Rasbash, J. (1995). A multi-level analysis of school improvement: Changes in schools' performance over time. School Effectiveness and School Improvement, 6(2), 97-114.

Purkey, S., & Smith, M. (1983). Effective schools: A review. The Elementary School Journal, 83(4), 427-452.

Slater, R., & Teddlie, C. (1992). Toward a theory of school effectiveness and leadership. School Effectiveness and School Improvement, 3(4), 247-257.

Teddlie, C., & Stringfield, S. (1993). Schools make a difference: Lessons learned from a 10-year study of school effects. New York: Teachers College Press.

John A. Freeman
The University of Alabama
College of Education
Dept. of Administration and Ed. Leadership
Box 870302
Tuscaloosa, AL 35487-0302
