School Indicators & Profiles SIG

A service to members of the American
Educational Research Association

Integrating School Indicators, School Effectiveness, and School Improvement Research—The Louisiana School Effectiveness and Assistance Program (SEAP)

Chair: Charles Teddlie, Louisiana State University
Discussant: Carol Taylor Fitz-Gibbon, University of Durham, United Kingdom

Objectives of the Symposium

This symposium will cover policy and research issues related to a systemic reform in the way that Louisiana monitors education performance and delivers necessary resources and services to support school improvement. The six proposed presentations will address various aspects of the development and implementation of a school indicator/effectiveness/improvement program currently being piloted in Louisiana. The School Effectiveness and Assistance Program (SEAP) has research, practice, and policy implications both nationally and internationally because it is an effort to integrate activities associated with three distinct research disciplines into one coordinated, statewide program that

  1. measures the effectiveness of nearly 1,500 Louisiana public schools using achievement and other performance indicators (SEAP-I, the school indicator component);
  2. intensively assesses the effectiveness of selected schools (12 in 1996-97, 60 in 1997-98) using a wide variety of process and product data gathered through SEAP-directed school site visits and the schools' own internal needs assessments (SEAP-II, the school effectiveness component); and
  3. initiates school improvement efforts in the targeted schools (SEAP-III, the school improvement component), based on the SEAP-II findings.

Scientific or Educational Importance

Though there have been numerous calls for the integration of school indicators, school effectiveness, and school improvement efforts (e.g., Brown, Riddell, & Duffield, 1996; Fitz-Gibbon, 1996; Fitz-Gibbon & Kochan, in press; Kochan, Teddlie, & Franklin, 1997; Reynolds, Hopkins, & Stoll, 1993; Willms, 1992), SEAP is the first program of its scope (i.e., statewide) to be implemented.

It also is the focal point of the Louisiana Department of Education's (LDE) current effort to reinvent its relationship with local education authorities (LEAs) and schools, moving from a regulatory stance to a service orientation. Toward that end, the LDE is currently undergoing a total reorganization.

Presentations

Joining School Indicator, School Effectiveness, and School Improvement Research--The International Perspective. David Reynolds, University of Newcastle, U.K.; Charles Teddlie, Louisiana State University

Objectives: This paper will first discuss the artificial cleavage that still persists between the three related fields of school indicators, school effectiveness, and school improvement. The discussion will focus on the paradigmatic differences (e.g., postpositivism, constructivism) that have separated the three fields and on how these differences can now be reconciled within the paradigm of pragmatism, together with the use of mixed methods and mixed model designs.
Perspectives and/or Methods: The methods involve an integrated literature review drawing primarily from four recent sources: Reynolds, Hopkins, and Stoll's (1993) review linking school effectiveness and improvement; Fitz-Gibbon's Monitoring Education: Indicators, Quality and Effectiveness (1996), which discusses the role of education indicator research in school improvement; Teddlie and Reynolds's forthcoming International Handbook of School Effectiveness Research, which includes chapters that review and integrate the three related fields; and Tashakkori and Teddlie's forthcoming Mixed Methods and Mixed Model Studies in the Social and Behavioral Sciences, which reviews the denouement of the paradigm wars and the emergence of pragmatism and mixed model designs.

Conclusions/Point of View: The authors conclude that future research programs at the state and national levels should integrate the methodologies of the three related fields, thus allowing the identification of more effective/typical/less effective schools based on multiple indicators, the intensive assessment of schools of interest, and the development of change process models for the identified schools. Speculation about integrated international research programs will also be offered.

Dueling Agendas--Louisiana's Prescription for Balancing the Often Competing Demands of Education Improvement and Accountability. W. Marlyn Langley, Bobby Franklin, Louisiana Department of Education

Objectives: Louisiana is noted for its diverse population and distinctive political heritage. This paper will describe the systemic education accountability reforms currently underway in Louisiana and the political and social context within which those reforms are taking place.

Perspectives: Louisiana's system of government is highly centralized and traditionally top-down. The education reform efforts now underway are an attempt to break that mold. The LDE, under new management, is trying to (a) reconceptualize its role, moving from regulator to facilitator, and (b) replace the current hodgepodge of performance monitoring mechanisms--all artifacts of the piecemeal reforms of the 1970s and 1980s--with an integrated and systemic education accountability process.

Conclusions/Point of View: The development of a statewide school accountability system is a process fraught with competing ideas, beliefs, philosophies, and political agendas that can stymie progress and subvert even the best intentions of reformers. Louisiana has embarked upon a progressive journey to restructure its education system by changing the LDE's primary function from one of regulation to one of service. While trying to redirect resources to fulfill this new calling, the LDE is charged with developing a means by which schools are held accountable for student performance. SEAP is the LDE's mechanism for striking a balance between those demanding school closures and those striving for improvement.

Analyzing Statewide School Effectiveness Datasets Accurately and Fairly--A Review of SEAP-I. Eugene Kennedy, Louisiana State University; Garrett Mandeville, University of South Carolina; Linda Crone, West Virginia Education Fund

Objectives: The presenters will review and critique the rationale behind and methods employed by SEAP staff to produce school effectiveness indices (SEIs) for nearly 1,500 Louisiana public schools. More than just a research exercise, the SEIs are used to target schools for intensive site-based assessment and, ultimately, improvement. Each school's performance is judged using two achievement measures: (1) a standards-based Baseline Performance Indicator (BPI) and (2) a Relative Performance Indicator (RPI), which takes into consideration six student and school intake characteristics. Each BPI is a school-level composite score reflecting student performance on all state-administered criterion-referenced tests (CRTs) for that site. The RPI, also a school-level composite score, reflects student performance on state-administered norm-referenced tests (NRTs) as well as CRTs. Other indicators (e.g., a student participation index based on attendance, dropout, and/or other data) may be added at a later date.
Perspectives and/or Methods: The SEIs calculated during the SEAP pilot are based on three years of achievement data (spring 1995 through spring 1997) from the LDE's Louisiana Educational Assessment Program (LEAP). These composite scores were produced by summing and averaging transformed subject area scores for the respective grade-level tests (Crone, Lang, Franklin, & Halbrook, 1994; Crone, Lang, Teddlie, & Franklin, 1995). The RPIs were calculated using two competing statistical models: school-based regression and multilevel modeling (HLM). The two statistical models will be discussed in terms of their comparative ability to identify the effectiveness status of schools.
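
To make the value-added logic behind the RPIs concrete, the sketch below shows one generic way a regression-based relative performance indicator can be computed: regress a school-level composite score on intake characteristics and treat each school's residual as its performance relative to statistically similar schools. This is an illustration only, not SEAP's actual code; the data, the two intake measures shown, and the 5% flagging cutoff are all hypothetical.

    # Illustrative sketch of a regression-based relative performance
    # indicator; all data and thresholds here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_schools = 1500

    # Hypothetical standardized intake characteristics (two shown;
    # SEAP's RPI adjusts for six student and school intake measures).
    intake = rng.normal(size=(n_schools, 2))

    # Hypothetical school-level composite achievement scores: partly
    # explained by intake, partly by school-specific effects.
    composite = 100 + intake @ np.array([-5.0, 1.5]) + rng.normal(scale=4.0, size=n_schools)

    # OLS fit: composite ~ intercept + intake characteristics.
    X = np.column_stack([np.ones(n_schools), intake])
    coef, *_ = np.linalg.lstsq(X, composite, rcond=None)

    # A school's residual is its performance relative to statistically
    # similar schools: positive = above expectation, negative = below.
    rpi = composite - X @ coef

    # Flag the lowest 5% of residuals as candidates for intensive
    # assessment (the cutoff is arbitrary, for illustration only).
    candidates = np.argsort(rpi)[: int(0.05 * n_schools)]
    print(f"{len(candidates)} schools flagged; lowest residual = {rpi.min():.2f}")

A multilevel (HLM) alternative would instead model students nested within schools and treat the school effect as a random intercept; the regression sketch above corresponds to the simpler of the two competing approaches the paper compares.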
Data Source of Research: The SEAP pilot utilizes LEAP CRT data collected at grades 3, 5, 7, 10, and 11 and NRT (California Achievement Test/Form 5) data for grades 4, 6, and 8. When SEAP goes into full implementation, it will be based on Louisiana's new statewide performance-based assessments, which will be phased in beginning in SY 1997-98.
Conclusions/Point of View: Opinions differ widely as to the appropriate method for weighing and monitoring school performance (e.g., actual versus relative or value-added scores, regression versus HLM; Fitz-Gibbon, 1996; Salganik, 1994; Willms, 1992). The authors contend that the SEAP approach yields an accurate and fair assessment of school performance: one that weighs school performance against state standards, yet recognizes the different challenges that school staff face in advancing student learning.

Gathering and Analyzing Intensive School-level Process Data--A Review of SEAP-II. Susan Kochan, Louisiana Department of Education; Maryann Durland, Kentucky Institute for Educational Research; Robin Jarvis, Lysha Kemper, Louisiana State University

Objectives: Leading researchers in the school effectiveness and education indicator fields have long cited the need for process data to lend insights into the schooling process, to inform policy, and to suggest strategies for improving school performance (Blank, 1993; Brookover et al., 1979; Fitz-Gibbon, 1996; Oakes, 1989; Porter, 1991; Rutter et al., 1979; Willms, 1996). This presentation will focus on the methods piloted during SEAP-II to collect and analyze process data from targeted SEAP schools.
Perspectives and/or Methods: In the spring of 1997, approximately 30 LDE staff joined 5 university-based researchers for intensive 2-day site visits to each of the 12 SEAP-II schools in order to collect behavioral and attitudinal data. At each site, a five-member LDE/university team conducted 24 classroom observations, general campuswide observations, a teacher focus group, a student focus group, and a principal interview. Parent, student, teacher, and principal surveys also were administered. A variety of univariate statistics and qualitative methods were used to analyze the resulting process data.
Data Source of Research: Process and product data collected during SEAP-II site visits, SEAP-I achievement data, and archival data from other sources for SYs 1995-96 through 1997-98, as available.
Conclusions/Point of View: The SEAP teams used established school and teacher effectiveness methods (Teddlie, 1994) to collect process data of the kind long sought in indicator research (Porter, 1991; Willms, 1996). This process data collection effort enabled the SEAP teams to make very focused recommendations for school improvement and provided a mechanism for gathering input from students, parents, teachers, and administrators. SEAP-II also was an invaluable learning experience for LDE staff, many of whom make administrative and/or policy decisions affecting schools but spend little time in the field. Because the LDE staff who participated in SEAP-II were drawn from throughout the agency, the pilot also furthered the LDE's planned shift toward a more collaborative and service-oriented relationship with schools.

Tying School Improvement to School Accountability: A Review of SEAP-III. James Meza, University of New Orleans; Sam Stringfield, Johns Hopkins University

Objectives: This paper summarizes the 12 school improvement projects initiated in SY 1997-98. School faculty and administrators developed final school improvement plans, drawing on (a) draft school improvement plans developed by LDE/university staff, which were based on SEAP-I and -II findings and (b) needs assessments (self-studies) conducted by the schools themselves.
Perspectives and/or Methods: This paper is presented from the perspective of external school improvement experts who helped the 12 schools to identify available strategies and restructuring models (e.g., Accelerated Schools, Success for All) (Stringfield et al., 1997) that might assist them in their respective school improvement efforts. The methods section of the paper will summarize the context-specific improvement plans that each school developed for SY 1997-98 through SY 1999-2000.
Data Source of Research: Process and product data collected during the course of the SEAP-II site visits, SEAP-I cognitive data, and archival data from other sources.
Point of View: SEAP site teams and external school improvement specialists can provide valuable external perspectives on the strengths and weaknesses of schools and facilitate the delivery of needed resources and services to support school improvement. At times, the SEAP process may even serve as a mechanism for validating the staff's own preconceived needs. Ultimately, however, the direction and the impetus for improvement should come from the school itself, backed by the shared commitment of faculty, administrators, district, and community.

The SEAP Process--Illustrative Case Studies from SEAP-II and III. Debbie Heroman and Sharon Pol, Louisiana State University; Bobby Franklin, Louisiana Department of Education

Objectives: This presentation will give symposium participants a "real feel" for the SEAP process by presenting several illustrative case studies of schools that participated in the SEAP pilot.
Methods: Through narrative, with audio-visual support, the presenters will walk symposium participants through several SEAP-II site visits and the LDE's collaborative approach to compiling draft case studies and recommendations. They will explain how findings from the SEAP site visits and the schools' own self-assessment were integrated to produce improvement strategies tailored to the specific needs of the individual schools, and will describe how the respective schools' improvement efforts have unfolded.

References


Blank, R.K. (1993). Developing a system of education indicators: Selecting, implementing, and reporting indicators. Educational Evaluation and Policy Analysis, 15(1), 65-80.

Brookover, W.B., Beady, C., Flood, P., Schweitzer, J., & Wisenbaker, J. (1979). School social systems and student achievement: Schools can make a difference. New York: Praeger.

Brown, S., Riddell, S., & Duffield, J. (1996). Possibilities and problems of small-scale studies to unpack the findings of large scale studies of school effectiveness. In J. Gray, D. Reynolds, C. Fitz-Gibbon, & D. Jesson (Eds.), Merging traditions: The future of research on school effectiveness and school improvement (pp. 93-120). London: Cassell.

Crone, L.J., Lang, M.H., Franklin, B.J., & Halbrook, A.M. (1994). Composite versus component score: Consistency of school effectiveness classification. Applied Measurement in Education, 8(4), 365-377.

Crone, L.J., Lang, M.H., Teddlie, C., & Franklin, B. (1995). Achievement measures of school effectiveness classification: Comparison of model stability across years. Applied Measurement in Education, 8(4), 365-377.

Fitz-Gibbon, C.T. (1996). Monitoring education: Indicators, quality and effectiveness. London: Cassell.

Fitz-Gibbon, C.T., & Kochan, S. (in press). School effectiveness and educational indicators. In C. Teddlie & D. Reynolds (Eds.), The international handbook of school effectiveness research. London: The Falmer Press.

Kochan, S., Teddlie, C., & Franklin, B. (1997, July). The evolution of an indicator system in Louisiana: Accomplishments and challenges. Paper presented at the Evidence-Based Policies and Indicator Systems International Conference, Durham, UK.

Oakes, J. (1989). What educational indicators? The case for assessing school context. Educational Evaluation and Policy Analysis, 11(2), 181-199.

Porter, A. (1991). Creating a system of school process indicators. Educational Evaluation and Policy Analysis, 13(1), 12-29.

Reynolds, D., Hopkins, D., & Stoll, L. (1993). Linking school effectiveness knowledge and school improvement practice: Towards a synergy. School Effectiveness and School Improvement, 4(1), 37-58.

Rutter, M., Maughan, B., Mortimore, P., & Ouston, J., with Smith, A. (1979). Fifteen thousand hours: Secondary schools and their effects on children. Cambridge, MA: Harvard University Press.

Salganik, L.H. (1994). Apples and apples: Comparing performance indicators for places with similar demographic characteristics. Educational Evaluation and Policy Analysis, 16(2), 125-141.

Stringfield, S., Millsap, M.A., & Herman, R. (1997). Urban and suburban/rural special strategies for educating disadvantaged children: Findings and policy implications of a longitudinal study. Washington, DC: U.S. Department of Education.

Tashakkori, A., & Teddlie, C. (in press). Mixed methods and mixed model studies in the social and behavioral sciences. Thousand Oaks, CA: Sage Publications.

Teddlie, C. (1994). Integrating classroom and school data in school effectiveness research. In D. Reynolds et al. (Eds.), Advances in school effectiveness research and practice. Oxford: Pergamon.

Teddlie, C., & Reynolds, D. (Eds.). (in press). The international handbook of school effectiveness research. London: The Falmer Press.

Willms, J.D. (1992). Monitoring school performance: A guide for educators. London: The Falmer Press.