Interim Joint Committee on Education

 

Subcommittee on Elementary and Secondary Education

 

Minutes of the 4th Meeting

of the 2003 Interim

 

November 3, 2003

 

The 4th meeting of the Subcommittee on Elementary and Secondary Education of the Interim Joint Committee on Education was held on Monday, November 3, 2003, at 10:00 AM, in Room 129 of the Capitol Annex. Senator Lindy Casebier, Chair, called the meeting to order, and the secretary called the roll.

 

Present were:

 

Members: Senator Lindy Casebier, Co-Chair; Representative Jim Thompson, Co-Chair; Senators Brett Guthrie and Alice Kerr; Representatives Tim Feeley, Derrick Graham, Reginald Meeks, Harry Moberly, Arnold Simpson, and Charles Walton.

 

Guests:  Gene Wilhoit, Commissioner, Department of Education; Richard Innes, Bluegrass Institute for Public Policy Solutions; and Kathy G. Louisgnont, Partners for Kentucky’s Future.

 

LRC Staff:  Janet Stevens, Sandy Deaton, Erin McNees, and Rita Ratliff.

 

In July 2002, the Program Review and Investigations Committee directed staff to review various aspects of the Commonwealth Accountability Testing System (CATS).  The report was presented and approved by the committee in August 2003. Erin McNees, Program Review Analyst, gave a brief overview of the report.

 

Ms. McNees explained major findings from the report. These findings include: (a) the Kentucky Department of Education (KDE) accepts dropout data from school districts without sufficient validation; (b) estimated cost of the CATS assessment is approximately $21 million, with school districts and the state each sharing approximately half that cost; (c) the correlation between individual students’ CATS and ACT scores is relatively high; (d) according to survey responses of superintendents, principals, and teachers, the quality of education is better under CATS than it was under KIRIS; (e) there is little evidence to indicate that teachers are teaching the test; (f) more than half the teachers surveyed felt the process used by KDE to set school improvement goals was inappropriate because the system does not measure individual student progress, but compares different classes; and (g) the reliability of writing portfolio scores has not improved in recent years.

 

Ms. McNees explained that the Kentucky Core Content Test assesses student knowledge of core content in the subject areas of reading, math, science, social studies, arts and humanities, practical living and vocational studies, and writing. She said that in order to comply with the federal No Child Left Behind (NCLB) Act, additional tests will be required in the 2005-06 school year. She said NCLB requires students to be tested annually in reading and math in grades 3 through 8.

 

Ms. McNees briefly explained other differences between NCLB and CATS: (a) under CATS, a school serves as its own baseline; under NCLB each school will start out with the same baseline; (b) under CATS, students moving from novice to apprentice, apprentice to proficient, or proficient to distinguished helps raise the overall school’s score; under NCLB, only students scoring proficient or above will be counted toward that school’s goal; (c) NCLB requires accountability for subgroups (economically disadvantaged students, students from major ethnic and racial groups, students with disabilities, and students with limited English proficiency); and (d) under NCLB, if one of the above subgroups fails to reach its target in either math or reading, the entire school will be identified as needing improvement.

 

Ms. McNees said that because of differences between the federal and state testing systems and the desire to keep CATS intact, in August KDE decided to implement a dual accountability system, where schools will be judged separately under federal and state systems. She said that this dual system could result in schools being eligible for rewards under one and classified as needing improvement under the other. She noted that concerns have been raised that this type of system will result in confusion as to how well a school is actually progressing.

 

Ms. McNees next discussed dropout rates in conjunction with CATS. She explained the current accountability cycle. She said the Program Review report showed that in 2002 the dropout rate declined to its lowest point, just below four percent, and that the general trend is toward lower dropout rates. She said that Program Review staff researched how school and district self-reported dropout rates were verified and validated by KDE. Staff concluded that KDE does not sufficiently verify or validate the dropout numbers it receives from local school districts. She said that due to the lack of verification of dropout numbers, Program Review staff recommended that as part of the attendance audit, KDE should review the school’s documentation that students coded as transfers are enrolled in another school. Dropout statistics should be corrected to reflect any inaccuracies found in the audit. The report further suggested that KDE consider sanctioning schools that underreport dropout statistics by lowering their scores on the accountability index by an additional amount or by making them ineligible for rewards that year.

 

The Program Review report recommended that KDE implement a uniform student information system, allowing KDE to monitor schools and track individual students as they move through the system, rather than relying on self-reported data. She said this type of system would allow KDE to track individual students who transfer, drop out, graduate, or withdraw for any reason.

 

The report also found that dropout audits in other states showed that actual dropout rates are consistently higher than the reported rates.  Ms. McNees noted that in August 2003, the Program Review and Investigations Committee directed the state Auditor’s office to do an audit of Kentucky’s self-reported dropout rate.

 

The Program Review and Investigations Committee also asked staff to estimate the financial cost of the CATS assessment for local school districts. The cost to school districts was estimated to be approximately $10.6 million, or $22 per student tested for the 2000-2001 school year. The cost of the CATS assessment for the state is estimated to be $10.3 million, or approximately $21.75 per student tested for fiscal year 2003. Ms. McNees said that these estimates are only for direct monetary costs. The report recommended that KDE create a CATS testing expenditure category in the MUNIS system and encourage school districts to utilize this category for all CATS administrative expenses. It was further recommended that any problems in implementing this change should be reported to the Interim Joint Committee on Education and the Education Assessment and Accountability Review Subcommittee (EAARS).
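
 

For reference, the per-student figures cited above follow directly from the reported totals. The minimal Python sketch below backs out the implied number of students tested from the dollar amounts in the report; the student counts themselves were not stated in the testimony and are shown only as derived estimates.

    # Per-student cost check using the dollar figures from the report.
    # The implied student counts are derived here, not reported figures.
    district_total = 10_600_000    # district cost, 2000-2001 school year
    district_per_student = 22.00
    state_total = 10_300_000       # state cost, fiscal year 2003
    state_per_student = 21.75

    implied_district_students = district_total / district_per_student   # about 482,000
    implied_state_students = state_total / state_per_student            # about 474,000

    # District and state shares together total about $20.9 million, consistent
    # with the roughly $21 million overall estimate cited earlier in the report.
    print(round(implied_district_students), round(implied_state_students))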

 

Ms. McNees said that Program Review staff conducted a survey to elicit opinions about various aspects of the CATS assessment. Responses were obtained from approximately 800 teachers, 500 principals, and 100 superintendents.

 

When educators were asked how CATS has affected the overall quality of education compared to KIRIS, approximately 50 percent of superintendents and principals responded that the quality of education is now better. Approximately 40 percent of each group indicated that the quality of education has stayed about the same. Less than 10 percent responded that the quality of education has worsened under CATS.

 

When asked how CATS affects the way teachers teach, more than two-thirds of principals and superintendents responded that CATS affects teaching somewhat or very positively. Only 41 percent of teachers agreed that the effect of CATS on teaching was positive, and almost one-third of teachers felt the impact was negative. More than 60 percent of each group of educators indicated that teachers do not have enough instructional time to adequately teach the core content.

 

When asked whether preparation for the test was affected by repetition of questions, slightly less than half of principals and a majority of superintendents felt this affected preparation; many teachers agreed with this, but also stated that question repetition helped them better prepare students for the format of questions or for content stressed on the test. Ms. McNees said that there was no indication from the survey responses that teachers were teaching the test.

 

Ms. McNees said that survey responses indicated that teachers were less optimistic that schools could reach proficiency by 2014, with some teachers responding that not all students are capable, that the test does not measure progress of the same student over time, and that student population characteristics are not reflected in the goals. Principals and superintendents were more optimistic that proficiency is attainable by 2014.

 

            When asked if the consequences to schools that fail to meet their biennial improvement goals were appropriate, 55 percent of teachers and principals felt that consequences were inappropriate and 55 percent of superintendents felt that consequences were appropriate. A majority of principals and teachers in schools classified as “needing assistance” indicated that the assistance provided was somewhat or very helpful.

 

            When asked about the weighting of components of the accountability index, at least 75 percent of each group responded that the weights for reading, math, science, and social studies were about right.  Half of the teachers and 30 percent of principals and superintendents responding indicated writing portfolios are weighted too heavily. Responses indicated that most educators expressed the need for student-level accountability.

 

            Program Review staff also conducted a study of the correlation between CATS and ACT scores. Approximately 41,000 students met the criteria for inclusion in the study. Ms. McNees said the data indicated that CATS and the ACT have a relatively high positive correlation, with a coefficient of .738. The data also indicated that the ACT and CATS math and reading components are positively correlated, though not as strongly as the overall assessments.
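
 

As an illustration of the statistic being reported, a Pearson correlation of this kind can be computed from paired student scores as sketched below in Python; the score values are hypothetical and are not drawn from the study data, which covered roughly 41,000 students.

    # Pearson correlation between paired CATS and ACT scores.
    # The scores below are hypothetical; the study reported r = .738 overall.
    from statistics import correlation  # available in Python 3.10+

    cats_scores = [45.0, 62.5, 70.0, 55.5, 80.0, 66.0]
    act_scores = [17, 21, 24, 19, 28, 22]

    r = correlation(cats_scores, act_scores)
    print(f"Pearson r = {r:.3f}")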

 

            Program Review staff were directed to determine whether CATS had been judged valid by the National Technical Advisory Panel on Assessment and Accountability (NTAPAA). The NTAPAA issued a statement indicating that there is substantial evidence supporting the validity and reliability of CATS and that student performance on CATS constitutes a valid basis for rewarding schools or identifying schools that need to improve.

 

            Program Review staff also conducted a survey on writing portfolios. Ms. McNees indicated that this survey was done during the time schools were grading portfolios and that some responses may have been more negative than they would have been if the survey had been conducted at a different time during the school year. According to responses, 71 percent of teachers felt that the time it takes to score portfolios is not commensurate with the benefit students receive from writing them; 48 percent of teachers, 31 percent of principals, and 38 percent of superintendents indicated that writing portfolios were weighted too heavily on the accountability index. The survey also indicated that slightly more than 30 percent of teachers score their own students’ portfolios, which may result in less than objective scoring.

 

            Based on the writing portfolio survey findings, Program Review staff recommended that KDE utilize a formal audit procedure to increase the probability that portfolios are scored accurately. It was further recommended that (a) KDE work with schools and districts to reduce the practice of teachers scoring their own students’ portfolios; (b) KDE should survey teachers to determine how portfolio scoring training can be improved; (c) KDE should regularly replace benchmark portfolios with new samples; (d) KDE should encourage schools to provide teachers with more opportunities to practice scoring writing portfolios; (e) KDE should consider establishing consequences for schools that have low portfolio audit agreement rates; (f) KDE should consider reauditing schools that had a high number of scoring inaccuracies the prior year to ensure that scoring accuracy has improved; and (g) KDE should also consider increasing the number of schools randomly selected for audits so that the risk of facing consequences would encourage schools to score more carefully.

 

            Representative Graham asked why alternative school students are included in the main school’s scores and what KDE plans to do to correct this inaccuracy. KDE Commissioner Gene Wilhoit said that the State Board of Education is considering excluding students from the main school’s scores if those students are placed in an alternative school by social services or the criminal justice system. He said that if the main school itself places students in an alternative school setting and communication about those students’ progress breaks down, the main school should still be held accountable, and those students should be included in the main school’s score. Commissioner Wilhoit said that the Board should act on this decision this year.

 

            When questioned by Representative Graham about portfolio scoring and the follow-up process after an audit, Commissioner Wilhoit said that KDE is trying to take subjectivity out of the scoring process by examining the practice of teachers scoring only their own students’ portfolios and by providing rubrics to decrease emotional responses. He said that KDE is not as proactive with teachers as it is with portfolio scorer trainers. He indicated that contact with scorers will be enhanced through electronic communication and through direct conversations with teachers where there are large discrepancies between portfolio scores and audit scores.

 

            Commissioner Wilhoit said that he found the Program Review report to be extremely helpful. He said that in response to the recommendations in the report, KDE will implement a statewide student tracking system, with a pilot system in place by December 2003.  He said that any system without student identifiers will be inadequate. KDE’s goal will be to simplify the tracking system, making it electronic, to achieve accuracy and provide oversight.

 

Representative Feeley expressed his concern about using Social Security numbers as student identifiers but acknowledged that specific identifiers are needed to track students; he suggested using an internal numbering system like those currently used at some schools. Commissioner Wilhoit said the new system will allow for numbering other than Social Security numbers.

 

When asked by Representative Feeley how Kentucky rates in the amount of time spent on testing, Commissioner Wilhoit said that with No Child Left Behind, testing will be fairly uniform nationwide. He said that because Kentucky’s testing goes beyond multiple choice, it takes longer but provides more helpful information. He said the assessment becomes part of a solid instructional process. Representative Feeley agreed but expressed concern about the turnaround time for scores. Commissioner Wilhoit said that turnaround time is better but not where it should be. He said there are initiatives underway and that greater use of technology will be necessary for quicker scoring results.

 

A motion was made by Representative Graham, and seconded by Senator Kerr, to approve the minutes of the September 8, 2003, meeting. The motion carried by voice vote.

 

Senator Casebier introduced Richard Innes from the Bluegrass Institute for Public Policy Solutions. Mr. Innes discussed the formula utilized in determining graduation and dropout rates. He explained that Kentucky adopted the National Center for Education Statistics (NCES) formula. He said that this formula probably does not comply with the law, counts all graduates, not just on-time graduates, and uses a dubious approach and even more dubious data. He explained that the numerator in the formula is the number of graduates in the reported year and that the denominator is created by adding the number of graduates to the dropouts reported each year as this class proceeds through high school. He said another critical element in the formula is the quality of dropout data.
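
 

For clarity, the NCES formula as Mr. Innes described it, graduates in the reported year divided by those graduates plus the dropouts reported for that class across its four high school years, can be sketched as follows; the counts are hypothetical and serve only to show the mechanics.

    # NCES-style rate as described in the testimony (hypothetical counts).
    graduates = 900
    dropouts_by_year = [30, 25, 20, 15]   # dropouts reported in grades 9-12 for the class

    nces_graduation_rate = graduates / (graduates + sum(dropouts_by_year))
    nces_dropout_rate = 1 - nces_graduation_rate
    print(f"{nces_graduation_rate:.1%} graduated, {nces_dropout_rate:.1%} dropped out")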

 

Mr. Innes said that a fairly simple formula, the Classical Formula, which divides the number of graduates by the number of students who were in that class when it entered the ninth grade four school years earlier, provides a more credible calculation of graduation rates. He said when he applied the Classical Formula, the results showed a nine percent dropout rate as opposed to the 4.79 percent dropout rate reported by KDE. He said the Classical Formula does have limitations because it requires four years of data collection and is subject to corruption from migration effects.
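
 

The Classical Formula he described can be sketched the same way; the cohort figures below are hypothetical and are not the figures behind his nine percent estimate.

    # Classical Formula: graduates divided by the number of students in the class
    # when it entered ninth grade four school years earlier (hypothetical counts).
    ninth_grade_entrants = 1000    # fall ninth-grade enrollment four years earlier
    graduates = 920

    classical_graduation_rate = graduates / ninth_grade_entrants
    print(f"{classical_graduation_rate:.1%} graduated, {1 - classical_graduation_rate:.1%} did not")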

 

Mr. Innes said the Bluegrass Institute recommends that Kentucky adopt the Swanson’s Urban Institute Approach, a cumulative promotion index, which uses enrollment and graduation data, only requires two years of data, and is less impacted by retention issues. He said NCLB requires graduation rate calculations and that the Swanson formula will provide a more accurate indication of graduation and dropout rates in Kentucky schools.
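
 

The cumulative promotion index itself was not spelled out in the testimony; the sketch below reflects the Swanson/Urban Institute formula as generally published, the product of three grade-to-grade promotion ratios and a senior-year graduation ratio built from two adjacent school years, using hypothetical enrollment counts.

    # Cumulative promotion index (CPI) per the published Swanson method; this
    # level of detail goes beyond the testimony, and all counts are hypothetical.
    enroll_year1 = {"grade9": 1000, "grade10": 940, "grade11": 900}    # fall, year 1
    enroll_year2 = {"grade10": 930, "grade11": 890, "grade12": 870}    # fall, year 2
    diplomas_year2 = 840                                               # spring, year 2

    cpi = (enroll_year2["grade10"] / enroll_year1["grade9"]
           * enroll_year2["grade11"] / enroll_year1["grade10"]
           * enroll_year2["grade12"] / enroll_year1["grade11"]
           * diplomas_year2 / enroll_year2["grade12"])
    print(f"Estimated on-time graduation rate (CPI): {cpi:.1%}")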

 

In response to a question by Senator Guthrie on the grade level of questions on CATS, Mr. Innes provided a handout that indicates the CATS questions are two grade levels below actual grade level and suggested that there is evidence that further research is needed on this topic.

 

In response to questions by Representative Walton, Mr. Innes said that CATS is the most expensive testing system in the country because it relies heavily on open response questions, which are very inaccurate. He suggested a task force be formed to look into the cost issue and further suggested that a group of professionals who are not educators be included, rather than having only educators do the research.

 

Senator Casebier asked Mr. Innes if he was suggesting that Kentucky eliminate CATS. Mr. Innes said that Kentucky can comply with NCLB by using CTBS with fewer or no open response questions. Senator Casebier said that since the enactment of KERA, Kentucky has moved from KIRIS testing to CATS. He said that he believed teachers have a vested interest and can better develop a statewide testing system. He said that if a task force were created, those trained in education would make a more reputable panel.

 

Mr. Innes questioned whether the NTAPAA had reviewed the CATS questions because it appears that the grade level of questions is not meeting national standards, providing data that could be unreliable and invalid. He said he understands that there is no vehicle to review questions without breaching the confidentiality of the questions.

 

When asked by Senator Casebier what the Bluegrass Institute is, Jim Waters said that it is one of 49 think tanks in the U.S. and has been in operation for 16 years. He said that their education credentials lie with Mr. Innes.

 

In response to questions from Representative Feeley, Mr. Innes said that the dropout rate is growing nationally and indicated that there is no real data to analyze the problem nationwide. He said that Kentucky has reached its highest migration rate due to students moving from public schools to private schools.

 

In response to questions from Representative Thompson, Commissioner Wilhoit said that there is heavy involvement by teachers and a contractor who works with Kentucky in scrutinizing test questions. He said Kentucky is going in the right direction. He said that the content and cost of the test could be lowered, but that students’ thinking processes are better tested with CATS. Representative Thompson said that writing skills have become the most profound gain in Kentucky education and that he would be opposed to a cheaper test that lowered those gains. Commissioner Wilhoit said KDE is working toward better education through students’ writing ability and depth of knowledge of content.

 

Representative Graham agreed with Senator Casebier that educators should be included in developing testing changes, and that the test can be fine-tuned. He said that he has heard from some at higher education institutions that incoming freshmen are better prepared and that writing skills have greatly improved.

 

Senator Casebier reiterated that Kentucky is striving to improve its education system but said that going back to a different test, one that is only multiple choice, would send education spiraling backwards. He said that KERA may not be perfect, but Kentucky is moving in the right direction. He said that some states consider Kentucky a model for education reform.

 

There being no further business, the meeting adjourned at 11:55 a.m.