Sunday, October 26, 2014

ED 615: Reflective Essay Blog Post




I have taken several standardized tests over the course of my life. In elementary school, for several years in a row, I took a test called the California Achievement Test. Even though I went to school in Maryland, we all took the California Achievement Test because Maryland did not have its own state test at the time. This test was our equivalent of the MSA or PARCC tests that elementary students take now.
In high school, I took three Advanced Placement tests – one each in Calculus, English, and French. I earned a score of “3” out of “5” on each test. These scores translated to 14 college credits toward my undergraduate degree at UMBC and actually allowed me to graduate on time. In high school, I also took the PSAT and the ACT once each, in addition to the SAT, which I took at least twice. Because the College Board (maker of the SAT) combined a student’s highest English and Math section scores across different test dates to arrive at the final score, I was able to maximize my score by taking the test multiple times. I have also taken the GRE for entrance into graduate school, but I did not fare as well on that test; I only took it once.
I do not feel that taking these tests fully demonstrated evidence of my learning. This is especially true of the SAT. In high school, between my first and second attempts at the SAT, I enrolled in an after-school coaching class that my homeroom teacher (who was also a math teacher) offered. What I learned most from the SAT prep course were strategies for approaching each type of test question. The course was very effective, but it did not help me display the content that I had learned in my regular math class.
This example shows the problem with traditional standardized tests. Teachers can begin to “teach to the test.” On the opposite end of the spectrum, variability in teachers’ instruction can limit a student’s performance. The instructional sensitivity article shows that not all standardized tests actually generate the results they set out to produce (D'Agostino, Corson, & Welsh, 2007). The study demonstrated that students whose teachers’ instruction matched the way the standardized test was structured scored higher on the test. This line of research stemmed from a 1981 case (Debra P. v. Turlington) in which a student brought a claim against the state of Florida asserting that students of color were not taught the material included on the state’s minimum basic achievement test.

I have seen a similar example firsthand in the middle school where I work. Two years ago, when I first started working there, I co-proctored the Maryland State Assessment (MSA) for a class of 6th grade students. Several of the students either asked me to read the word “diagonal” for them from the test or asked me what the word “diagonal” meant. If so many of the students had trouble with this one, very important word, how many other points were they missing? Clearly, these students had not learned everything they needed in order to be prepared for the standardized test they were taking. The fact that the school had low test scores for several years in a row made me question how well the classroom instruction was matching up with the test content. The same mismatch occurred last year on a wider scale when all of the students in Maryland had to take the MSA again, even though the curriculum had changed to the Common Core. Thus, the test did not reflect what the students were learning in their curriculum.
I think the development of standardized testing was motivated by teachers, parents, policy makers, and community members wanting to know how well students from different regions compared in terms of acquired knowledge. States could then use that data to make funding decisions based on how much help students needed to reach pre-set standards. By having one organization make a test for everyone, states could eliminate the bias that each teacher might have introduced had she created her own tests.
The assumption is that giving the same test under the same conditions to everyone normalizes everything. However, we now know that various other factors come into play, including the teacher’s knowledge base, the teacher’s instructional style, and the students’ prior knowledge. For these reasons, standardized tests do not always produce the results their administrators set out to achieve.





References

Balf, T. (2014, November). A Smarter, Fairer SAT. Popular Science, p. 30.
Black, P., & Wiliam, D. (1998, October). Inside the Black Box: Raising standards through classroom assessment. Phi Delta Kappan, 139-148.
D'Agostino, J. V., Corson, N. M., & Welsh, M. E. (2007). Instructional Sensitivity of a State's Standards-Based Assessment. Educational Assessment, 12(1), 1-22.
Popham, W. J. (2014). Classroom Assessment. Upper Saddle River, NJ: Pearson Education, Inc.

2 comments:

  1. I agree – I do think that strategy is taught for standardized tests more than content and application. I remember being told on some standardized tests that if you did not know the answer, you should skip it and it would not be counted against you. Do you think it is still valuable to teach strategies and different approaches to taking standardized tests, or should we simply stick to the concepts and applications?

  2. I have the same conflict with teaching test-taking strategies. That in and of itself is a symbol of what is wrong with standardized testing: we use valuable instructional time teaching students how to take a test. In order to better their chance of test-taking success, we gear content delivery to match the style of the tests rather than the learning style of the students. Further, we are now in this realm of testing higher-order thinking skills through standardized assessment, which hopefully will soon be recognized for being as accurate as the IQ tests of yore. As you described, Juanita, the institution is set on creating these scientific apples-to-apples comparisons of our students, but without accounting for the number of variables or defining valid controls. Do you think this is a worthy goal? In what environment would this model of assessment actually "achieve the desired results"?
