Class mean = 95% on Exam 1. Awesome. Now… What Did I Do Differently?

This semester I decided to try something different with our approach to the first unit exam in our Trigonometry class. Typically, I give students in all my classes an "exam objectives sheet" prior to the exam. The sheet lists the objectives the students should master. Basically, I am telling them what appears on the test and in what order. That way, there are no surprises or "Gotcha!" moments on test day. Here's a subset of objectives from an AP Stats exam objectives sheet to give you a sense of how these sheets look in my classes:

In my teaching experience, students have at times struggled to dissect the verbal instructions and figure out what potential exam problems might look like. For the first Trigonometry test, I decided to run this process "in reverse."

Here's the exam objectives sheet students received Monday:

Trig_Exam_Objectives

On Monday, I had students work for 22 minutes (timer on the board) in self-selected groups of 3 to 4, writing objectives for the problem sets. I modeled on the board how to write the first objective for section 1.1, problems 1-8.

Here's a sample problem from section 1.1:

Find the domain and range of each relation.
#4: { (2,5), (3,5), (4,5), (5,5), (6,5) }

I talked students through writing an objective for items of this type: "Given a relation, state the domain and range." Several students questioned why I did not start with "given a set of ordered pairs." I told them this choice was purposeful because in practice exercises, students were also given relations in graphical form. We discussed that starting the objective with the phrase "given a relation" captured both types of problems.

Students worked in small groups and wrote objectives for EVERY item. I circulated the room and confirmed EVERY student had written objectives for EVERY item. Then we spent the remaining half of class discussing common misconceptions and errors on the problems: why each error occurs, what a student making the error would be thinking, and why that thinking is erroneous. Here's the data for one of my classes (n = 16):

Before I start jumping for joy, it's probably a good idea to consider my other classes. Here is a comparison of the three sections I have.

The other two sections did not fare so well. The much larger dispersion of scores in the sections labeled B and C concerns me (s = 13.0229 and s = 11.3002, respectively). On our grade scale, the median in all three sections is an "A."
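If you want to run this sort of per-section summary yourself, here is a minimal Python sketch using the standard-library statistics module. The score lists are hypothetical placeholders standing in for a gradebook export, not my students' actual scores; only the calculations (mean, median, and sample standard deviation s) mirror what I report above.

```python
from statistics import mean, median, stdev

# Hypothetical exam scores for three sections; substitute a real gradebook export.
sections = {
    "A": [100, 98, 95, 97, 92, 99, 96, 94, 100, 93, 98, 91, 95, 97, 90, 96],
    "B": [100, 95, 62, 88, 97, 76, 99, 93, 70, 96, 84, 91, 98, 68, 90, 94],
    "C": [99, 96, 58, 90, 73, 95, 88, 65, 97, 92, 80, 94, 69, 91, 86, 93],
}

for label, scores in sections.items():
    # stdev() is the sample standard deviation (n - 1 in the denominator),
    # which is the "s" quoted above.
    print(f"Section {label}: n = {len(scores)}, "
          f"mean = {mean(scores):.4f}, "
          f"median = {median(scores)}, "
          f"s = {stdev(scores):.4f}")
```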

Teachers have to be data detectives to diagnose what students do or do not know. The three outliers above had some misconceptions, and the evidence suggests little to no work is taking place outside of class. As a practitioner, I also have to think about how effectively I did or did not teach the material. Obviously, something different is going on in the class labeled "Column C."

When I compare the global mean and median across my three sections this year (mean = 92.0351; median = 96) to last year's data (mean = 85.61; median = 87), I am pleased to see a dramatic improvement on the assessment (yes, this is the exact same test I used last year). I will likely take the same approach to preparing students for the second exam and use data analysis to determine whether the approach is contributing to the improvement in scores.
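For the year-over-year comparison, a similar sketch pools all three sections into one list before computing the global mean and median, then reports the change from last year's summary numbers (the 85.61 and 87 quoted above). Again, the section score lists are hypothetical placeholders, not my actual data.

```python
from statistics import mean, median

# Hypothetical section score lists; substitute real gradebook data.
section_a = [100, 98, 95, 97, 92, 99, 96, 94]
section_b = [100, 95, 62, 88, 97, 76, 99, 93]
section_c = [99, 96, 58, 90, 73, 95, 88, 65]

# Pool every student's score before summarizing, rather than averaging the section means.
all_scores = section_a + section_b + section_c

this_year_mean, this_year_median = mean(all_scores), median(all_scores)
last_year_mean, last_year_median = 85.61, 87  # last year's summary values for the same test

print(f"This year: mean = {this_year_mean:.4f}, median = {this_year_median}")
print(f"Change from last year: mean {this_year_mean - last_year_mean:+.2f}, "
      f"median {this_year_median - last_year_median:+}")
```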
