Instructional Strategies

This semester I decided to try something different with our approach to the first unit exam in our Trigonometry class. Typically, I give students in all my classes an "exam objectives sheet" prior to the exam. The sheet lists the objectives students should master; in effect, I am telling them what appears on the test and in what order. That way, there are no surprises or "Gotcha!" moments on test day. Here's a subset of objectives from an AP Stats exam objectives sheet to give you a sense of how these sheets look in my classes:

In my teaching experience, students have at times struggled to dissect the verbal objectives and figure out what potential exam problems might look like. For the first Trigonometry test, I decided to run this process "in reverse."

Here's the exam objectives sheet students received Monday:

[Image: Trig_Exam_Objectives]

On Monday, I had students work in self-selected groups of 3 to 4 for 22 minutes (timer on the board) writing objectives for the problem sets. I modeled on the board how to write the first objective for section 1.1, problems 1-8.

Here's a sample problem from section 1.1:

Find the domain and range of each relation.
#4: { (2,5), (3,5), (4,5), (5,5), (6,5) }
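
For reference, the answer to #4: the domain is {2, 3, 4, 5, 6} and the range is {5}, since the domain collects the first coordinates of the ordered pairs and the range collects the second.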

I talked students through writing an objective for items of this type: "Given a relation, state the domain and range." Several students questioned why I did not start with "given a set of ordered pairs." I told them this choice was purposeful because in practice exercises, students were also given relations in graphical form. We discussed that starting the objective with the phrase "given a relation" captured both types of problems.

Students worked in small groups and wrote objectives for EVERY item. I circulated the room and confirmed EVERY student had written objectives for EVERY item. Then we spent the remaining half of class discussing common misconceptions and errors on the problems: why each error occurs, what the student making it is likely thinking, and why that thinking is erroneous. Here's the data for one of my classes (n = 16):

Before I start jumping for joy, it's probably a good idea to consider my other classes. Here is a comparison of the three sections I have.

The other two sections did not fare so well. The much larger dispersion among scores in the classes labeled B and C concerns me (s = 13.0229 and s = 11.3002). On our grade scale, the median in all three sections is an "A."

Teachers have to be data detectives to diagnose what students do and do not know. The three outliers above held some misconceptions, and their work suggests little to no practice is taking place outside of class. As a practitioner, I also have to think about how effectively I did or did not teach the material. Obviously, something different is going on in the class labeled "Column C."

When I compare the global mean and median across my three sections this year (mean = 92.0351; median = 96) to last year's data (mean = 85.61; median = 87), I am pleased to see a dramatic improvement on the assessment (yes, this is the exact same test I used last year). I will likely take the same approach to preparing students for the second exam and use data analysis to determine if the approach is contributing to the improvement in scores.


Each semester, I stand in awe of how many students do not understand how to calculate the impact a semester final exam has on their semester grade. Our math department's grading policy in each class roughly breaks down as follows:

90% Formative/Summative measures from the semester
10% Cumulative Final Exam

Kids carry many misconceptions about the final exam. A common one is that the final can somehow miraculously overwrite a semester's effort (or lack of effort). Here's a visual representation, to scale, of the 90%/10% model.

Exhibit A: The orange team blows out the red team, 90 to 10.

I have often instructed students to write the following program to help them forecast the impact of the exam on their semester standing.

Exhibit B: Program on the TI-84 for computing a 90%/10% weighted grade.
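
The program itself is just a weighted average. For readers without a TI-84 handy, here is a minimal sketch of the same computation in Python (the function name and example numbers are mine, not from the original program):

def semester_grade(coursework, final_exam):
    """Semester grade under the 90%/10% weighting."""
    return 0.9 * coursework + 0.1 * final_exam

# A hypothetical student: an 80 coursework average and a strong 95 final
print(semester_grade(80, 95))  # 81.5 -- the final moves the grade only 1.5 points

Students are routinely surprised that scoring 15 points above their coursework average on the final buys only 1.5 points overall.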

Here's a numeric example for how this sometimes surprises a student.

Exhibit C: Not much movement of the overall grade, despite a solid final exam score.

This is where the discussion gets interesting. Students will often use trial and error, over and over and over again, with the above program to compute the final exam score they need in order to get an A (which is 90% in our grading scale).

Exhibit D: An A doesn't appear to be in the cards, especially since extra credit does not exist in my class. (I'll share my views on EC another day.) But I digress.
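
Trial and error isn't necessary, though: the 90/10 formula can be solved for the needed final score directly. A sketch, again in Python (the function name is mine):

def needed_final(current, target):
    """Final exam score required to reach `target`,
    solving target = 0.9 * current + 0.1 * final for final."""
    return (target - 0.9 * current) / 0.1

print(needed_final(86, 90))   # 126.0 -- an A is out of reach by arithmetic alone
print(needed_final(96, 100))  # 136.0 -- even a 96 student can't reach 100 numerically

The second line foreshadows the conference offer below: a 100 for the student sitting on a 96 is a professional-judgment call, not something the arithmetic permits.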

This is where I like to invoke Ken O'Connor's philosophy from How to Grade for Learning. This is a terrific book for teachers of all disciplines because it carefully examines the potential dangers of deferring to the electronic or paper gradebook to make all the heavy decisions.

Guideline 6 (page 153): Crunch numbers carefully - if at all.

a. Avoid using the mean; consider using the median or mode and weight components to achieve intent in final grades.

b. Think "body of evidence" and professional judgment - determine, don't just calculate grades.

I have been using the following practice for nearly ten years. I hold a one-on-one conference with the student and talk through what they need to get on the final to earn a particular grade. Since my final exam questions have greater breadth and depth than unit exam questions, the student with an 86 would earn a 90 if they score a 90 or better on the final exam, even though the arithmetic doesn't turn out that way: strictly by the formula, 0.9(86) + 0.1(90) = 86.4.

This policy works wonders with students at the upper end. Sitting on a 96 for the semester, Johnny? What if you had the chance to up your semester grade? What if you could get a 100 for the semester by acing the final exam? All of a sudden, the student has a carrot to chase. The students appreciate this approach because it is a system that rewards effort and hard work. This approach gives the control back to the student.

Otherwise, isn't it possible that neat-and-tidy 96% in the grade book is simply a graveyard of sign errors? Or computation errors? Philosophically, what do I want as a teacher? Do I want a student's grade to reflect their learning? I have worked with thousands of students, yet only a handful of them get most things right on the first attempt. I tell my students every semester that I have yet to meet a person who learned something without making mistakes.

I approach this problem reasonably. Don't get me wrong - if a kid is sitting at a 62% for the semester and gets a 95% on the final, that doesn't necessarily mean the kid deserves a 95%. I would need to reflect on the student's formative and summative measures as a body of evidence to help me make an informed professional judgment. However, if that situation happens - and it hasn't happened to me yet - I would need to re-examine my professional practice. That situation would indicate there is a disconnect between the content mastery a student demonstrates and what the grade says the student knows.

Teachers need to think carefully about how their grading practices capture - or don't capture! - student content mastery. Virtually all measurements are imperfect. The burden of producing evidence to prove or disprove mastery should lie with the student. If that is true, then the burden of judging whether the evidence suggests the student is learning lies with the teacher, not the plastic or silicon genie.

Think carefully on your grading this holiday season. Good luck to everybody - teachers and students alike - with semester finals!

December is a busy time of the school year. Well, every month is a busy time of year, but at the high school level, as the end of the semester approaches, teachers have a lot on their plate. Administrative edicts, standardized testing demands, semester final exams, and kids stressing about their grades consume a good deal of the high school teacher's time. One thing I would like to share about how I approach teaching: always protect time to reflect on your practice.

Becoming a better teacher is a never-ending process. Growth can be accidental or purposeful. One thing I try to do each day, whether during a planning period at school, in a brief few minutes in the morning when students aren't around, after school when I've finished helping students for the day, or even while surfing professional articles on the Web on a school night, is protect a few minutes to think about what I can do to become a better teacher.

One thing I like to do when I visit teachers I admire is to take a picture of their bookshelf. As Peter Knox says, "Sharing your shelf is sharing yourself." In fact, here's a photo of one of my bookshelves at school.

[Image: Aaberg_Bookshelf]

The top shelf is an annotated version of my professional library. A great way to steal the knowledge of great teachers - a really quick and efficient way - is to take photos of their bookshelves. What books do they have that you don't? Which books are most important to them? I have been spending some of my recent reflection time looking at books from my undergraduate teacher education and my Master's work.

I used my time today to revisit a phenomenal book from my undergraduate work years ago: The Skillful Teacher: Building Your Teaching Skills. I would recommend it to any teacher at any level. I feel like I entered a cheat code as a first-year teacher by reading this book cover to cover during undergrad, not just the assigned parts.

Below is an image of one of the many helpful research-based practices and insights from the book. Think carefully about the table below. What do you believe? What do your students believe? What do your teacher colleagues believe? What do your administrators believe?

[Image: Beliefs_About_Risk_Taking_and_Learning]

If that's fuzzy or tough to read, a PDF of the table is below.

[PDF: Skillful_Teacher_page_369]

Think about our school system and what gets rewarded. Ken Robinson points to this feature of schooling - the fact we are educated in a way that penalizes and stigmatizes mistakes - as educating students out of their creativity. The text excerpt above would point to this aversion to making mistakes as an unwillingness to take risks in the classroom.

This risk-taking dimension of climate has to do with the amount of confidence a student has and the amount of social and academic risk taking the student will do. If it is well developed, a student might be able to say, 'It's safe to take a risk here. If I try hard, learn from errors, and persist, I can succeed.'

I am constantly searching for ways to empower my students to be better risk-takers in the mathematics classroom. If students adopt the healthy attributions from the left-hand side of the chart above, we will see increases in tenacity, perseverance, and patient problem solving. How do we establish a classroom climate to empower our students to be better risk takers? And to respond with greater effort after making mistakes?


Probability and counting questions about rolling two dice appear frequently at math contests. For example, one might ask, "What is the most common sum when two dice are rolled?" The grid below helps the student see the most common sum.

Exhibit A: The sums of two dice. (Ignore the pink stuff; 7 is the most common sum).

Suppose we construct the probability distribution for rolling two dice.

Exhibit B: Probability simulation for rolling two dice. 7 is the most likely sum.
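
Simulation aside, the two-dice case is small enough to enumerate exactly. A quick Python sketch (mine, not part of the original exhibit):

from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes for two fair six-sided dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in sorted(counts):
    print(total, counts[total], counts[total] / 36)
# The sum 7 occurs in 6 of the 36 outcomes, so P(7) = 1/6 -- the most likely sum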

One of my colleagues has 11 dice in her classroom. This made me wonder what would be the most likely sum for 11 dice. How would we go about answering this question?

Problems like this one show students the value of breaking a larger problem into smaller problems. Let's construct a table of the minimum and maximum sums for rolling n dice and see if we can find some patterns. Our aim is to generalize our findings to n dice.


Exhibit C: A preliminary table for rolling n dice.

How would you support students in working towards the generalization? For example, modeling two dice with a table is fairly simple, as shown above. What about modeling three dice? Sure, you could go three dimensional... but what about four dice? Five dice? This problem poses some great modeling questions.
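
One way to support that work (and to check answers to the 11-dice question) is to build the distribution one die at a time, which is exactly the break-it-into-smaller-problems idea. A Python sketch of that approach; the function name is mine:

from collections import Counter

def dice_sum_counts(n):
    """Ways to obtain each possible sum when rolling n fair six-sided dice,
    built up by folding in one die at a time."""
    counts = Counter({0: 1})  # zero dice: one way to have a sum of 0
    for _ in range(n):
        nxt = Counter()
        for total, ways in counts.items():
            for face in range(1, 7):
                nxt[total + face] += ways
        counts = nxt
    return counts

counts = dice_sum_counts(11)
best = max(counts.values())
print(sorted(s for s, c in counts.items() if c == best))  # prints [38, 39]

The distribution for n dice is symmetric about 3.5n; for 11 dice that center is 38.5, so the two sums straddling it, 38 and 39, tie as the most likely.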

Addition to original post:

[Image: Dice_Problem_Wolfram_Alpha]

Exhibit D: Polynomial approach to obtaining dice counts for the case where n = 2.
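
For readers who can't see the image: the polynomial approach encodes a single die as x + x^2 + x^3 + x^4 + x^5 + x^6 and raises it to the nth power; the coefficient of x^s then counts the ways to roll a sum of s. For n = 2:

(x + x^2 + x^3 + x^4 + x^5 + x^6)^2 = x^2 + 2x^3 + 3x^4 + 4x^5 + 5x^6 + 6x^7 + 5x^8 + 4x^9 + 3x^10 + 2x^11 + x^12

The coefficients 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1 match the grid in Exhibit A, with 6 of the 36 outcomes giving the most common sum of 7.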