Teacher Think-Alouds

This post is a quick hitter about the way I have approached grading tests for several years. I purposefully try not to stretch the point total to 100 or some other tidy multiple of 10 or 20. Instead, I assign points to each problem based on sub-steps. I use my trusty TI-84 to compute values quickly for grade book entry.

Suppose a test turns out to be worth 68 points. Here's what I do.

The percentage grade is y = x/68, the linear function with slope 1/68 and y-intercept 0. One approach is to construct a table of values like the one below.

But trouble occurs when grades vary wildly. Say some students are getting poor scores, like a 41 out of 68. Then we have to either reset the initial table value or, even slower, scroll up.

Instead, I set the viewing window and make a graph.

Just ignore the delta-x and TraceStep at the bottom of the screen. I set the window to have an Xmin of 0 and an Xmax of 68 (because these x-values span all possible raw scores), and then set the Xscl to 1. Setting Xscl to 1 places a tick mark at each integer on the x-axis, but for our purposes this is really a personal preference, not a requirement.

Setting Ymin = 0 and Ymax = 1 with Yscl = .1 lets the y-axis display the student's score as a decimal. I could obviously do a little extra work by graphing Y = (X/68)*100 and making window adjustments accordingly, but for the sake of speed, I just mentally convert the decimal to a percent.

By pressing the TRACE key, I can now manually enter any score and see the decimal corresponding to the student's percentage instantly. For example, suppose a student answered 44 items of 68 correctly.
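For readers who would rather script the same lookup than trace it, here is a minimal Python sketch of the y = x/68 relationship; the function name and the one-decimal rounding are my own choices, not part of the TI-84 workflow.

```python
# Percentage lookup for a test worth 68 raw points: the same
# y = x/68 relationship traced on the TI-84 above.
TOTAL_POINTS = 68

def percent(raw_score, total=TOTAL_POINTS):
    """Return a raw score as a percentage, rounded to one decimal place."""
    return round(100 * raw_score / total, 1)

print(percent(44))  # 64.7, the 44-of-68 student described above
```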

 

I know there are other ways to do this... but this way works well for me. This is another of the many things students in math teacher ed programs may not see or talk about before their student teaching experience or, in some cases, at all before entering the classroom.

 

The following problem is what I use each year when I introduce the notion of z-scores and converting position values on a Normal distribution with mean μ and standard deviation σ [ N(μ, σ) ] to scores on the standard Normal distribution with mean 0 and standard deviation 1 [ N(0, 1) ].

We use YMS The Practice of Statistics, 3rd edition, in the AP Stats class I teach. Standardized scores make their first appearance in Chapter 2. We cover this content in early September, a time when many college-bound students are busy filling out college applications, preparing resumes, and requesting letters of reference. Since our school is in the Midwest, virtually all students are familiar with the ACT. Few know about the SAT; in particular, few know the maximum possible score on sections of the SAT. This activity also leads to a nice thought experiment, in which the students must put themselves in the shoes of scholarship committee members making decisions that affect students' lives.

Here's what the example above looks like worked out on the Promethean board:

This decision is pretty simple to make. The student with an ACT math score of 33 has a relative performance far more impressive than the student with the SAT math score of 705. I start with this example because the values are fairly clean and the decision is easy to make.
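If you want to recreate the comparison off-calculator, here is a small sketch. The means and standard deviations below are placeholder values I made up for illustration; they are not the parameters from the actual classroom example.

```python
def z_score(x, mean, sd):
    """Standardized score: how many standard deviations x sits from the mean."""
    return (x - mean) / sd

# Placeholder parameters, assumed for illustration only:
act_z = z_score(33, mean=21.0, sd=5.3)      # hypothetical ACT math distribution
sat_z = z_score(705, mean=516.0, sd=116.0)  # hypothetical SAT math distribution

print(f"ACT candidate: z = {act_z:.3f}")    # about 2.264
print(f"SAT candidate: z = {sat_z:.3f}")    # about 1.629
```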

But what happens when the computations reveal values that do not yield an 'easy' decision? Here's the example I use to immediately follow the ACT vs SAT issue.

I like this example because it requires the students to reflect on the choice they will make about units of length. Should we convert the feet & inches measurements to decimal feet? Or to inches? Many students choose to use inches. I show students the problem and put three minutes on a countdown timer.

Then I circulate the room as students work through the problem. I listen carefully to the discussion, to the arguments about which student would be the better scholarship candidate. I randomly select a student to go to the front of the room to show the work they did and to explain their thinking. An example of some student work is below.

Listening to the students argue about this decision is fascinating. Some will insist that because the female has a z-score that is a whole unit higher (5.11 versus 4.109), the female deserves the scholarship.

Others will argue that because the normalcdf command on the TI-84 yields the same value to four decimal places, it does not matter which candidate we choose; both are equally good. (The four-decimal claim comes from our default rounding convention: when not otherwise specified, we round to the nearest ten-thousandth.)
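A quick way to verify the students' four-decimal claim without a TI-84 is to evaluate the standard Normal CDF directly; this sketch uses the error function from Python's math module.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Area under the standard Normal curve to the left of z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# The two z-scores from the scholarship debate:
for z in (4.109, 5.11):
    print(f"z = {z}: area = {normal_cdf(z):.4f}")
# Both round to 1.0000 at four decimal places, which is exactly
# the students' "the calculator says they're equal" argument.
```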

Another school of thought amongst the students is that because the procedure does not yield a clear result, further analysis is needed: academic performance, financial need, or an assessment of each athlete's moral character. These considerations are student-centric.

I challenge students to think from the perspective of the athletic team or the institution. Perhaps the conference is loaded with strong female athletes, so we need a strong female athlete to be competitive. Perhaps we need to choose the athlete whose family could provide more financial support in the event we have to split the scholarship value later. Many of these institution-centric considerations do not occur to the students naturally.

Using high quality problems like this one provides another hidden instructional benefit. I always have a conceptual hook on which to hang the process of standardizing scores. If I ask "Do you remember the process for standardizing scores on the Normal curve?" and get little to no positive responses, I can always quickly follow with, "Think back to the scholarship problem, where you had to compare two different candidates to see which candidate was better." This cuts down on the time I have to spend reteaching and allows us to be more efficient during class time.

As I look to summer when I have additional time to better my practice, this is one of the first problems I will look to film when I test the waters of 'flipping' the classroom.

Our calculus class on Tuesday was investigating derivatives of functions involving logarithms, particularly functions including natural logs. To begin our notes set, I worked a problem from algebra to help students remember how we use inverse operations to solve equations. I took direction from the students and solved the problem below in the following way.

I knew what would happen in advance - that students would simply go through the motions and apply algebraic properties to isolate x and move on. On a yearly basis, this is one of a set of problems I like to use to demonstrate to students the value of multiple representations. I then showed them the graph of the equation and how we could solve for the intersection of the logarithmic function and the constant function y = 12.

We had an interesting discussion about the viewing window. Most students jumped all over "ZOOM-6", or Zoom --> Standard, the viewing window with x-axis and y-axis spanning the values -10 to 10 with units (tick marks) of 1.

Hmmmmm.... where's the 12? It's off the screen, of course!

After challenging their solution with the graphical argument, we decided to revisit our algebraic approach.

This was a powerful way to convince the students that the exponent 2 acting on the quantity x - 2 has the effect of making the result positive, and it drove home the idea that we must consider the definition of absolute value when dealing with squares and square roots.

We also had a good discussion about the original expression and whether the second power was acting on the quantity (x - 2), as in ln[ (x-2)*(x-2) ], or whether we should instead consider ln[x-2]*ln[x-2].
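The post never shows the equation explicitly, so as a hedge, here is a quick check assuming the problem on the board was ln((x - 2)^2) = 12; it also explains why the intersection was nowhere near the ZOOM-Standard window.

```python
from math import exp, log

# Assuming the equation was ln((x - 2)**2) = 12:
# (x - 2)**2 = e**12, so |x - 2| = e**6 and x = 2 +/- e**6.
roots = [2 + exp(6), 2 - exp(6)]

for x in roots:
    print(f"x = {x:9.4f}   ln((x-2)^2) = {log((x - 2) ** 2):.4f}")
# Both checks print 12.0000, and with x near 405 and -401, no wonder
# the intersection is invisible in a -10..10 viewing window.
```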

Students often believe the plastic genie - the graphing calculator - knows all. I really enjoy the days in math class where I can show examples that disconfirm this student perception.

This semester I decided to try something different with our approach to the first unit exam in our Trigonometry class. Typically, I give students in all my classes an "exam objectives sheet" prior to the exam. The sheet has a collection of objectives the students should master. Basically, I am telling them what appears on the test and in what order. That way, there are no surprises or "Gotcha!" moments on test day. Here's a subset of objectives from an AP Stats exam objectives sheet to give you a sense of how these sheets look in my classes:

In my teaching experience, students have struggled at times with being able to dissect the verbal instructions and figure out what potential exam problems might look like. I decided for the first Trigonometry test to run this process "in reverse."

Here's the exam objectives sheet students received Monday:

Trig_Exam_Objectives


On Monday, I had students work in self-selected small groups of 3 to 4 students for 22 minutes (timer on the board) on writing the objectives from the problem sets. I modeled on the board how to write the first objective for section 1.1 problems 1-8.

Here's a sample problem from section 1.1:

Find the domain and range of each relation.
#4: { (2,5), (3,5), (4,5), (5,5), (6,5) }

I talked students through writing an objective for items of this type: "Given a relation, state the domain and range." Several students questioned why I did not start with "given a set of ordered pairs." I told them this choice was purposeful because in practice exercises, students were also given relations in graphical form. We discussed that starting the objective with the phrase "given a relation" captured both types of problems.

Students worked in small groups and wrote objectives for EVERY item. I circulated the room and confirmed EVERY student had written objectives for EVERY item. Then, we spent the remaining half of class discussing common misconceptions and errors on problems, addressing why each error occurred, what the student making the error would be thinking, and why the thinking was erroneous. Here's the data for one of my classes (n = 16):

Before I start jumping for joy, it's probably a good idea to consider my other classes. Here is a comparison of the three sections I have.

The other two sections did not fare so well. The much larger dispersion among scores in the classes labeled B and C concerns me (s = 13.0229 and s = 11.3002). On our grade scale, the median in all three sections is an "A."

Teachers have to be data detectives to diagnose what students do or do not know. The three outliers above had some misconceptions; the evidence suggests little to no work outside class is taking place. As a practitioner, I also have to think about how effective I was, or was not, in teaching the material. Obviously something different is going on in the class labeled "Column C."

When I compare the global mean and median across my three sections this year (mean = 92.0351; median = 96) to last year's data (mean = 85.61; median = 87), I am pleased to see a dramatic improvement on the assessment (yes, this is the exact same test I used last year). I will likely take the same approach to preparing students for the second exam and use data analysis to determine if the approach is contributing to the improvement in scores.
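For teachers who would rather script these section summaries than punch them into a calculator, here is a minimal sketch using Python's statistics module; the scores below are made up for illustration, not my students' data.

```python
import statistics

# Hypothetical section scores, for illustration only.
section = [98, 96, 95, 92, 90, 88, 85, 62, 60, 58]

print("mean   =", round(statistics.mean(section), 4))
print("median =", statistics.median(section))
print("s      =", round(statistics.stdev(section), 4))  # sample standard deviation
```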

I have a confession to make. When I first started teaching AP Statistics in 2005, I had no idea why a Normal probability plot (an example is shown to the left) was important... or what it told us about data. I was busy trying to stay a day ahead of students that first year. I never really sat down with several textbooks to compare definitions and examples as I probably should have. Simply put, when students asked, I gave them the canned answer: "The more linear the plot is, the more Normal the data is." We'd use the calculator to make the plot, look at it, and move on.

Let's take a closer look at why we study a Normal probability plot in AP Statistics. I will do some borrowing from various discussion board posts of the past on the AP Stats forum and will add some commentary as we go.

First, consider the method we use to compute a z-score; that is, a positional score for Normally distributed data that indicates the number of standard deviation units above or below the mean a particular data point lives. For example, if z = -1.2, then the data point is 1.2 standard deviations below the mean. It makes sense that a standardized score [ z = (x-μ)/σ] depends on two things: the data value's physical distance from the mean *and* the distance tempered by a measure of spread, specifically the standard deviation. Let's isolate x in this equation to see what happens.

z = (x - μ)/σ

zσ = x - μ

x = μ + zσ

The algebra above is commonly used in problems where we are asked to find a score which corresponds to a particular percentile rank. For example, if the mean score of the ACT is 18, and the standard deviation is 6, then what composite score puts a student in the 70th percentile of all test takers that day? A score slightly north of 21, as shown below.

z = invNorm(.70) ≈ .5244005101

x = μ + zσ = 18 + (.5244005101)(6) ≈ 21.1464

The InvNorm command above finds the z-score corresponding to a cumulative area of .70 under the standard Normal curve, which has mean 0 and standard deviation 1. We see a z-score of .5244005101, according to the TI-84, gives the position for a data point in the 70th percentile. We can then reverse engineer the score needed to fall into this percentile.
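Python's standard library can reproduce the InvNorm step; here is a minimal sketch (NormalDist requires Python 3.8+).

```python
from statistics import NormalDist

# invNorm(.70) on the TI-84: the z with 70% of the area to its left.
z = NormalDist(mu=0, sigma=1).inv_cdf(0.70)

# Reverse engineer the raw score: x = mu + z*sigma, with mu = 18, sigma = 6.
score = 18 + z * 6

print(f"z     = {z:.10f}")     # about 0.5244
print(f"score = {score:.4f}")  # about 21.1464
```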

In the world outside school, we rarely know the actual value of σ, the population standard deviation, or μ, the actual population mean. As unbiased estimators of these unknown values, we use x̄, the sample mean, in place of μ, and we use s, the sample standard deviation, in place of σ. Then the value of x looks like x = x̄ + z·s. Technically, once we make the substitutions, we would really be using a t-distribution of some flavor to model the data. On the other hand, in the example below, since we can get data on every qualified point guard in the NBA as of right now, we can directly compute the mean and standard deviation for the entire population, making this substitution unnecessary in this case. However, students need to be aware of the need for t-procedures.

To show an example of a Normal probability plot, I pulled NBA data from ESPN regarding point guard performance thus far in the 2013-14 regular season. Let's take a look at the top 26 (since there's a tie for 25th place) point guards in the NBA with respect to average points scored per game, the gray column labeled "PTS."

 

Let's enter the data from the table above in the TI-84.

 

Next, let's construct the Normal probability plot for the data.
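If you would like to see what the calculator is doing under the hood, here is a sketch that builds the same plot coordinates by hand; the points-per-game list is a placeholder standing in for the actual 26-player table.

```python
from statistics import NormalDist

# Placeholder data standing in for the 26 point guards' points per game.
points_per_game = [28.5, 24.1, 21.3, 18.6, 17.9, 17.2, 16.8, 16.1, 15.4,
                   14.7, 14.2, 13.8, 13.1, 12.6, 12.0, 11.5, 11.2, 10.8,
                   10.3, 9.9, 9.6, 9.4, 9.3, 9.3, 9.3, 9.3]

data = sorted(points_per_game)
n = len(data)
for i, x in enumerate(data, start=1):
    pct = (i - 0.5) / n            # plotting position for the i-th point
    z = NormalDist().inv_cdf(pct)  # z-score the point "should" have if Normal
    print(f"x = {x:5.1f}  ->  z = {z:+.4f}")
# Perfectly Normal data would make these (x, z) pairs fall on a straight line.
```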


So... what exactly does this plot represent? And what makes this representation so important? The x-values obviously correspond to the average points per game value for each point guard. What about the y-coordinate of each point on the graph? The y-coordinate corresponds to the z-score related to that particular x-value. In the screen shot above, Kemba Walker, the point guard with 18.6 points per game, has a z-score of approximately .7039. If the data followed exactly a Normal curve, then all the points on the above graph would lie exactly on a straight line. By looking at the z-score for each data point using this display, we can get a quick insight into whether the data are Normally distributed. Let's look at a boxplot for the same data:


We can see, in the plot above, the data for these 26 point guards have no outliers, but there appears to be some skewness. Computing (Max - Q3) = 4.4 and (Q1 - Min) = 10.8 - 9.3 = 1.5, and noting 4.4 > 1.5, demonstrates the skewness numerically: the upper tail is much longer than the lower tail. This numeric argument doesn't take a lot of calculator kung fu, but we do have to perform an extra computation or two. Looking back at the Normal probability plot, we could use the image to immediately notice the skewness of the data. Suppose we graphed the original z-score equation [z = (x-μ)/σ] on the same graph as the Normal probability plot. In other words, we will overlay on the Normal probability plot the line the points would follow if the data were perfectly Normal. Take a look!

 

We only used 26 data points, so the data is a sample of the population of NBA point guards. Again, if the data were perfectly Normal, all the blue points would be living directly on the red line. We can use our knowledge of linear equations to see clearly what's going on here.

So the red line representing the 'perfectly' Normal data has slope 1/4.271785124, one over the standard deviation of the data. Let's find an equivalent value that's slightly more user friendly:

1/4.271785124 ≈ 0.2340941716

If we express the slope this way, notice we can say for every additional unit increase in x, the average points scored per game, we expect to see a z-score increase of .2340941716. Much like when we consider residuals while doing linear regression, when points deviate noticeably from the expected red line, they are surprising from the "Normal curve's point of view." The curvature at the left end of the Normal probability plot immediately signals the skewness of the data. You can find more examples on your favorite search engine by searching for "Normal probability plot skewness." Once we can recognize this pattern visually, we can spot skewness at a glance using a Normal probability plot.

This connection between the Normal distribution and why its z-scores are linear has a pretty good explanation on the Wikipedia entry for "Standard score."


Implicit differentiation really helps my students understand why we can't just arbitrarily slap a prime on a function when we differentiate, that the variable we differentiate with respect to matters. Discussing the differences between y' and dy/dx, or P' and dP/dt helps facilitate later topics too, like related rates and optimization.

We had our second calculus class of the semester today. We spent part of the time going over results from our fall semester final exam. We also worked on a free response question from an old AP exam, 2000 AB #5, which mirrored another old AP question I have on our fall final. The question we worked today is below.

We had some good conversation about what it means to show the expression given in part a for dy/dx corresponds to the given implicit curve. After doing the implicit differentiation, we ended up with this:

I argued with the students that our job wasn't quite done, because the expression to the right of the equals sign did not match exactly the original expression given in part a. I then wrote

y' = (3x^2y - y^2) / (2xy - x^3)

and declared part a "done." A student challenged me, saying the y' did not match the left-hand side of the equals sign of the original in part a (dy/dx). Great observation. I told the students if we are being precise, we should go a step further and demonstrate we know y' and dy/dx are equivalent by writing dy/dx on the left. While this might seem like splitting hairs, I want my students to know attention to detail matters.

Part b came and went without much trouble. We found the two points on the curve whose x-coordinate is 1 and constructed the tangent line equations.

Then came part c. As the students were working part c, I went to Wolfram Alpha and created a graph of the implicit curve.

Wolfram_Implicit_Graph


I wasn't too concerned with the restrictions on the scale or viewing window. As we worked through part c together, the students understood why we needed to set the denominator, 2xy - x^3, equal to zero, because this is what produces an undefined result for dy/dx.

The algebra we did looked like this:

The reason x = 0 is circled is that I forgot to verify at the end of the problem that x = 0 is not a valid answer. If we try to directly substitute x = 0 into xy^2 - x^3y = 6, we get 0 = 6, which is nonsense. The substitution confirms x cannot be zero on the graph, despite the fact we must infer the asymptote from the graph.
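As a sanity check on the algebra, here is a sketch using sympy; this is my verification, not the in-class work.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x*y**2 - x**3*y - 6        # the curve xy^2 - x^3y = 6, written as F = 0

dydx = sp.idiff(F, y, x)       # implicit derivative dy/dx
print("dy/dx =", sp.simplify(dydx))

# Vertical tangents occur where the denominator of dy/dx vanishes:
denominator = sp.denom(sp.together(dydx))
print("denominator = 0 when y =", sp.solve(sp.Eq(denominator, 0), y))

# Substituting y = x**2/2 into the curve gives -x**5/4 = 6, i.e. -x^5 = 24:
sols = [s for s in sp.solve(F.subs(y, x**2/2), x) if s.is_real]
print("x at the vertical tangent:", sols, "=", [sp.N(s, 6) for s in sols])
```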

Here's another screenshot of the last slide I wrote during class:

I wrote -x^5 = 24 and then asked how to isolate the x. I then wrote the solution (in black) above. The students were having trouble reconciling that the black value is equivalent to the red value. I referenced even and odd roots as the reason why I could pull the negative out of the radical (going from red to black).

HERE'S WHAT I WISH I HAD WRITTEN ON THE BOARD DURING CLASS...

I could sense the unease in the room around the solution, but I couldn't quite put my finger on it. After reflecting, I think I know why the kids seemed to disengage a bit. I have a lot of experience with negatives and roots and so on, so the simplification was not a stretch for me. But the kids needed to see the work above, to reference the algebraic rules they know, to understand why it is permissible to move the negative out in front. I struggled to diagnose the students' need for this explanation during class. In hindsight, that need explains why the students seemed to disengage and why my spider sense started tingling.

This post is really a reference for myself later, to remind me to think carefully on anticipated errors and what I can do to help students reconcile quantities that are numerically equivalent but not obviously numerically equivalent.

The break gave me a fair amount of time to reflect on teaching and my philosophy of education. I was scrolling through my Twitter feed and came across this tweet from the National Council on Teacher Quality (NCTQ):

While reading this tweet, I was struck by an interesting memory. In my first year of teaching in Omaha, back in 2004, the school I taught at had just implemented 1-to-1 computing with Macbooks. I had a student that year from China. After class ended one day, she and I talked about how interesting it was to watch how students were using the Macbooks in positive ways but also in negative ways. I asked the student what she thought about the computers. She offered a brilliant insight:

I think the computers are making the students impatient. It used to take a lot of time to look something up. All the students want the answer right now.

That conversation has stuck with me. I see this impatience in my classroom and other classrooms on a regular basis, whether it's students moaning and groaning when a problem takes more than thirty seconds or the students' body language revealing frustration or boredom.

Two years later, I was teaching at a different school, a school that has not yet gone to 1-to-1 computing, even in 2014. Even without 1-to-1 computing, the ubiquity of smartphones and tablets has had a profound impact on our young people. Personally, I wonder how different my school would have been with the temptations of Twitter, Facebook, and SnapChat lurking in the background. I am not saying technology is evil; I am saying educators need to be mindful of the impact technology has on students' physiology and psychology.

Our culture in the United States is the embodiment of instant gratification. This has dire consequences for math teachers attempting to help students learn patient problem solving. Joachim de Posada discusses the predictive impact of studies on delayed gratification. Intuitively, it makes sense the students willing to delay gratification - those that push through and past the point of frustration - tend to be the successful students in school.

 

Back to the earlier question: are students evolving or changing? Technology provides the context for our students to literally stand on the shoulders of giants and answer more interesting questions that can profoundly impact the modern world. But with great power comes great responsibility. In nature, we sometimes see evolutionary dead ends. Think duck-billed platypus. We want our students to be critical thinkers. To be problem solvers. And while it is convenient to rely on Google's search algorithms to find an answer quickly, we need our students to be able to analyze, synthesize, and evaluate the information they encounter in the world. It's not a stretch of the imagination to think certain technological behaviors we allow, like clicking on the first link on a Google search or citing only Wikipedia sources, are the evolutionary equivalent of a duck-billed platypus. I wonder what my students and their children will be doing in the 22nd century and how different the world will be. I want to empower my students with robust strategies to prepare them for this world that does not yet exist. What can I do as a math teacher to maximize these students' potential? Because barring medical miracles, I am undoubtedly preparing my students for a world in a new century I will not likely see. So what should be my function, my purpose for teaching students mathematics?

Awaken raw curiosity. Provide the context and vocabulary to describe the universe and everything in it. It starts with asking interesting questions and finding technological resources to address these questions. Michael Stevens is doing a phenomenal job leveraging curiosity to create teachable moments at Vsauce.

The spirit of the STEM movement is the interconnectedness between disciplines. Why we limit this 'interconnectedness' to only four disciplines causes me to scratch my head a bit. Mr. Stevens' video above references figures that cannot be measured directly with current measurement tools... but they can certainly be calculated. We can even compute how aesthetically pleasing an object is or isn't. What should the role of the math teacher be, then? What does this have to do with our students, and whether they are evolving or changing?

I think my best answer right now is to provide a balanced approach between traditional mathematics and problems from the world outside school. And for those items from the world outside school that students may not have the mathematical horsepower to address, there is now a phenomenal resource covering both traditional topics and modeling challenges: the Wolfram Demonstrations website. Math teachers may experience some nervous excitement there. I encourage you to check out the resources available on Wolfram Demonstrations.

Regardless what educational approaches we utilize, the world will continue to move forward. Students will continue to move on through the sorting algorithm that is school and the world will continue to realize the potential of students from many different school systems. What do you think? Are students evolving or changing? What can we do differently as teachers to prepare students for the future?

While lesson planning for calculus class this morning, I was thinking about common mistakes students make during simplification. Teaching students how to combine like terms can be a challenge. I'd like to share how I approach teaching combining like terms and factoring. Consider the Algebra 1 exercise below.

Textbook instructions:
"Simplify the following expressions. Use the Distributive Property if needed."

Problem:
7x + 6
How often do we see struggling students claim the above binomial is equivalent to 13x? We can quickly diagnose the misconception the student has... the student saw the addition symbol and combined the integers 7 and 6 as they did in their youth. The not-so-helpful hint "use the Distributive Property if needed" may distract the student from what they are being asked to do. I would pose the exercise in a different way.

Modified instructions:
"Simplify."

Problem:
7x + 6
Note: Some may argue writing "simplify if possible" would be better since we may encounter some problems like this one where there really isn't any 'work' to be done. Instead of writing "not possible," I would rather see my students recognize the quantities 7x and 6 are relatively prime (with GCF of 1) because the value of x is unknown. To facilitate the teaching of factoring later in algebra, the student must also recognize when to "stop." Students will often ask, "How do I know when I can't factor it any further?" I like to revisit the definitions of prime and composite to address this notion of knowing when to stop.

I have my students write the following:

Physical models like algebra tiles are one way to model the reason why we cannot combine 7x and 6. Let's look at another approach to why we should not combine 7x and 6. Here's a contrived exchange between a student and a teacher that shows how I approach this verbally in my classroom.

Teacher: What is the exponent on the x? <points to the x in 7x>

Student: Zero!

Teacher: If the exponent were a zero, we know any nonzero number to the zero power is 1. Then we would be multiplying the 7 by a 1, and we know multiplying any number by 1 does not affect the number's value. Also, we use zero to represent 'none.' So, if the exponent were a zero, then there would be 'no' x.

Student: Then the power must be a 1.

Teacher: That makes sense. What is the power of x on the 6? <points to 6>

Student: There's no x there!

Teacher: Would it be legal to draw a ghost x^0 to mean there is no x there?

Student: Okay.

Teacher: On page 89 of our algebra text, the author states "3x and 5x are like terms because they contain the same form of the variable x." The author then says 3x + 5x = 8x. What do you think the author means by 'same form'?

Student: The powers of x match in both.

Teacher: If that's true, then is it possible to combine 7x and 6 using addition or subtraction?

Student: No.

Then the curtains fall. The teacher and students move on to another problem. But let's take a second look. We still need a convincing argument to demonstrate the expressions 7x and 6 cannot be combined. Let's look at some specific cases.

Suppose x = 1. Then the student is correct, since 7(1) + 6 = 13. But this substitution does not allow x to vary freely. Another way we can convince the student we cannot combine 7x and 6 would be to use a graph.

The fact that the red line y = 13 and the blue line y = 7x + 6 cross exactly once is pretty convincing evidence the two quantities 13 and 7x + 6 are not the same for any x value other than 1, the x-coordinate of the intersection point on the graph. We can counsel the student that y = 13 is constant while y = 7x + 6, the line with slope 7 and y-intercept 6, depends on x.
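A quick numeric table makes the same point; here is a tiny sketch a student could run.

```python
# The student's claim is that 7x + 6 equals 13x; tabulating both shows
# they agree only at x = 1.
for x in range(-3, 5):
    print(f"x = {x:2d}:  7x + 6 = {7*x + 6:4d}   13x = {13*x:4d}")
```

Now let's consider another special case.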

Suppose x = 6k for some integer k. We can look at some specific cases to help our thinking about the consequences of this selection for x.

We can now discuss the earlier statement, "Use the Distributive Property if needed." If we consider the case where x is an integer multiple of 6, then we have

7x + 6 = 7(6k) + 6 = 42k + 6 = 6(7k + 1)

7k and 1 are not like terms, just as 7x and 6 are not like terms; 7k + 1 only equals 8 when k = 1. Why even think this way? Because we can tie the original problem back to factoring. I tell my students factoring is like playing tennis. Metaphorically speaking, we know a volley is over in tennis based on what the ball does. We know we are 'done' factoring when we have an expression made of prime factors. I tell my students factoring is essentially a two-step process:

Step 1. Greatest Common Factor (GCF)

Step 2. Depends on terms that remain. Applying a possible strategy may involve the whole collection of terms or a subset of the terms in the problem.

We volley back and forth - do both steps - until we obtain expressions that are prime or constants that can be written as the product of primes. For instance, in the problem 6x^2 - 96 = 6(x^2 - 16), we could rewrite 6 as the product of 2 and 3... but the binomial can be rewritten as the product of the prime linear factors x + 4 and x - 4.

Why do we consider x + 4 prime? Since the value of x varies, if we only consider integer values of x, then x + 4 may give a composite number or a prime number. Since we do not know the value of x, we think of x + 4 more conservatively as prime.

It does not take much to convince students that all numbers are divisible by 1. I insist my students look for a GCF every time, even in trivial problems, and identify the GCF as 1, or more specifically, 1x^0. I try to emphasize that when we factor expressions with multiple terms, we look to factor out the "lowest" power of the variable.

If we try to factor the original problem using this approach, we have

7x + 6 = 1x^0(7x + 6)

since GCF(7, 6) = 1 and GCF(x^1, x^0) = x^0 = 1. This approach is particularly useful for later work in calculus, specifically when taking derivatives of expressions. My calculus students sometimes struggle with reconciling their solutions with the answer the text provides because they may not totally understand the factoring necessary for simplification. Students stumble when asked to find the GCF of such an expression. If students are accustomed to identifying the GCF, then problems of this type aren't as troublesome.
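For readers who like a computer-algebra check of the "GCF first" volley, here is a sketch with sympy; the second polynomial is my own example, not one from our text.

```python
import sympy as sp

x = sp.symbols('x')

print(sp.gcd(7*x, sp.Integer(6)))  # 1: the terms 7x and 6 share no common factor
print(sp.factor(7*x + 6))          # 7*x + 6, already prime
print(sp.factor(6*x**2 - 96))      # 6*(x - 4)*(x + 4): GCF 6, then the binomial
```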

My self-imposed holiday blogging break is over. <cracks knuckles>

The holiday break affords teachers the opportunity to reflect. One dimension I think many overlook in teacher reflection is to sit and think deeply about one's content area. I would even be bold enough to say the best math teachers are mathematicians at heart. When I sit down to think about mathematics, as a high school math teacher, I like to approach it like a person that can solve a Rubik's Cube. When you watch this feature below on a speed solver, Chester Liam makes the following claim about speed solving a Rubik's Cube:

As a speed solver? No, there is no math involved, no thinking involved. It's just finger dexterity and pattern recognition. There is nothing, no thinking involved in the entire solving process. [1:02 - 1:14]

We can unpackage Chester's thoughts about the Rubik's Cube and how he solves it so quickly by applying mathematical structure to the cube. However, we should ask: what is the objective? What are we trying to do? There's gobs and gobs of mathematics wrapped up in speed solving a Rubik's Cube, but if Chester were to pay attention to the procedures he is applying, it would inhibit his ability to solve the cube quickly.

But consider this: what if Chester attempts to teach how to solve the cube to another person? What would he have to do? What examples, explanations, and demonstrations would he utilize to teach his pupil speed solving? What implications does this thinking have on teaching mathematics? There would be many features of speed solving the learner may not perceive until Chester brings it to the learner's attention. And if Chester's choices are calculated, deliberate, purposeful... the learner may be helped or hindered dependent upon Chester's ability to communicate his thinking which leads to the automaticity of the procedures he applies to solve the cube. Just like learning how to read or learning how to drive a car, we want to teach learners how to do these tasks so well they become 'automated' at some level.  To understand mathematics deeply, I believe it is often necessary to unpackage some of these automated tasks. I will share an example of such a mental exercise below.

Today I've been thinking about procedures we accept as true while doing math at the high school level, algebra in particular. As an example, suppose we want to determine the location of the x-intercept for the line 5x - 3y = 7. We might approach this 'task' in the following way:

And we might even graph the line to confirm our solution...

Yep. There it is. The x-intercept at (1.4, 0). As a student, we might simply yawn and move on to the next exercise. The student must recognize the y-coordinate of the line will be zero when the line crosses the x-axis. Yes, we have a solution, but I'm not so sure as a math teacher I'm satisfied to stop there. What if we take a different approach? Let's turn the Rubik's Cube and look at the problem another way. A 'typical' algebra student might subscribe to the church of y = mx + b and do the following...

Just another equally valid path to the value of the x-intercept. I'd like to focus on one line of the work above:

(5/3)x = 7/3

This statement says five-thirds of some mystery number is seven-thirds. So, what's the mystery number? I think if we asked a room full of high school math teachers to draw a diagram explaining why we multiply each side of this equation by the multiplicative inverse of 5/3, namely 3/5, we would get some really surprising results. It's not an indictment of teacher education. Rather, it's to say some teachers may not have considered the how's and why's of this procedure, for the same reason someone learning to speed solve a Rubik's Cube may miss a key structure. They may not have had the experience of needing to know why it works. Rather, it was more important that they could find the x-intercept of the line; the multiplication by the multiplicative inverse was viewed as "below" the task or level at hand. Perhaps a student never asked "why?" at the critical moment to give the teacher pause.
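Before the diagram, here is the arithmetic of the "multiply by the multiplicative inverse" move as a quick sketch.

```python
from fractions import Fraction

# If (5/3)x = 7/3, multiplying both sides by 3/5 isolates the mystery number.
x = Fraction(7, 3) * Fraction(3, 5)

print(x)         # 7/5
print(float(x))  # 1.4, matching the x-intercept (1.4, 0) found above
assert Fraction(5, 3) * x == Fraction(7, 3)  # five-thirds of 7/5 is seven-thirds
```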

I struggled to produce the corresponding fraction diagram without trying to reverse engineer the solution. When I write this blog, I often worry I might make a mistake that will be indelibly written into the electronic space of the Internet. But this worry violates the spirit of my blog's theme: that we need to make mistakes in a good direction to evolve our mathematical understanding. Here are the images of my failed by-hand attempts to generate the fraction diagram:

Chicken scratches, page 1


Chicken scratches, page 2

I had trouble thinking about making the diagram because 5/3 and 7/3 have the same denominator, so I wrote out some equivalent fractions. Then, I wanted to use "ninths" because that made sense to me in terms of the grid on the graph paper I had cut apart. But, I then realized I would need to cut fifths to find the mystery number, and I was REALLY struggling with trying to free-hand cut fifths with the grid in the background. That led me to coordinatize the points of the polygon. Then I had a problem with relationships between the linear units on the horizontal axis and the area (the fact the polygon is not "one" unit vertically... which is basically the notion of a unit fraction we see emphasized in CCSSM). So I abandoned the paper approach in favor of Geogebra because I could generate better precision and be more efficient with respect to time. <Sorry for the sloppiness of this paragraph, but it does describe my thinking and the mistakes I made.>

Below is an image of the fraction diagram I constructed using Geogebra.

When stating the equation (5/3)x = 7/3, we should think of it as a verbal statement: "Seven-thirds is five-thirds of what mystery number?" Well, if the green polygon represents a whole, then the orange polygon is one-and-two-thirds of that whole. Then the green area of 1.4, which equals 7/5, corresponds to the solution. Mentally, how in the world did I end up with fifths, then? How did I know to cut the horizontal into fifths using vectors and vertical lines in the coordinate plane?

We can think of a fraction in the most basic way. Consider 5/3. If the denominator indicates the number of pieces we partition from a whole, and the numerator indicates how many pieces we possess, then I knew we needed to cut the orange rectangle in a way that would make five pieces. It gets to the root cause of WHY we invert and multiply: the numerator 5 becomes the desired number of pieces, so it plays the role of a denominator.

For the sake of time, I will stop my mental exercise there. Between working the problem, generating the fraction diagram by hand and on Geogebra, and typing this up, I've spent roughly two hours on this article. This professional development is incredibly powerful for me as a teacher, and it's absolutely free (well, not quite free, I do pay for the website hosting, but you get the idea).

Let's end by stirring the discussion pot. Consider our understanding of how to find the x-intercept of the given line. Does our understanding, or lack of understanding, of the fraction diagram and how to construct the fraction diagram (essentially "invert and multiply" in many high school classrooms) inhibit our ability to solve the original problem? Is it still possible to understand the original solution without knowing all the nuts and bolts of the fraction procedure? Stephen Wolfram argues in favor of using computers to automate trivial computation procedures to help us access problems in the world outside school.  How will our teaching of mathematics change as computers continue to become faster and more powerful?


Suppose for a moment a parent shows a child images of 10,000 blue cows. Yep. No typos. Ten. Thousand. Blue. Cows. We are talking blue. Like the cow pictured at the left. 10,000 is a healthy number of cows. This would amount to showing the child one blue cow every second for roughly 2 hours, 45 minutes. The child might conclude, given this overwhelming evidence, every cow is blue. We could really rock the child's world by introducing an image of a white cow or a brown cow or a black cow. This counterexample would stimulate the child to re-evaluate his or her conception about what makes a cow. The hope would be that the parent would help the child understand better the definition of "cow." What makes a cow? Four legs? Not necessarily, the cow could be an amputee. An udder? Not necessarily - couldn't the cow be a bull? Horns? A tail? A particular set of adenine, guanine, thymine, and cytosine? Do we need to be that specific? We collectively have a definition we use for the animal "cow," and it's often based on experience. As you are reading, you may have even conjured up, in your mind's eye, a picture of a cow or two. Specifically, that definition of "cow" may vary according to the context in which we are operating.

Take the above situation and replace every word "cow" with "mathematical example." If developing students with strong mathematical understanding is the goal, we must be wary of how to model for students how to behave when the white cow or brown cow or black cow comes along.

My experience with teachers and students tells me identifying the domain of a function is often tricky business for students. What types of functions does a typical Algebra 2 book examine when looking at the domain of a function? Lines, parabolas, and cubics, for sure. A student sees that the domain of a run-of-the-mill linear function and a run-of-the-mill cubic function are the same, and that the domain of a parabolic function matches as well. The student's mind attempts to search for some sort of pattern. The student may formulate misconceptions that do not generalize. It is a challenging task for the teacher to develop a robust understanding of domain in students. We hope the student understands the notion of a function, input values, binary operations that are not closed in the real number system, etc.

I was helping a former student yesterday with some material for an upcoming College Algebra exam. We came across the following problem.

There are many things to like about this question. The student must have a pretty solid understanding of what linear functions look like. The student must understand A(t) does not mean the product of quantities A and t. The student must recognize the quantity t/3 can be rewritten as (1/3)*t by leveraging the distributive property to combine the like terms 4t and t/3. The student must also recognize 8 can be rewritten as 8*t^0 to explain why 8 is not a like term with the others. Our work for the problem is below.
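The worked solution didn't survive the screenshot, so here is a hedged reconstruction assuming the item was something like A(t) = 4t + t/3 + 8 (my guess from the description above, not the actual exam problem).

```python
from fractions import Fraction

# Combining the like terms 4t and (1/3)t, assuming A(t) = 4t + t/3 + 8:
slope = Fraction(4) + Fraction(1, 3)  # 4 + 1/3 = 13/3

print(f"A(t) = ({slope})t + 8")       # linear: slope 13/3, intercept 8
```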

No trouble, run of the mill example (blue cow).

 

Now for the brown cow.

What a GREAT QUESTION. <Trumpets herald from the heavens>

Here's some initial work on the problem, which looks an awful lot like the work of a student who does not see the problem for what it is: something different.

As a teacher, I was thinking of how to leverage other connected ideas to help the student deepen his understanding of linear functions. Heck, we even graphed it in Geogebra and the silicon genie confirmed the student's suspicions about linearity with a picture.

I put all that other stuff in the sheet (the slider and the point A whose x-coordinate is governed by the value of the slider) after the fact. The student was convinced the function was linear and was ready to move on.

My background knowledge from modern algebra and rings and binary operations and all that jazz let me see the problem for what it was. I was trying to think through how to meet the student at his level to help him develop an understanding that would allow him to identify functions that may appear linear but have potential domain issues. One could argue the function is "linear" everywhere except at x = 0. I asked the student about the operations he saw going on within th