Monthly Archives: January 2014

Our calculus class on Tuesday was investigating derivatives of functions involving logarithms, particularly functions including natural logs. To begin our notes set, I worked a problem from algebra to help students remember how we use inverse operations to solve equations. I took direction from the students and solved the problem below in the following way.

I knew what would happen in advance - that students would simply go through the motions, apply algebraic properties to isolate x, and move on. Every year, this is one of a set of problems I like to use to demonstrate to students the value of multiple representations. I then showed them the graph of the equation and how we could solve for the intersection of the logarithmic function and the constant function y = 12.

We had an interesting discussion about the viewing window. Most students jumped all over "ZOOM-6", or Zoom --> Standard, the viewing window with x-axis and y-axis spanning the values -10 to 10 with units (tick marks) of 1.

Hmmmmm.... where's the 12? It's off the screen, of course!

After challenging their solution with the graphical argument, we decided to revisit our algebraic approach.

This was a powerful way to convince the students that the exponent 2 acting on the quantity x - 2 has the effect of making the result positive, and it drove home the idea that we must consider the definition of absolute value when dealing with squares and square roots.

We also had a good discussion about the original expression: whether the second power was acting on the quantity (x - 2), as in ln[ (x-2)*(x-2) ], or whether it would matter if we instead considered ln[x-2]*ln[x-2].
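The original equation isn't reproduced here, but the domain subtlety behind that discussion is easy to check numerically. Here's a quick sketch, assuming an expression of the form ln((x-2)²):

```python
import math

x = 0  # a test value to the *left* of 2

# ln((x-2)^2) is defined whenever x != 2, because (x-2)^2 > 0
print(math.log((x - 2) ** 2))        # ln(4)

# rewriting it as 2*ln(x-2) silently shrinks the domain to x > 2
try:
    print(2 * math.log(x - 2))
except ValueError:
    print("2*ln(x-2) is undefined at x = 0")

# the version that preserves the domain uses absolute value:
# ln((x-2)^2) = 2*ln|x-2| for all x != 2
print(2 * math.log(abs(x - 2)))      # also ln(4)
```

This is exactly why the definition of absolute value has to come along for the ride when we pull exponents out of logs of squares.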

Students often believe the plastic genie - the graphing calculator - knows all. I really enjoy the days in math class where I can show examples that disconfirm this student perception.

Our superintendent asked me to write a piece on the importance and relevance of mathematics for the math students in our district. The piece appeared in our Annual Report, which can be found here. I thought it would be a good idea to share the article on my blog, given that the Annual Report sometimes does not reach as many patrons as we would hope. Below is the article I wrote.

What will the world look like, 86 years from now, in the year 2100? What did people 86 years ago, in the year 1928, think the world of 2014 would look like? Would they have imagined the Internet? Smart phones? Could they have imagined the dilemmas of today’s world, such as climate change, stem cell research, global economic crises, and biological weaponry? According to the National Council on Teacher Quality, if a teacher’s career is 30 years, and their students live for 70, current teacher education programs are actually preparing teachers for 22nd century learners. What tools will our students need to solve problems in a world which we can’t even imagine?

To prepare our children for the world they will inherit, we must equip them with effective problem solving strategies, robust critical thinking skills, and the ability to use sequential reasoning to make decisions. Mathematics offers the tools our students need to adapt in a changing world. Mathematics allows students to solve interesting problems with high cognitive demand to better understand the world outside school. Robert Moses writes in Radical Equations about the importance of learning math, specifically algebra, in today’s society:

“Before, in the old system… algebra acted as one of the gates through which you entered college… Algebra could not stop you from going to college – not having it could hinder you but it couldn’t stop you. And it was okay to be in college unable to do math. People boasted… ‘Never could do that stuff,’ they said, on the college campus then. But those days are over. It’s not so cool or hip to be completely illiterate in math. The older generation may be able to get away with it, but the younger generation coming up now can’t – not if they’re going to function in the society, have economic viability, be in a position to meaningfully participate, and have some say-so in the decision making that affects their lives. They cannot afford to be completely ignorant of these technological tools and languages. So algebra, once solely in place as the gatekeeper for higher math and the priesthood who gained access to it, now is the gatekeeper for citizenship; and people who don’t have it are like the people who couldn’t read and write in the industrial age.”

45 of 50 states in the U. S. have adopted the Common Core State Standards for Mathematics. Nebraska is not one of these states; rather, Nebraska elects to use its own math standards. Our school district will revamp its math textbooks across all grade levels K-12 in the 2014-2015 school year. Our school district personnel understand the need for a rigorous, flexible, dynamic curriculum that will be cost effective regardless of future decisions made by Nebraska state leadership on mathematics curriculum standards. Additionally, our district leadership understands the single most important factor in our students’ mathematics education: the teachers that stand before our students every single day. Significant funds will go towards providing our teachers professional development. Our students need teachers equipped with sound pedagogy and powerful instructional strategies. Research suggests a highly effective teacher can elicit 1.5 years of academic growth in a single academic year. Imagine the possibilities for students with a highly effective mathematics teacher every year of school.

All students in Scottsbluff Public Schools need the best mathematics education available to be competitive in a global job market. According to Linda Gojak, President of the National Council of Teachers of Mathematics, “the question is no longer if students should take algebra but rather when students should take algebra.” Our teaching staff in SBPS continues to work towards offering rigorous mathematics classes for all students. This means developing an exceptional concept of number and quantity in elementary math students, proportional reasoning in middle school students, and a high school curriculum which provides the opportunity for all students to reach Geometry, Algebra 1, and Algebra 2 at the minimum. Research suggests students successfully completing Algebra 2 are four times more likely to complete a bachelor’s degree program. In a county where only 19% of adults hold a bachelor’s degree, and the median annual household income is $10,000 below the state average, effective mathematics instruction has the potential to break cycles of poverty and to change life trajectories for our students. It’s an exciting time to be a Bearcat. We thank the community for continued support of our teachers as they continue to find new and innovative ways to serve the students of Scottsbluff Public Schools.

This semester I decided to try something different with our approach to the first unit exam in our Trigonometry class. Typically, I give students in all my classes an "exam objectives sheet" prior to the exam. The sheet has a collection of objectives the students should master. Basically, I am telling them about what appears on the test and in what order. That way, there are no surprises or "Gotcha!" moments on test day. Here's a sample of a subset of exam objectives from an exam objectives sheet from AP Stats to give you a sense of how these sheets look in my classes:

In my teaching experience, students have struggled at times with being able to dissect the verbal instructions and figure out what potential exam problems might look like. I decided for the first Trigonometry test to run this process "in reverse."

Here's the exam objectives sheet students received Monday:

Trig_Exam_Objectives

On Monday, I had students work in self-selected small groups of 3 to 4 students for 22 minutes (timer on the board) on writing the objectives from the problem sets. I modeled on the board how to write the first objective for section 1.1 problems 1-8.

Here's a sample problem from section 1.1:

Find the domain and range of each relation.
#4: { (2,5), (3,5), (4,5), (5,5), (6,5) }

I talked students through writing an objective for items of this type: "Given a relation, state the domain and range." Several students questioned why I did not start with "given a set of ordered pairs." I told them this choice was purposeful because in practice exercises, students were also given relations in graphical form. We discussed that starting the objective with the phrase "given a relation" captured both types of problems.
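For the sample item above, the objective is easy to state, and the answer itself is quick to verify. A one-line check of item #4:

```python
# item #4 from section 1.1: a relation given as a set of ordered pairs
relation = {(2, 5), (3, 5), (4, 5), (5, 5), (6, 5)}

domain = {x for x, y in relation}  # all first coordinates
rng = {y for x, y in relation}     # all second coordinates ("range" shadows a builtin)

print(domain)  # {2, 3, 4, 5, 6}
print(rng)     # {5}
```

Since every x pairs with the single value y = 5, this relation happens to be a function as well, which makes it a nice follow-up question.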

Students worked in small groups and wrote objectives for EVERY item. I circulated the room and confirmed EVERY student had written objectives for EVERY item. Then we spent the remaining half of class discussing common misconceptions and errors on the problems, addressing why each error occurred, what the student making the error would have been thinking, and why that thinking was erroneous. Here's the data for one of my classes (n = 16):

Before I start jumping for joy, it's probably a good idea to consider my other classes. Here is a comparison of the three sections I have.

The other two sections did not fare so well. The much larger dispersion among scores in the classes labeled B and C concerns me (s = 13.0229 and s = 11.3002). On our grade scale, the median in all three sections is an "A."

Teachers have to be data detectives to diagnose what students do or do not know. The three outliers above had some misconceptions, which suggests little to no work is taking place outside class. As a practitioner, I also have to think about how effective I was or was not in teaching the material. Obviously something different is going on in the class labeled "Column C."

When I compare the global mean and median across my three sections this year (mean = 92.0351; median = 96) to last year's data (mean = 85.61; median = 87), I am pleased to see a dramatic improvement on the assessment (yes, this is the exact same test I used last year). I will likely take the same approach to preparing students for the second exam and use data analysis to determine if the approach is contributing to the improvement in scores.

A teacher from another school across the state emailed me last week. She described a terrible facilities issue she had been dealing with for days.

I don't have all the details and may not be able to get them... a pipe in my room burst on Monday and caused water damage in the hallway end and a total of 6 classrooms in full or in part... I have been told varying amounts as to how high the water was in my room and mere estimation by our custodial staff on how much water they themselves eliminated... the plumber said that the little copper pipe was spewing 12 gal per min...

We had been working on a different project during MTPS (Math Theory Problem Solving) class that I didn't want students working on without me being present... I had a sub coming in last Friday and decided this would be a great opportunity for students to do some modeling work in the computer lab.

Take a moment to consider how elaborate this situation is. Water accumulates within a room. Many places exist where the water could escape. Some are obvious, like underneath the door. Others are not, such as through electrical outlets or through the cracks in the drywall where the tile meets the wall. There are many potential sources of error. Modeling simple cases is easy: flow in is positive, flow out is negative. But this is definitely a problem from the world outside school (I've never been a fan of the term 'real world'... that would imply high school isn't real... and I remember sitting through some interminable classes with a very real feeling of when will this class ever end...)

I worked to write a worksheet that would help students identify some of the potential complications in modeling how water would accumulate in various rooms. I thought about using theoretical "bathtubs" to simplify the computations and help the kids understand some of the complexities they would encounter in modeling a real room.

Bathtub_Images

The students have spent one 46 minute period and one 90 minute period in the computer lab working on modeling the situation. The source worksheet appears below.

The Broken Pipe Problem-1

Students will have another 90 minute period in the computer lab Wednesday, then 30 minutes on Friday before each student gives a 3 minute presentation about their lab work and findings. I will share some of the students' work on this dilemma later.

P. S. On the worksheet, I used Google Images to retrieve pictures of each of the bathtubs. The copy did not turn out as nicely as I had hoped, so I traced over the images and scanned the resulting worksheet. I told the students the half-cylinder tub should be oriented so the curved side is tangent to the floor. (The 3D image on the worksheet makes it look like the tub is tilted, but I did not intend for the half-cylinder tub to be tilted.) The values for the time in minutes were arbitrary. All of my students have chosen to set up tables of values in Excel so they can address some of the interesting questions (like how many minutes will it take to fill the rectangular prism bathtub?)
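The simplest "bathtub" case reduces to volume divided by net flow rate. Here's a minimal sketch, assuming hypothetical dimensions for the rectangular prism tub (the worksheet's actual values aren't reproduced here) and borrowing the plumber's 12 gallons per minute figure from the email:

```python
GAL_PER_CUBIC_INCH = 1 / 231  # 1 US gallon is defined as 231 cubic inches

# hypothetical rectangular prism tub, dimensions in inches (assumed values)
length, width, depth = 60.0, 30.0, 18.0
volume_gal = length * width * depth * GAL_PER_CUBIC_INCH

inflow = 12.0   # gal/min, the plumber's figure for the burst pipe
outflow = 0.0   # simplest case: nothing escapes the tub

minutes_to_fill = volume_gal / (inflow - outflow)
print(round(volume_gal, 1), round(minutes_to_fill, 1))
```

Making outflow nonzero is where the complications the students identified come in: leaks under the door or through outlets reduce the net rate, which is exactly what their Excel tables of values let them explore minute by minute.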

 

A middle school teacher emailed me a really interesting problem earlier today:

I have a probability question for you that I hope you can answer. What is the probability of rolling 5 dice and getting 66553 in 3 attempts? It's not for any class or anything, a friend of mine wanted to know. (Apparently this is some bar room game of chance).

The situation reminded me of the game "Liar's Dice" featured in the movie Pirates of the Caribbean: Dead Man's Chest.

In the game, each player has five dice in a cup. I don't know whether Liar's Dice is the game the teacher references, but that's the first thing that came to mind. On to the problem...

Each of the five dice can show one of the values from {1, 2, 3, 4, 5, 6}. We can think of the five dice as being different colors. That makes the outcome 66553 different from 65653. Thinking this way will help us count all possible outcomes.
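A brute-force check of the counting (assuming "getting 66553" means rolling those five face values in any order, and that the three attempts are independent):

```python
from itertools import product

target = sorted((6, 6, 5, 5, 3))

# enumerate all 6^5 = 7776 equally likely ordered outcomes
wins = sum(1 for roll in product(range(1, 7), repeat=5)
           if sorted(roll) == target)
print(wins)  # 30, matching the 5!/(2! * 2!) arrangements of {6, 6, 5, 5, 3}

p_single = wins / 6**5              # chance on a single roll of five dice
p_three = 1 - (1 - p_single)**3     # at least one success in 3 attempts
print(p_single, p_three)
```

With p_single = 30/7776, even three attempts leave the chance of winning at a little over 1 percent. Not a game I'd put money on.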

I have a PDF of the solution I wrote up to this problem below. I plan to use this problem the next time I review independent events, dependent events, and mutually exclusive events in stats class.

Ways to Win the Dice Game Solution

Initial_Part

See the PDF for the continuation of the solution I wrote. Please feel free to comment below.

I have a confession to make. When I first started teaching AP Statistics in 2005, I had no idea why a Normal probability plot (an example is shown to the left) was important... or what it told us about data. I was busy trying to stay a day ahead of my students that first year. I never really sat down with several textbooks to compare definitions and examples as I probably should have. Simply put, when students asked, I gave them the canned answer: "The more linear the plot is, the more Normal the data are." We'd use the calculator to make the plot, look at it, and move on.

Let's take a closer look at why we study a Normal probability plot in AP Statistics. I will do some borrowing from various discussion board posts of the past on the AP Stats forum and will add some commentary as we go.

First, consider the method we use to compute a z-score; that is, a positional score for Normally distributed data that indicates the number of standard deviation units above or below the mean a particular data point lives. For example, if z = -1.2, then the data point is 1.2 standard deviations below the mean. It makes sense that a standardized score [ z = (x-μ)/σ] depends on two things: the data value's physical distance from the mean *and* the distance tempered by a measure of spread, specifically the standard deviation. Let's isolate x in this equation to see what happens.

z = (x - μ)/σ

zσ = x - μ

x = μ + zσ

The algebra above is commonly used in problems where we are asked to find a score which corresponds to a particular percentile rank. For example, if the mean score of the ACT is 18, and the standard deviation is 6, then what composite score puts a student in the 70th percentile of all test takers that day? A score slightly north of 21, as shown below.

invNorm(.70) = .5244005101

x = 18 + (.5244005101)(6) ≈ 21.15

The InvNorm command above finds the z-score corresponding to a cumulative area of .70 under the standard Normal curve, which has mean 0 and standard deviation 1. We see a z-score of .5244005101, according to the TI-84, gives the position for a data point in the 70th percentile. We can then reverse engineer the score needed to fall into this percentile.
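The same reverse engineering works off the calculator too. For anyone following along in Python, `statistics.NormalDist` (available in Python 3.8+) plays the role of invNorm:

```python
from statistics import NormalDist

# inv_cdf does the same job as the TI-84's invNorm command
z = NormalDist().inv_cdf(0.70)  # z-score with 70% of the area to its left
x = 18 + z * 6                  # reverse engineer the score: x = mean + z*sd

print(round(z, 4))   # 0.5244
print(round(x, 2))   # 21.15
```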

In the world outside school, it's usually unlikely we know the actual value of σ, the population standard deviation, or μ, the population mean. As unbiased estimators of these unknown values, we use x̄, the sample mean, in place of μ, and we use s, the sample standard deviation, in place of σ. Then the value of x looks like x = x̄ + zs. Technically, once we make the substitutions, we would really be using a t-distribution of some flavor to model the data. On the other hand, in the example below, since we can get data on every qualified point guard in the NBA as of right now, we can directly compute the mean and standard deviation for the entire population, making this substitution unnecessary in this case. However, students need to be aware of the need for t-procedures.

To show an example of a Normal probability plot, I pulled NBA data from ESPN regarding point guard performance thus far in the 2013-14 regular season. Let's take a look at the top 26 (since there's a tie for 25th place) point guards in the NBA with respect to average points scored per game, the gray column labeled "PTS."

 

Let's enter the data from the table above in the TI-84.

 

Next, let's construct the Normal probability plot for the data.

Norm_Prob_Plot

So... what exactly does this plot represent? And what makes this representation so important? The x-values obviously correspond to the average points per game value for each point guard. What about the y-coordinate of each point on the graph? The y-coordinate corresponds to the z-score related to that particular x-value. In the screen shot above, Kemba Walker, the point guard with 18.6 points per game, has a z-score of approximately .7039. If the data followed exactly a Normal curve, then all the points on the above graph would lie exactly on a straight line. By looking at the z-score for each data point using this display, we can get a quick insight into whether the data are Normally distributed. Let's look at a boxplot for the same data:

 

 

We can see in the plot above that the data for these 26 point guards have no outliers, but there appears to be some skewness. Computing (Max - Q3) = 4.4 and (Q1 - Min) = 10.8 - 9.3 = 1.5, and noting 4.4 > 1.5, we can demonstrate this skewness numerically: the upper tail is longer. This numeric argument doesn't take a lot of calculator kung fu, but we do have to perform an extra computation or two. Looking back at the Normal probability plot, we could use the image to notice the skewness immediately. Suppose we graph the original z-score equation [z = (x-μ)/σ] on the same axes as the Normal probability plot. Take a look!

 

We only used 26 data points, so the data is a sample of the population of NBA point guards. Again, if the data were perfectly Normal, all the blue points would be living directly on the red line. We can use our knowledge of linear equations to see clearly what's going on here.

So the red line representing the 'perfectly' Normal data has slope 1/4.271785124, the reciprocal of the standard deviation. Let's find an equivalent value that's slightly more user friendly:

1/4.271785124 ≈ .2340941716

Expressed this way, we can say that for every additional unit increase in x, the average points scored per game, we expect to see a z-score increase of .2340941716. Much like when we consider residuals while doing linear regression, when points deviate noticeably from the expected red line, they are surprising from the "Normal curve's point of view." The curvature at the left end of the Normal probability plot immediately indicates the skewness of the data. You can find more examples of this on your favorite search engine by asking for "Normal probability plot skewness." Once we know how to recognize this pattern visually, we can immediately spot skewness in data using a Normal probability plot.
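Here's a minimal sketch of where the plot's coordinates come from, using a small hypothetical right-skewed sample in place of the actual points-per-game data (again, `statistics.NormalDist` requires Python 3.8+):

```python
from statistics import NormalDist, mean, stdev

# hypothetical right-skewed sample standing in for the NBA data
data = sorted([9.3, 9.8, 10.2, 10.8, 11.5, 12.1, 13.0, 14.4, 16.9, 20.1])
n = len(data)

# y-coordinate for the i-th ordered value: the z-score that value *should*
# have if the data were Normal, based on its percentile rank (i - 0.5)/n
z_expected = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

# the red reference line z = (x - xbar)/s, whose slope is 1/s
xbar, s = mean(data), stdev(data)
z_line = [(x - xbar) / s for x in data]

for x, ze, zl in zip(data, z_expected, z_line):
    print(f"x = {x:5.1f}  expected z = {ze:6.3f}  line z = {zl:6.3f}")
```

Points where z_line differs sharply from z_expected are the "surprising" ones; for right-skewed data like this sample, the mismatch shows up as the curvature described above.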

This connection between the Normal distribution and the linearity of its z-scores has a pretty good explanation on the Wikipedia entry for "Standard score."


Implicit differentiation really helps my students understand why we can't just arbitrarily slap a prime on a function when we differentiate, that the variable we differentiate with respect to matters. Discussing the differences between y' and dy/dx, or P' and dP/dt helps facilitate later topics too, like related rates and optimization.

We had our second calculus class of the semester today. We spent part of the time going over results from our fall semester final exam. We also worked a free response question from an old AP exam, 2000 AB #5, which mirrored another old AP question I have on our fall final. The question we worked today is below.

We had some good conversation about what it means to show the expression given in part a for dy/dx corresponds to the given implicit curve. After doing the implicit differentiation, we ended up with this:
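The slide itself isn't reproduced here, but for reference, the curve in 2000 AB #5 is xy² − x³y = 6, so the implicit differentiation starts from the product rule on each term:

```latex
\begin{aligned}
\frac{d}{dx}\left(xy^{2} - x^{3}y\right) &= \frac{d}{dx}(6) \\
y^{2} + 2xy\,y' - 3x^{2}y - x^{3}\,y' &= 0 \\
y'\left(2xy - x^{3}\right) &= 3x^{2}y - y^{2}
\end{aligned}
```

Solving for y' from here is where the matching-up discussion below begins.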

I argued with the students our job wasn't quite done because the expression to the right of the equals sign does not match exactly with the original expression given in part a. I then wrote

And declared part a "done." A student challenged me, saying the y' did not match the left hand side of the equals sign of the original in part a (dy/dx). Great observation. I told the students if we are being precise, we should also go a step further and demonstrate we know y' and dy/dx are equivalent by writing dy/dx on the left. While this might seem like splitting hairs, I want my students to know attention to detail matters.

Part b came and went without much trouble. We found the two points on the curve whose x-coordinate is 1 and constructed the tangent line equations.

Then came part c. As the students were working part c, I went to Wolfram Alpha and created a graph of the implicit curve.

Wolfram_Implicit_Graph

I wasn't too concerned with the restrictions on the scale or viewing window. As we worked through part c together, the students understood why we needed to set the denominator of dy/dx equal to zero, since a zero denominator (with a nonzero numerator) produces an undefined result for dy/dx.

The algebra we did looked like this:
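The board work isn't pictured here, but reconstructing from the curve and the derivative in part (a), the algebra for the vertical tangents looks like:

```latex
\begin{aligned}
2xy - x^{3} &= 0 \\
x\left(2y - x^{2}\right) &= 0 \\
x = 0 \quad &\text{or} \quad y = \tfrac{x^{2}}{2} \\[4pt]
\text{Substituting } y = \tfrac{x^{2}}{2} \text{ into } xy^{2} - x^{3}y = 6: \qquad
x \cdot \tfrac{x^{4}}{4} - x^{3} \cdot \tfrac{x^{2}}{2} &= 6 \\
-\tfrac{x^{5}}{4} &= 6 \\
-x^{5} &= 24
\end{aligned}
```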

The reason x = 0 is circled is because I forgot to verify at the end of the problem that x = 0 is not a valid answer. If we try to directly substitute x = 0 into xy² – x³y = 6, we get 0 = 6, which is nonsense. The substitution confirms x cannot be zero on the graph despite the fact we must infer the asymptote on the graph.

Here's another screenshot of the last slide I wrote during class:

I wrote -x⁵ = 24 and then asked how to isolate the x. I then wrote the solution (in black) above. The students were having trouble reconciling that the black value is equivalent to the red value. I referenced even and odd roots as the reason why I could pull the negative out of the radical (going from red to black).

HERE'S WHAT I WISH I HAD WRITTEN ON THE BOARD DURING CLASS...
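The original board shot isn't reproduced here, but the missing steps would presumably spell out the odd-root reasoning, something like:

```latex
\begin{aligned}
-x^{5} &= 24 \\
x^{5} &= -24 \\
x &= \sqrt[5]{-24} = \sqrt[5]{(-1)(24)} = \sqrt[5]{-1}\cdot\sqrt[5]{24} = -\sqrt[5]{24} \approx -1.888
\end{aligned}
```

The key step is that odd roots of negative numbers are real and ⁵√(-1) = -1, which is exactly what fails for even roots.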

I could sense the unease in the room around the solution, but I couldn't quite put my finger on it. After reflecting on this, I think I know why the kids seemed to disengage a bit. I have a lot of experience with negatives and roots and so on, so the simplification was not a stretch for me based on my experience. But the kids needed to see the work above, to reference the algebraic rules they know, to understand why it is permissible to move the negative out in front. I struggled to diagnose the students' need for this explanation during class. In hindsight, it explains why the students seemed to disengage and why my spider sense started tingling.

This post is really a reference for myself later, to remind me to think carefully on anticipated errors and what I can do to help students reconcile quantities that are numerically equivalent but not obviously numerically equivalent.

The break gave me a fair amount of time to reflect on teaching and my philosophy of education. I was scrolling through my Twitter feed and came across this tweet from the National Council on Teacher Quality (NCTQ):

While reading this tweet, I was struck by an interesting memory. In my first year of teaching in Omaha, back in 2004, the school I taught at had just implemented 1-to-1 computing with Macbooks. I had a student that year from China. After class ended one day, she and I talked about how interesting it was to watch how students were using the Macbooks in positive ways but also in negative ways. I asked the student what she thought about the computers. She offered a brilliant insight:

I think the computers are making the students impatient. It used to take a lot of time to look something up. All the students want the answer right now.

That conversation has stuck with me. I see this impatience in my classroom and other classrooms on a regular basis, whether it's students moaning and groaning when a problem takes more than thirty seconds or the students' body language revealing frustration or boredom.

Two years later, I was teaching at a different school, a school that has not yet gone to 1-to-1 computing, even in 2014. Even without 1-to-1 computing, the ubiquity of smartphones and tablets has had a profound impact on our young people. Personally, I wonder how different my school would have been with the temptations of Twitter, Facebook, and SnapChat lurking in the background. I am not saying technology is evil; I am saying educators need to be mindful of the impact technology has on students' physiology and psychology.

Our culture in the United States is the embodiment of instant gratification. This has dire consequences for math teachers attempting to help students learn patient problem solving. Joachim de Posada discusses the predictive impact of studies on delayed gratification. Intuitively, it makes sense the students willing to delay gratification - those that push through and past the point of frustration - tend to be the successful students in school.

 

Back to the earlier question: are students evolving or changing? Technology provides the context for our students to literally stand on the shoulders of giants and answer more interesting questions that can profoundly impact the modern world. But with great power comes great responsibility. In nature, we sometimes see evolutionary dead ends. Think duck-billed platypus. We want our students to be critical thinkers. To be problem solvers. And while it is convenient to rely on Google's search algorithms to find an answer quickly, we need our students to be able to analyze, synthesize, and evaluate the information they encounter in the world. It's not a stretch of the imagination to think certain technological behaviors we allow, like clicking on the first link on a Google search or citing only Wikipedia sources, are the evolutionary equivalent of a duck-billed platypus. I wonder what my students and their children will be doing in the 22nd century and how different the world will be. I want to empower my students with robust strategies to prepare them for this world that does not yet exist. What can I do as a math teacher to maximize these students' potential? Because barring medical miracles, I am undoubtedly preparing my students for a world in a new century I will not likely see. So what should be my function, my purpose for teaching students mathematics?

Awaken raw curiosity. Provide the context and vocabulary to describe the universe and everything in it. It starts with asking interesting questions and finding technological resources to address these questions. Michael Stevens is doing a phenomenal job leveraging curiosity to create teachable moments at Vsauce.

The spirit of the STEM movement is the interconnectedness between disciplines. Why we limit this 'interconnectedness' to only four disciplines causes me to scratch my head a bit. Mr. Stevens' video above references figures that cannot be measured directly with current measurement tools... but they can certainly be calculated. We can even compute how aesthetically pleasing an object is or isn't. What should the role of the math teacher be, then? What does this have to do with our students, and whether they are evolving or changing?

I think my best answer right now is to provide a balanced approach between traditional mathematics and problems from the world outside school. And for those items from the world outside school students may not have the mathematical horsepower to address, there is now a phenomenal resource to address both traditional topics and modeling challenges. Math teachers may experience some nervous excitement on the Wolfram Demonstrations website. I encourage you to check out the resources available on Wolfram Demonstrations.

Regardless what educational approaches we utilize, the world will continue to move forward. Students will continue to move on through the sorting algorithm that is school and the world will continue to realize the potential of students from many different school systems. What do you think? Are students evolving or changing? What can we do differently as teachers to prepare students for the future?