Measures, Targets, and Goodhart’s Law

’Tis the season to be tardy, fa-la-la-la-la…la-la-la-la!

I’m reasonably jolly, too, but this week seems just a little bit rushed. Nonetheless, y’all deserve something decent from Delicious Ambiguity this week, so I’m going to put forth my best effort.

I stumbled across an old adage last weekend that seems remarkably apropos given my recent posts about retention rates at Augustana. This phrase is most often called “Goodhart’s Law,” although the concept has popped up in a number of different disciplines over the last century or so.

“When a measure becomes a target, it ceases to be a good measure.”

You can brush up on a quick summary of this little nugget on Wikipedia here, but if you want to have more fun I suggest that you take the time to plunge yourself into this academic paper on the origin of the idea and its subsequent applications here.

Although Goodhart’s Law emerged in the context of auditing monetary policy, there are more than a few well-written examples of its application to higher ed. Jon Boekenstedt at DePaul University lays out a couple of great examples here that we still see in the world of college admissions. In all of the instances where Goodhart’s Law has produced almost absurd results (results that would be hilarious if they weren’t so often true), the takeaway is the same. Choosing a metric (a simple outcome) to judge the performance (a complex process) of an organization sets in motion behaviors by individuals within that organization that inevitably play to the metric rather than the process and, as a result, corrupt the very process that was supposed to produce that outcome.

So when we talk about retention rates, let’s remember that retention rates are a proxy for the thing we are actually trying to achieve.  We are trying to achieve student success for all students who enroll at Augustana College, and we’ve chosen to believe that if students return for their second year, then they are succeeding.

But we know that life is a lot more complicated than that. And scholars of organizational effectiveness note that organizations are less likely to fall into the Goodhart’s Law trap if they identify measures that focus on the underlying processes that lead to an outcome (one good paper on this idea is here). So, even though we shouldn’t toss retention rates onto the trash heap, we are much more likely to truly accomplish our institutional mission if we focus on tracking the processes that lead to student success: processes that are also, more often than not, likely to lead to student retention.

Make it a good holiday break,

Mark

Two numbers going in the right direction. Are they related?

It always seems like it takes way too long to get the 10th-day enrollment and retention numbers for the winter term. Of course, that is because the Thanksgiving holiday pushes the whole counting of days into the third week of the term and . . . you get the picture.  But now that we’ve got those numbers processed and verified, we’ve got some good news to share.

Have a look at the last four years of fall-to-winter term retention rates for students in the first-year cohort –

  • 14/15 – 95.9%
  • 15/16 – 96.8%
  • 16/17 – 96.7%
  • 17/18 – 97.4%

What do those numbers look like to you? Whatever you want to call it, it looks to me like something good. Right away, this improvement in the proportion of first-year students returning for the winter term equates to about $70,000 in net tuition revenue that we wouldn’t have seen had this retention rate stayed at its 14/15 level.

Although stumbling onto a positive outcome (albeit an intermediate one) in the midst of producing a regular campus report makes for a good day in the IR office, it gets a lot better when we can find a similar sequence of results in our student survey data. Because that is how we start to figure out which things that we are doing to help our students correlate with evidence of increased student success.

About six weeks into the fall term, first-year students are asked to complete a relatively short survey about their experiences so far. Since this survey is embedded into the training session that prepares these students to register for winter classes, the response rate is pretty high. The questions in the survey focus on the academic and social experiences that would help a student acclimate successfully. One of those items, added in 2013, asks about the degree to which students had access to grades or other feedback that allowed them to adjust their study habits or seek help as necessary. In previous years, we’ve found this item to correlate with students’ sense of how hard they work to meet academic expectations.

Below I’ve listed the proportion of first-year students who agree or strongly agree that they had access to sufficient grades or feedback during their first term. Compare the way this data point changes over the last four years to the fall-to-winter retention rates I listed earlier.

  • 14/15 – 39.6%
  • 15/16 – 53.3%
  • 16/17 – 56.4%
  • 17/18 – 75.0%

Obviously, both of these data points trend in the same direction over the past four years. Moreover, both of these trends look similar in that they jump a lot between the 1st and 2nd year, remain relatively flat between the 2nd and 3rd year, and jump again between the 3rd and 4th year.

I can’t prove that improved early academic feedback is producing improved fall-to-winter term retention. The evidence that we have is correlational, not causal. But we know enough to know that an absence of feedback early in the term hurts those students who either need to be referred for additional academic work or need to be shocked into more accurately aligning their perceived academic ability with their actual academic ability. We began to emphasize this element of course design (i.e., creating mechanisms for providing early term feedback about academic performance) because other research on student success (as well as our own data) suggested that this might be a way to improve student persistence.

Ultimately, I think it’s fair to suggest that something we are doing more often may well be influencing our students’ experience. At the very least, it’s worth taking a moment to feel good about both of these trends. Both data points suggest that we are getting better at what we do.

Make it a good day,

Mark