Anticipating what our students need to know is SO complicated!

Over the last few weeks, I’ve been wrestling with a couple of data trends and their accompanying narratives that seem pretty important for colleges like ours. However, unlike most posts in which I pretend to have some answers, this time I’m just struggling to figure out what it all means. So this week, I’m going to toss this discombobulated stew in your lap and hope you can help me sort it all out (or at least clean up some of the mess!).

First, the pressure on colleges to prepare their students to graduate with substantial “work readiness” appears to be at an all-time high. The Gallup Organization continues to argue that employers don’t think college graduates are well-prepared for success in the workplace. Even though there is something about the phrase “work readiness” that makes me feel like I just drank sour milk, we have to admit that preparing students to succeed in a job matters, especially when student loan debt is now such a large, and often frightening, part of the calculus that determines if, and where, a family can send their kids to college. Put all this together and it’s no wonder that students overwhelmingly say that the reason they want to go to college is to get a good-paying job.

Underneath all of this lies a pretty important assumption about what the world of work will be like when these students graduate. Student loans take, on average, 21 years to pay off, and the standard repayment agreement for a federal student loan is a 10-year plan. So it would seem reasonable that students, especially those who take out loans to pay for college, would anticipate that the job for which college prepares them should in most cases outlast the time it takes for them to pay off their loans. I’m not saying that everyone thinks this through completely, but I think most folks are assuming a degree of stability and income in the job they hope to obtain after earning a college degree, making the loans that they take out to pay for college a pretty safe bet.

But this is where it gets dicey. The world of work has been undergoing a seismic shift over the past several decades. The most recent report from the Bureau of Labor Statistics suggests that, on average, a person can expect to have 12 jobs between the ages of 18 and 50. What’s more, the majority of those job changes occur between the ages of 18 and 34 – the same period of time during which one would be expected to pay off a student loan. Moreover, between 2005 and 2015, almost all of the jobs added to the economy fit into a category called “alternative work.” This category of work includes contract labor, independent work, and any sort of temporary job (in addition to the usual suspects, think Turo, Lyft, or TaskRabbit). Essentially, these are jobs that are either spun as “providing wonderful flexibility” or depressingly described as depending on “the whim of the people.” As with so many other less-than-attractive realities, someone put a bow on it and labeled this whole movement “the gig economy” (sounds really cool except there’s no stage lighting or rock and roll glamor). It’s no surprise that the gig economy presents a rather stark set of downsides for individuals who choose it (or get sucked into it by circumstances beyond their control).

So what does all of this mean for colleges like ours that are (whether we like it or not) obligated to focus a lot of our attention on preparing students for a successful professional life?  I don’t have many great answers to this one. But a couple of questions seem pretty important:

  • To what degree are we responsible for ensuring that our students are financially literate and can manage through the unpredictability that seems likely for many early in their career?
  • What knowledge, skills, or dispositions should we prioritize to help our students thrive in a professional life that is almost certain to include instability, opportunity, and unexpected change?

Of all the possible options that an 18-year-old could sign up for, a small liberal arts college seems like it ought to be the ideal place for learning how to navigate, even transcend, the turbulent realities that seem more and more an unavoidable part of the world of work. But without designing what we do so that every student has to encounter this stuff, we leave that learning up to chance. And as usual, the students who most need to learn this stuff are the ones who are least likely to find it on their own. Looks like we’d better roll up our sleeves and get to work!

Make it a good day,

Mark

Sometimes you find a nugget where you least expect it

As many of you already know, data from the vast majority of the college ranking services is not particularly applicable to improving the day-to-day student experience. In many cases, this is because those who construct these rankings rely on “inputs” (i.e., information about the resources and students that come to the institution) and “outputs” (i.e., graduation rates and post-graduate salaries) rather than any data that captures what happens while students are actually enrolled in college.

But just recently I came across some of the data from the Wall Street Journal/Times Higher Education College Rankings that surprised me. Although this ranking is still (in my opinion) far too dependent on inputs and outputs, 20% of their underlying formula comes from a survey of current students. In this survey, they ask some surprisingly reasonable questions about the college experience, the responses to which might provide some useful information for us.

Here is a list of those questions; the shortened label that I’ll use in the table below appears within each question.

  • To what extent does your college or university provide opportunities for collaborative learning?
  • To what extent does the teaching at your university or college support critical thinking?
  • To what extent does the teaching at your university or college support reflection upon, and making connections among, things you have learned?
  • To what extent does the teaching at your university or college support applying your learning to the real world?
  • To what extent did the classes you took in your college or university so far challenge you?
  • If a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to recommend your college or university to them?
  • Do you think your college is effective in helping you to secure valuable internships that prepare you for your chosen career?
  • To what extent does your college or university provide opportunities for social engagement?
  • Do you think your college provides an environment where you feel you are surrounded by exceptional students who inspire and motivate you?
  • To what extent do you have the opportunity to interact with the faculty and teachers at your college or university as part of your learning experience?

Below is a table comparing the average responses of Augustana students with the average responses of students at other US institutions. Although I haven’t been able to confirm it by checking the actual survey, it appears that the response options for each item consist of a 1-10 scale on which participants plot their answer to each question.

Question | Augustana Average Response | Top US Institution | 75th Percentile | Median | 25th Percentile | Bottom US Institution
Collaborative Learning | 8.5 | 9.5 | 8.4 | 8.1 | 7.7 | 6.7
Critical Thinking | 8.8 | 9.6 | 8.7 | 8.3 | 8.0 | 7.1
Connections | 8.5 | 9.4 | 8.5 | 8.2 | 7.9 | 7.0
Applying Learning | 8.4 | 9.4 | 8.5 | 8.1 | 7.8 | 6.8
Challenge | 8.2 | 9.4 | 8.6 | 8.3 | 8.0 | 7.2
Recommend | 8.6 | 9.8 | 8.7 | 8.3 | 7.8 | 6.7
Prepare | 8.3 | 9.4 | 8.3 | 7.8 | 7.4 | 6.2
Social | 8.9 | 9.7 | 8.7 | 8.5 | 8.1 | 7.2
Inspire | 8.0 | 9.3 | 8.1 | 7.7 | 7.2 | 6.0
Interact | 9.3 | 10.0 | 9.2 | 8.9 | 8.4 | 7.3

Two things stand out to me in the table above. First, our students’ average responses compare quite favorably to the average responses from students at other institutions.  On six of the ten items, Augustana’s average student response equaled or exceeded the 75th percentile of all US institutions. On three of the remaining four items, Augustana students’ average response fell just short of the 75th percentile by a tenth of a point.

Second, our students’ response to one question – the degree to which they felt challenged by the classes they have taken so far – stands out like a sore thumb. Unlike the rest of the data points, Augustana’s average student response falls a tenth of a point below the median of all US institutions. Compared to the relative strength of all our other average response scores, the “challenge” score seems . . . curious.

Before going any further, it’s important to take into account the quality of the data used to generate these averages. The Wall Street Journal/Times Higher Education says that they received responses from over 200,000 students, so if they only wanted to make claims about overall average responses they’d be standing on pretty solid ground. However, they are trying to compare individual institutions against one another, so what matters is how many responses they received from students at each institution and to what degree those responses represent all students at that institution. Somewhere in the smaller print farther down the page that explains their methodology, they state that in most cases they received between 50 and 100 responses from students at each institution (institutions with fewer than 50 responses were not included in the rankings). Wait, what? Given the total enrollments at most of the colleges and universities included in these rankings, 100 responses would represent less than 10% of all students at most of these institutions, and in many cases far less. So we ought to approach the comparative part of these results with a generous dose of skepticism.

However, that doesn’t mean we should dismiss this data outright. In my mind, the findings from our own students ought to make us very curious. Why would data from a set of about 100 Augustana students (we received responses from 87 students who, upon further examination, turn out to be mostly first-year, mostly female, pretty evenly scattered across different intended majors, and almost all from the state of Illinois) produce such a noticeable gap between all of the other items on this survey and the degree to which our students feel challenged by their courses?
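To put the sample-size concern in slightly more concrete terms, here is a minimal sketch of the uncertainty around an average built from 87 responses on a 1-10 scale. The standard deviation is an assumption on my part (the rankings don’t publish one), so treat the result as an illustration rather than a calculation of the actual margin of error.

```python
import math

# Rough 95% margin of error for a mean built from a small sample.
n = 87            # Augustana responses in the WSJ/THE survey
assumed_sd = 1.5  # hypothetical spread of individual responses on a 1-10 scale
z = 1.96          # multiplier for an approximate 95% confidence interval

margin = z * assumed_sd / math.sqrt(n)
print(f"Approximate 95% margin of error: +/- {margin:.2f} points")
# With these assumptions the margin is roughly +/- 0.3 points, which is
# about the size of many of the gaps in the table above. One more reason
# to hold the institution-to-institution comparisons loosely.
```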

This is exactly why I named this blog “Delicious Ambiguity.” This is messy data. It definitely doesn’t come with a pre-packaged answer. One could point out several flaws in the Augustana data set (not to mention the entirety of this ranking system) and make a reasonable case to dismiss the whole thing. Yet it seems like there is something here that isn’t nothing. So the question I’d ask you is this: are there other things going on at Augustana that might increase the possibility that some first-year students would not feel as challenged as they should? Remember, we aren’t talking about a dichotomy of challenged or not challenged. We are talking about degrees of quality and nuance, the kind of distinction that is the lifeblood of improving an already solid institution.

Make it a good day,

Mark

Measures, Targets, and Goodhart’s Law

Tis the season to be tardy, fa-la-la-la-la…la-la-la-la!

I’m reasonably jolly, too, but this week seems just a little bit rushed. Nonetheless, y’all deserve something decent from Delicious Ambiguity this week, so I’m going to put forth my best effort.

I stumbled across an old adage last weekend that seems remarkably apropos given my recent posts about retention rates at Augustana. The phrase is most often called “Goodhart’s Law,” although the concept has popped up in a number of different disciplines over the last century or so.

“When a measure becomes a target, it ceases to be a good measure.”

You can brush up on a quick summary of this little nugget on Wikipedia here, but if you want to have more fun I suggest that you take the time to plunge yourself into this academic paper on the origin of the idea and its subsequent applications here.

Although Goodhart’s Law emerged in the context of monetary policy, there are more than a few well-written examples of its application to higher ed. Jon Boekenstedt at DePaul University lays out a couple of great examples here that we still see in the world of college admissions. In all of the instances where Goodhart’s Law has produced almost absurd results (they’d be hilarious if they weren’t so often true), the takeaway is the same. Choosing a metric (a simple outcome) to judge the performance (a complex process) of an organization sets in motion behaviors by individuals within that organization that will inevitably play to the outcome (the metric) rather than the performance (the process) and, as a result, corrupt the process that was supposed to lead to that outcome.

So when we talk about retention rates, let’s remember that retention rates are a proxy for the thing we are actually trying to achieve.  We are trying to achieve student success for all students who enroll at Augustana College, and we’ve chosen to believe that if students return for their second year, then they are succeeding.

But we know that life is a lot more complicated than that. And scholars of organizational effectiveness note that organizations are less likely to fall into the Goodhart’s Law trap if they identify measures that focus on underlying processes that lead to an outcome (one good paper on this idea is here). So, even though we shouldn’t toss retention rates onto the trash heap, we are much more likely to truly accomplish our institutional mission if we focus on tracking the processes that lead to student success; processes that are also, more often than not, likely to lead to student retention.

Make it a good holiday break,

Mark

Two numbers going in the right direction. Are they related?

It always seems like it takes way too long to get the 10th-day enrollment and retention numbers for the winter term. Of course, that is because the Thanksgiving holiday pushes the whole counting of days into the third week of the term and . . . you get the picture.  But now that we’ve got those numbers processed and verified, we’ve got some good news to share.

Have a look at the last four years of fall-to-winter term retention rates for students in the first-year cohort –

  • 14/15 – 95.9%
  • 15/16 – 96.8%
  • 16/17 – 96.7%
  • 17/18 – 97.4%

What do those numbers look like to you? Whatever you want to call it, it looks to me like something good. Right away, this improvement in the proportion of first-year students returning for the winter term equates to about $70,000 in net tuition revenue that we wouldn’t have seen had the retention rate stayed where it was four years ago.
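In case you’re curious about how a number like that comes together, here’s a hedged back-of-the-envelope sketch. The cohort size and per-student net tuition figures below are illustrative assumptions on my part, not the actual inputs behind the $70,000 estimate.

```python
# Back-of-the-envelope sketch of the revenue effect of improved
# fall-to-winter retention. The inputs are illustrative assumptions,
# not the actual figures behind the estimate in the post.
cohort_size = 700                    # assumed size of a first-year class
old_rate, new_rate = 0.959, 0.974    # fall-to-winter retention, 14/15 vs. 17/18
assumed_net_tuition = 6_500          # hypothetical net tuition for the remaining terms

extra_students = cohort_size * (new_rate - old_rate)
extra_revenue = extra_students * assumed_net_tuition

print(f"Additional students retained: {extra_students:.1f}")
print(f"Rough net tuition effect:     ${extra_revenue:,.0f}")
# With these assumptions, roughly 10 additional students and something
# in the neighborhood of $70,000, which is the right order of magnitude.
```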

Although stumbling onto a positive outcome (albeit an intermediate one) in the midst of producing a regular campus report makes for a good day in the IR office, it gets a lot better when we can find a similar sequence of results in our student survey data. Because that is how we start to figure out which things that we are doing to help our students correlate with evidence of increased student success.

About six weeks into the fall term, first-year students are asked to complete a relatively short survey about their experiences so far. Since this survey is embedded into the training session that prepares these students to register for winter classes, the response rate is pretty high. The questions in the survey focus on the academic and social experiences that would help a student acclimate successfully. One of those items, added in 2013, asks about the degree to which students had access to grades or other feedback that allowed them to adjust their study habits or seek help as necessary. In previous years, we’ve found this item to correlate with students’ sense of how hard they work to meet academic expectations.

Below I’ve listed the proportion of first-year students who agreed or strongly agreed that they had access to sufficient grades or feedback during their first term. Compare the way this data point changes over the last four years to the fall-to-winter retention rates I listed earlier.

  • 14/15 – 39.6%
  • 15/16 – 53.3%
  • 16/17 – 56.4%
  • 17/18 – 75.0%

Obviously, both of these data points trend in the same direction over the past four years. Moreover, both of these trends look similar in that they jump a lot between the 1st and 2nd year, remain relatively flat between the 2nd and 3rd year, and jump again between the 3rd and 4th year.
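Just to illustrate how closely the two series track each other, here is a minimal sketch that computes the correlation between the four pairs of yearly values. With only four data points this is purely descriptive, so don’t read anything more into it than that.

```python
import numpy as np

# Four years of fall-to-winter retention (%) and the share of first-year
# students reporting sufficient early feedback (%), 14/15 through 17/18.
retention = np.array([95.9, 96.8, 96.7, 97.4])
feedback = np.array([39.6, 53.3, 56.4, 75.0])

r = np.corrcoef(retention, feedback)[0, 1]
print(f"Pearson r across the four years: {r:.2f}")
# The correlation comes out very high (close to 1), but with only four
# yearly observations it describes the pattern rather than proving a link.
```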

I can’t prove that improved early academic feedback is producing improved fall-to-winter term retention. The evidence that we have is correlational, not causal. But we know enough to know that an absence of feedback early in the term hurts those students who either need to be referred for additional academic work or need to be shocked into more accurately aligning their perceived academic ability with their actual academic ability. We began to emphasize this element of course design (i.e., creating mechanisms for providing early term feedback about academic performance) because other research on student success (as well as our own data) suggested that this might be a way to improve student persistence.

Ultimately, I think it’s fair to suggest that something we are doing more often may well be influencing our students’ experience. At the very least, it’s worth taking a moment to feel good about both of these trends. Both data points suggest that we are getting better at what we do.

Make it a good day,

Mark

Ideals, Metrics, and Myths (oh no!)

Educators have always been idealists. We choose to believe what we hope is possible, and that belief often keeps us going when things aren’t going our way. It’s probably what drove many of us to finish a graduate degree and what drives us to put our hearts into our work despite all the discouraging news about higher ed these days.

But an abundance of unchecked idealism can also be a dangerous thing. Because the very same passion that can drive one to achieve can also blind one to believe in something just because it seems like it ought to be so. Caught up in a belief that feels so right, we are often less likely to scrutinize the metrics that we choose to measure ourselves or compare ourselves to others. Worse still, our repeated use of these unexamined metrics can become etched into institutional decision-making. Ultimately, the power of belief that once drove us to overcome imposing challenges can become our Achilles heel because we are absolutely certain of things that may, in fact, not be so.

For decades, colleges have tracked the distribution of their class sizes (i.e., the number of classes enrolling 2-9, 10-19, 20-29, 30-39, 40-49, 50-99, and more than 100 students, respectively) as a part of something called the Common Data Set. The implication behind tracking this data point is that a higher proportion of smaller classes ought to correlate with a better learning environment. Since the mid-1980s, the U.S. News and World Report rankings of colleges and universities have included this metric in their formula, distilling it down to two numbers: the proportion of classes at an institution with 19 or fewer students (more is better) and the proportion of classes at an institution with 50 or more students (less is better). Two years ago, U.S. News added a twist by creating a sliding scale so that classes of 19 or fewer students received the most credit, classes of 20-29, 30-39, and 40-49 received proportionally less credit, and classes of 50 or more received no credit. Over time these formulations have produced a powerful mythology across many postsecondary institutions: classes with 19 or fewer students are better than classes with 20 or more.

This raises a pretty important question: are those cut points (19/20, 29/30, etc.) grounded in anything other than an arbitrary fondness for round numbers?

Our own fall term IDEA course feedback data provides an opportunity to test the validity of this metric. The overall distribution of class sizes is almost perfect (a nicely shaped bell curve), with almost 80% of courses receiving a robust response rate. Moreover, IDEA’s aggregate dataset allows us to compare three useful measures of the student learning experience across all courses: a student-reported proxy of learning gains called the “progress on relevant objectives” (PRO) score (for a short explanation of the PRO score with additional links for further information, click here), the student perception of the instructor, and the student perception of the course. The table below spells out the average response scores for each measure across eight different categories of class size. Each average score comes from a five-option response scale (converted to a range of 1-5). The PRO score response options range from “no progress” to “exceptional progress,” and the perception of instructor and course excellence response options range from “definitely false” to “definitely true” (to see the actual items on the survey, click here). For this analysis, I’ve only included courses that exceed a two-thirds (66.67%) response rate.

Class Size | PRO Score | Excellent Teacher | Excellent Course
6-10 students (35 classes) | 4.24 | 4.56 | 4.38
11-15 students (85 classes) | 4.12 | 4.38 | 4.13
16-20 students (125 classes) | 4.08 | 4.29 | 4.01
21-25 students (71 classes) | 4.18 | 4.40 | 4.27
26-30 students (37 classes) | 4.09 | 4.31 | 4.18
31-35 students (9 classes) | 3.90 | 4.13 | 3.81
36-40 students (11 classes) | 3.64 | 3.84 | 3.77
41 or more students (8 classes) | 3.90 | 4.04 | 3.89
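For what it’s worth, a summary table like this is straightforward to reproduce from course-level data. Here’s a minimal sketch under the assumption of a hypothetical export file with one row per course section; the file name and column names are placeholders of mine, not IDEA’s actual field names.

```python
import pandas as pd

# Hypothetical course-level export: one row per section with its enrollment,
# response rate, and average IDEA scores. Names below are placeholders.
courses = pd.read_csv("idea_fall_course_scores.csv")

# Keep only sections that exceed a two-thirds response rate.
courses = courses[courses["response_rate"] > 2 / 3]

# Bin sections into the same class-size categories used in the table above.
bins = [5, 10, 15, 20, 25, 30, 35, 40, float("inf")]
labels = ["6-10", "11-15", "16-20", "21-25", "26-30", "31-35", "36-40", "41+"]
courses["size_category"] = pd.cut(courses["enrollment"], bins=bins, labels=labels)

# Average the three measures (and count sections) within each category.
summary = courses.groupby("size_category", observed=True).agg(
    classes=("enrollment", "size"),
    pro_score=("pro_score", "mean"),
    excellent_teacher=("excellent_teacher", "mean"),
    excellent_course=("excellent_course", "mean"),
).round(2)
print(summary)
```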

First, classes enrolling 6-10 students appear to produce notably higher scores on all three measures than any other category. Second, it doesn’t look like there is much difference between subsequent categories until we get to classes enrolling 31 or more students (further statistical testing supports this observation). Based on our own data, and assuming that the fall 2017 data does not differ significantly from other academic terms, if we were going to replicate the notion that class size distribution correlates with the quality of the overall learning environment, we might be inclined to choose only two cut points to create three categories of class size: classes with 10 or fewer students, classes with between 11 and 30 students, and classes with more than 30 students.

However, further examination of the smallest category of classes indicates that these courses are almost entirely upper-level major courses. Since we know that all three metrics tend to score higher for upper-level major courses because the students in them are more intrinsically interested in the subject matter than students in lower-level courses (classes that often also meet general education requirements), we can’t attribute the higher scores for this group to class size per se. This leaves us with two general categories: classes with 30 or fewer students, and classes with more than 30 students.

How does this comport with the existing research on class size? Although there isn’t much out there, two brief overviews (here and here) don’t find much of a consensus. Some studies suggest that class size is not relevant, others find a positive effect on the learning experience as classes get smaller, and a few others indicate a slight positive effect as classes get larger(!). And a 2013 essay that spells out some findings from IDEA’s extensive dataset suggests that, especially in light of developments in pedagogy and technology over the past two decades, other factors almost certainly complicate the relationship between class size and student learning.

So what do we do with all this? Certainly, mandating that all class enrollments sit just below 30 would be, um, stupid. There is a lot more to examine before anyone should march out onto the quad and declare a “class size” policy. One finding from researchers at IDEA that might be worth exploring on our own campus is the variation of learning objectives selected and achieved by class size. IDEA found that smaller classes might be more conducive to more complex (sometimes called “deeper”) learning objectives, while larger classes might be better suited for learning factual knowledge, general principles, or theories. If class size does, in fact, set the stage for different learning objectives, it might be worth assessing the relationship between learning objectives and class size at Augustana to see if we are taking full advantage of the learning environment that a smaller class size provides.

And what should we do about the categories of class sizes that U.S. News uses in its college rankings formula? As family incomes remain stagnant, tuition revenue continues to lag behind institutional budget projections, and additional resources seem harder to come by, that becomes an increasingly valid question. Indeed, there might be a circumstance in which an institution ought to continue using the Common Data Set class size index to guide the way that it fosters an ideal classroom learning environment. And it is certainly reasonable to take other considerations (e.g., faculty workload, available classroom space, intended learning outcomes of a course, etc.) into account when determining an institution’s ideal distribution of class enrollments. But if institutional data suggests that there is little difference in the student learning experience between classes with 16-20 students and classes with 21-25 students, it might be worth revisiting the rationale that an institution uses to determine its class size distribution. No matter what an institution chooses to do, it seems like we ought to be able to justify our choices based on the most effective learning environment that we can construct rather than an arbitrarily defined and externally imposed metric.

Make it a good day,

Mark

Have a wonderful Thanksgiving!

A short post for a short week . . .

We talk a lot about the number of students at Augustana who have multiple talents and seem like they will succeed in life no matter what they choose to do.  So many of them seem to qualify as “MultiPotentialites”.

Although it makes sense that we would first see this phenomenon among our students, I think we might be missing another group of particularly gifted folks all around us. So many of you, the Augustana faculty and staff, have unique talents, insightful perspectives, and unparalleled interpersonal skills that make us good at what we do. Almost every day I see someone step into a gap and take care of something that just needs to get done. Maybe we are just Midwestern humble, or maybe we are so busy scrambling to put out one fire after another that we never really get the chance to pause and see the talent we all bring to this community.

So I want to make sure that I thank all of you.  I know this might sound hokey.  Maybe it is.

So what.

Make it a good Thanksgiving weekend.

Mark

Some anecdotes and data snippets from our first experience with the IDEA online course feedback system

Welcome to Winter Term! Maybe some of you saw the big snowflakes that fell on Sunday morning. Even though I know I am in denial, it is starting to feel like fall might have slipped from our collective grasp over the past weekend.

But on the bright side (can we get some warmth with that light?), during the week-long break between fall and winter term, something happened that had not happened since we switched to the IDEA course feedback system. Last Wednesday morning, only 48 hours after you had entered your final grades, your IDEA course feedback was already processed and ready to view. All you had to do was log in to your faculty portal and check it out! (You can find the link to the IDEA Online Course Feedback Portal on your Arches faculty page.)

I’m sure I will share additional observations and data points from our first experience with the online system during one of the three “Navigating your Online IDEA Feedback Report” sessions this week. A not-so-subtle hint: come to Olin 109 on Monday, Tuesday, or Thursday (Nov. 13, 14, or 16) at or just after 4 PM to walk through the online feedback reports and maybe learn one or two cool tricks with the data. Bring a laptop if you’ve got one, just in case we run out of computer terminals.

But in the meantime, I thought I’d share a couple of snippets that I found particularly interesting from our first online administration.

First, it seems that no news about problems logging in to the system turned out to be extremely good news. I was fully prepped to solve all kinds of connectivity issues and brainstorm all sorts of last-minute solutions. But I only heard from one person about one class having trouble getting on to the system . . . and that was when the internet was down all over campus for about 45 minutes. Otherwise, it appears that folks were able to administer the online course feedback forms in class or get their students to complete them outside of class with very little trouble. Even in the basement of Denkmann! This doesn’t mean that we won’t have some problems in the future, but at least with one term under our collective belt . . . maybe the connectivity issue isn’t nearly as big as we worried it might be.

Second, our overall student response rates were quite strong. Of the 467 course sections that could have administered IDEA online, about 74% achieved a response rate of 75% or higher. Furthermore, several instructors tested what might happen if they asked students to complete the IDEA online outside of class (incentivized with an offer of extra credit to the class if the overall response rate reached a specific threshold). I don’t believe that any of these instructors’ classes failed to meet the established thresholds.

In addition, after a preliminary examination of comments that students provided, it appears that students actually may have written more comments with more detail than they previously provided on paper-and-pencil forms. This would seem to corroborate feedback from a few faculty members who indicated that their students were thankful that their comments would now be truly anonymous and no longer potentially identifiable given the instructor’s prior familiarity with the student’s handwriting.

Finally, in response to faculty concerns that the extended student access to their IDEA forms (i.e., students were able to enter data into their response forms until the end of finals no matter when they initially filled out their IDEA forms) might lead to students going back into the system and exacting revenge on instructors in response to a low grade on a final exam or paper, I did a little digging to see how likely this behavior might be. In talking to students about this option during week 10 of the term, I got two responses. Several international students said that they appreciated this flexibility because they had been unable to finish typing their comments in the time allotted in class; many international students (particularly first-year international students) find that it takes them much longer than domestic students to express complex thoughts in written English. I also got the chance to ask a class of 35(ish) students whether or not they were likely to go back into the IDEA online system and change a response several days after they had completed the form. After giving me a bewildered look for an uncomfortably long time, one student finally blurted out, “Why would we do that?” Upon further probing, the students said that they couldn’t imagine a situation where they would care enough to take the time to find the student portal and change their responses. When I asked, “Even if something happened at the end of the term, like a surprisingly bad grade on a test or a paper that you felt was unfair?”, they responded that by the end of the term they would already know what they thought of that instructor and that class. Even if they got a surprisingly low grade on a final paper or test, the students said that they would know the nature of that instructor and course long before the final test or paper.

To see if those students’ speculation about their own behavior matches IDEA’s own data, I talked to the CEO of IDEA to ask what proportion of students go back into the system and change their responses, and whether that was a question that faculty at other institutions had asked. He told me that he had heard that concern raised repeatedly since they introduced the online format. As a result, they have been watching that data point closely. Across all of the institutions that have used the online system over the last several years, only 0.6% of all students actually go back into the system and edit their responses. He did not know what proportion of that small minority altered their responses in a substantially negative direction.
Since the first of my three training sessions starts in about an hour, I’m going to stop now. But so far, it appears that moving to IDEA online has been a pretty positive thing for students and our data. Now I hope we can make the most of it for all of our instructors. So I better get to work prepping for this week!

Make it a good day,

Mark

“Not so fast!” said the data . . .

I’ve been planning to write about retaining men for several weeks. I had it all planned out. I’d chart the number of times in the past five years that male retention rates have lagged behind female retention rates, suggest that this might be an issue for us to address, clap my hands together, and publish the post. Then I looked closer at the numbers behind those pesky percentages and thought, “Now this will make for an interesting conversation.”

But first, let’s get the simple stuff out of the way. Here are the differences in retention rates for men and women over the last five years.

Cohort Year | Men | Women
2016 | 83.2% | 89.1%
2015 | 85.6% | 91.3%
2014 | 85.0% | 86.8%
2013 | 83.2% | 82.7%
2012 | 78.6% | 90.1%

It looks like a gap has emerged in the last four years, right? Just in case you’re wondering (especially if you looked more carefully at all five years listed in the table), “emerged” isn’t really the most accurate word choice. It looks like the 2013 cohort was more of an anomaly than anything else since the 2012 cohort experienced the starkest gap in male vs. female retention of any in the past five years. Looking back over the three years prior to the start of this table, this gap reappears within the 2011, 2010, and 2009 cohorts.

But in looking more closely at the number of men and women who enrolled at Augustana in each of those classes, an interesting pattern appears that adds at least one layer of complexity to this conversation. Here are the numbers of enrolled and retained men and women in each of the last five years.

Cohort Year | Men Enrolled | Men Retained | Women Enrolled | Women Retained
2016 | 304 | 253 | 393 | 350
2015 | 285 | 244 | 392 | 358
2014 | 294 | 250 | 432 | 375
2013 | 291 | 242 | 336 | 278
2012 | 295 | 232 | 362 | 326

Do you see what I see?  Look at the largest and smallest numbers of men enrolled and the largest and smallest numbers of men retained. In both cases, we are talking about a difference of about 20 male students (for enrolled men: 304 in 2016 for a high and 285 in 2015 for a low; for retained men, 253 in 2016 for a high and 232 in 2012 for a low). No matter the total enrollment in a given first-year class, these numbers seem pretty consistent. By contrast, look at the largest and smallest numbers of women enrolled and retained. The differences between the high and the low of either enrolled or retained women are much greater – by almost a factor of five.
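If you’d like to double-check those spreads (and the retention rates in the first table) for yourself, here’s a minimal sketch that recomputes them from the enrolled and retained counts above.

```python
# Recompute retention rates and high/low spreads from the counts above.
cohorts = {
    # year: (men enrolled, men retained, women enrolled, women retained)
    2016: (304, 253, 393, 350),
    2015: (285, 244, 392, 358),
    2014: (294, 250, 432, 375),
    2013: (291, 242, 336, 278),
    2012: (295, 232, 362, 326),
}

for year, (me, mr, we, wr) in sorted(cohorts.items(), reverse=True):
    print(f"{year}: men {mr / me:.1%}, women {wr / we:.1%}")

men_enrolled = [v[0] for v in cohorts.values()]
women_enrolled = [v[2] for v in cohorts.values()]
print("Spread in enrolled men:  ", max(men_enrolled) - min(men_enrolled))
print("Spread in enrolled women:", max(women_enrolled) - min(women_enrolled))
# The spread for men is about 20 students; for women it is roughly five
# times larger, which is the pattern described above.
```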

So what does it mean when we put these retention rate gaps and the actual numbers of men and women enrolled/retained into the same conversation? For me, this exercise is an almost perfect example of how quantitative data that is supposed to reveal deep and incontrovertible truth can actually do exactly the opposite. Data just isn’t clean, ever.

Situating these data within the larger conversation about male and female rates of educational attainment, our own findings begin to make some sense. Nationally, the educational attainment gap between men and women starts long before college. Men (boys) finish high school at lower rates than women. Men go to college at lower rates than women. Men stay in college at lower rates than women. And men graduate from college at lower rates than women. So when the size of our first-year class goes up, it shouldn’t be all that surprising that the increase in numbers is explained by a disproportionate increase in women.

Finally, we have long known (and should also regularly remind ourselves) that retention rates are a proxy for something more important: student success. And student success is an outcome of student engagement in the parts of the college experience that we know help students grow and learn. On this score, we have plenty of evidence to suggest that we ought to focus more of our effort on male students. I wrote about one such example last fall when we examined some differences between men and women in their approaches toward social responsibility and volunteering rates. A few years back, I wrote about another troubling finding involving a sense of belonging on campus among Black and Hispanic men.

I hope we can dig deeper into this question over the next several weeks.  I’ll do some more digging into our own student data and share what I find. Maybe you’ve got some suggestions about where I might look?

Make it a good day,

Mark

Big Data, Blindspots, and Bad Statistics

As some of you know, last spring I wrote a contrarian piece for The Chronicle of Higher Education that raised some cautions about unabashedly embracing big data. Since then, I’ve found two TED Talks that add to the list of reasons to be suspicious of an overreliance on statistics and big data.

Tricia Wang outlines the dangers of relying on historical data at the expense of human insight when trying to anticipate the future.

Mona Chalabi describes three ways to spot a suspect statistic.

Both of these presenters reinforce the importance of triangulating information from quantitative data, individual or small-group expertise, and human observation. Even then, all of this information can’t eliminate ambiguity. Any assertion of certainty is almost always one more reason to be increasingly skeptical.

So if you think I’m falling victim to either of these criticisms, feel free to call me out!

Make it a good day,

Mark

Improving Interfaith Understanding at Augustana

This is a massively busy week at Augie. We had a packed house of high school students visiting on Monday (I’ve never seen the cafeteria so full of people ever!), the Board of Trustees will gather on campus for meetings on Thursday and Friday, and hundreds of alumni and family will arrive for Homecoming over the weekend. With all of this hustle and bustle, you probably wouldn’t have noticed three unassuming researchers from the Interfaith Diversity Experiences and Attitudes Longitudinal Survey (IDEALS) quietly talking to faculty, staff, and students on Monday and Tuesday. They were on campus to find out more about our interfaith programs, experiences, and emphasis over the past several years.

Apparently, we are doing something right when it comes to improving interfaith understanding at Augustana. Back in the fall of 2015, our first-year cohort joined college freshmen from 122 colleges and universities around the country to participate in a 4-year study of interfaith understanding development. The study was designed to collect data from those students at the beginning of the first year, during the fall of the second year, and in the spring of the fourth year. In addition to charting the ways in which these students changed during college, the study was also constructed to identify the experiences and environments that influence this change.

As the research team examined the differences between the first-year and second-year data, an intriguing pattern began to emerge. Across the entire study, students didn’t change very much. This wasn’t so much of a surprise, really, since the Wabash National Study of Liberal Arts Education had found the same thing. However, unlike students across the entire study, Augustana students consistently demonstrated improvement on most of the measures in the study. This growth was particularly noticeable in areas like appreciative knowledge of different worldviews, appreciative attitudes toward different belief systems, and global citizenship. Although the effect sizes weren’t huge, a consistent pattern of subtle but noticeable growth suggested that something good might be happening at Augustana.

However, using some fancy statistical tricks to generate an asterisk or two (denoting statistical significance) doesn’t necessarily help us much in practical terms. Knowing that something happened doesn’t tell us how we might replicate it or how we might do it even better. This is where the qualitative ninjas need to go to work and talk to people (something us quant nerds haven’t quite figured out how to do yet). Guided by the number-crunching, the real gems of knowledge are more likely to be unearthed through focus groups and interviews where researchers can delve deep into the experiences and observations of folks on the ground.

So what did our visiting team of researchers find? They hope to have a report of their findings for us in several months. So far all I could glean from them is that Augustana is a pretty campus with A LOT of steps.

But there is a set of responses from the second-year survey data that might point in a direction worth contemplating. There is a wonderfully titled grouping of items called “Provocative Encounters with Worldview Diversity,” from which the responses to three questions seem to set our students’ experience apart from students across the entire study as well as students at institutions with a similar Carnegie Classification (Baccalaureate institutions – arts and sciences). In each case, we see a difference in the proportion of students who responded “all the time” or “frequently.”

  1. In the past year, how often have you had class discussions that challenged you to rethink your assumptions about another worldview?
    • Augustana students: 51%
    • Baccalaureate institutions: 43%
    • All institutions in the study: 33%
  2. In the past year, how often have you felt challenged to rethink your assumptions about another worldview after someone explained their worldview to you?
    • Augustana students: 44%
    • Baccalaureate institutions: 34%
    • All institutions in the study: 27%
  3. In the past year, how often have you had a discussion with someone of another worldview that had a positive influence on your perceptions of that worldview?
    • Augustana students: 48%
    • Baccalaureate institutions: 45%
    • All institutions in the study: 38%

In the past several years, there is no question that we have been trying to create these kinds of interactions through Symposium Day, Sustained Dialogue, course offerings, a variety of co-curricular programs, and increased diversity among our student body. Some of the thinking behind these efforts dates back six or seven years, to when we could see from our Wabash National Study data and our prior NSSE data that our students reported relatively fewer serious conversations with people who differed from them in race/ethnicity and/or beliefs/values. Since a host of prior research has found that these kinds of serious conversations across difference are key to developing intercultural competence (a skill that certainly includes interfaith understanding), it made a lot of sense for us to refine what we do so that we might improve our students’ gains on the college’s learning outcomes.

The response to the items above suggests to me that the conditions we are trying to create are indeed coming together. Maybe, just maybe, we have successfully designed elements of the Augustana experience that are producing the learning that we aspire to produce.

It will be very interesting to see what the research team ultimately reports back to us. But for now, I think it’s worth noting that there seems to be early evidence that we have implemented intentionally designed experiences that very well might be significantly impacting our students’ growth.

How about that?!

Make it a good day,

Mark