What experiences improve our students' inclination toward complex thinking?

I’ve always been impressed by the degree to which the members of Augustana’s Board of Trustees want to understand the sometimes dizzying complexities that come with trying to nudge, guide, and redirect the motivations and behaviors of young people on the cusp of adulthood. Each board member I talk to seems to genuinely enjoy thinking about these kinds of complicated, even convoluted, challenges and the implications they might hold for the college and our students.

This eagerness to wrestle with ambiguous, intractable problems exemplifies the intersection of two key Augustana learning outcomes that we aspire to develop in all of our students. We want our graduates to have developed incisive critical thinking skills and we want to have cultivated in them a temperament that enjoys applying those analytical skills to solve elusive problems.

Last spring Augustana completed a four-year study of one aspect of intellectual sophistication. We chose to measure the nature of our students’ growth using a survey instrument called the Need for Cognition Scale, which assesses one’s inclination to engage in thinking about complex problems or ideas. Earlier in the fall, I presented our findings regarding our students’ growth between their initial matriculation in the fall of 2013 and their graduation in the spring of 2017 (summarized in a subsequent blog post). We found that:

  1. Our students developed a stronger inclination toward thinking about complex problems. The extent of our students’ growth mirrored the growth we saw in an earlier cohort of Augustana students who participated in the Wabash National Study between 2008 and 2012.
  2. Different types of students (defined by pre-college characteristics) grew similar amounts, although not all students started and finished with similar scores. Specifically, students with higher HS GPA or ACT/SAT scores started and finished with higher Need for Cognition scores than students with lower HS GPA or ACT/SAT scores.

But, as with any average change-over-time score, there are lots of individual cases scattered above and below that average. In many ways, that is often where the most useful information is hidden. If the individuals whose change-over-time scores fall above (or below) the average are similar to each other in some other way, teasing out the nature of that similarity can help us figure out what we could do more of (or less of) to help all students grow.

At the end of our first presentation, we asked for as many hypotheses as folks could generate about experiences that they thought might help or hamper gains on the Need for Cognition Scale. Then we went to work testing every hypothesis we could possibly test. Taylor Ashby, a student working in the IR office, did an incredible job taking on this monstrous task. After several months of pulling datasets together, constructing new variables to approximate many of the hypotheses we were given, and running all kinds of statistical analyses, we made a couple of pretty interesting discoveries that could help Augustana get even better at developing our students’ inclination or interest in thinking about complex problems or ideas.

To make sense of all of the hypotheses that folks suggested, we organized them into two categories: participation in particular structured activities (e.g., being in the choir or completing a specific major) and experiences that could occur across a range of situations (e.g., reflecting on the impact of one’s interactions across difference or talking with faculty about theories and ideas).

First, we tested all of the hypotheses about participation in particular structured activities. We found that five specific activities produced positive, statistically significant effects:

  • service learning
  • internships
  • research with faculty
  • completing multiple majors
  • volunteering when it was not required (as opposed to volunteering when obligated by membership in a specific group)

In other words, students who did one or more of these five activities tended to grow more than students who did not. This turned out to be true regardless of the student’s race/ethnicity, sex, socioeconomic status, or pre-college academic preparation. Furthermore, each of these experiences produced a unique, statistically significant effect when they were all included in the same equation. This suggests the existence of a cumulative effect: students who participated in all of these activities grew more than students who only participated in some of them.
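For the statistically inclined, here is a minimal sketch of the kind of model that supports statements like these: regressing a senior outcome on activity indicators while controlling for the starting score and pre-college characteristics. The file name, column names, and the use of statsmodels are my own illustrative assumptions, not the actual code behind our analyses.

```python
# Minimal sketch (not the actual IR analysis): an OLS model in which each
# activity indicator gets its own coefficient while the first-year score and
# pre-college characteristics are held constant. All names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nfc_study.csv")  # hypothetical file: one row per student

model = smf.ols(
    "nfc_senior ~ nfc_firstyear"
    " + service_learning + internship + faculty_research"
    " + multiple_majors + voluntary_volunteering"
    " + C(race_ethnicity) + C(sex) + ses + hs_gpa + act_sat",
    data=df,
).fit()

# A positive, significant coefficient on an activity term is the "unique,
# statistically significant effect" described in the paragraph above.
print(model.summary())
```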

Second, we tested all of the hypotheses that focused on more general experiences that could occur in a variety of settings. Four experiences appeared to produce positive, statistically significant effects.

  • The frequency of discussing ideas from non-major courses with faculty members outside of class.
  • Knowledge among faculty in a student’s major of how to prepare students to achieve post-graduate plans.
  • Faculty interest in helping students grow in more than just academic areas.
  • The degree to which one-on-one interactions with faculty positively influenced students’ intellectual growth and interest in ideas.

In addition, we found one effect that sort of falls in between the two categories described above. Remember that having a second major appeared to produce a positive effect on the inclination to think about complex problems or ideas? Well, within that finding, Taylor discovered that students who said that faculty in their second major emphasized applying theories or concepts to practical problems or new situations “often” or “very often” grew even more than students who simply reported a second major.

So what should we make of all these findings? And equally important, how do we incorporate these findings into the way we do what we do to ensure that we use assessment data to improve?

That will be the topic of the spring term Friday Conversation with the Assessment for Improvement Committee.

Make it a good day,

Mark

Should the male and female college experience differ?

The gap between males and females at all levels of educational attainment paints a pretty clear picture. Males complete high school at lower rates than females. Of those who finish high school, males enroll in college at lower rates than females. This pattern continues in college, where men complete college at lower rates than women. Of course, some part of the gap in college enrollment is a function of the gap in high school completion, and some part of the gap in college completion is a function of the gap in college enrollment. But overall, it still seems apparent that something troubling is going on with boys and young men in terms of educational attainment. Yet, looking solely at these outcome snapshots does very little to help us figure out what we might do if we were going to reverse these trends.

A few weeks ago, I dug into some interesting aspects of the differences in our own male and female enrollment patterns at Augustana, because understanding the complexity of the problem is a necessary precursor to actually solving it. In addition, last year I explored some differences between men and women in their interest in social responsibility and volunteering behaviors. Today, I’d like to share a few more differences that we see between male and female seniors in their responses to senior survey questions about their experience during college.

Below I’ve listed four of the six senior survey questions that specifically address aspects of our students’ co-curricular experience. In each case, there are five response options ranging from strongly disagree (1) to strongly agree (5). Each of the differences shown below between male and female responses is statistically significant.

  • My out-of-class experiences have helped me connect what I learned in the classroom with real-life events.
    • Men – 3.86
    • Women – 4.17
  • My out-of-class experiences have helped me develop a deeper understanding of myself.
    • Men – 4.10
    • Women – 4.34
  • My out-of-class experiences have helped me develop a deeper understanding of how I interact with someone who might disagree with me.
    • Men – 4.00
    • Women – 4.28
  • My co-curricular involvement helped me develop a better understanding of my leadership skills.
    • Men – 4.14
    • Women – 4.35

On one hand, we can take some comfort in noting that the average responses in all but one case equate with “agree.” However, when we find a difference across an entire graduating class that is large enough to be statistically significant, we need to take, at the very least, a second look.
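For readers curious about what “statistically significant” means in practice for items like these, here is a minimal sketch of one common way to check a male/female difference in means. The simulated responses and the choice of a Welch t-test are my assumptions; the actual analysis may have been done differently.

```python
# Minimal sketch of a significance check on one 1-5 survey item. The response
# data below are simulated placeholders, not the actual senior survey export.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
men = np.clip(np.round(rng.normal(3.86, 0.9, size=250)), 1, 5)
women = np.clip(np.round(rng.normal(4.17, 0.8, size=300)), 1, 5)

# Welch's t-test does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(men, women, equal_var=False)
print(f"men: {men.mean():.2f}  women: {women.mean():.2f}  p-value: {p_value:.4f}")
```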

Why do you think these differences are appearing in our senior survey data? Is it just a function of the imprecision that comes with survey data? Maybe women tend to respond in rosier terms right before graduation than men do? Or maybe there really is something going on here that we need to address. One way to test that possibility is to ask whether there might be other evidence, anecdotal or otherwise qualitative, that corroborates these findings. Certainly, the prior evidence I’ve noted and linked above should count for something, but that evidence also comes from senior survey data.

Recent research on boys and young men suggests that these differences in our data should not come as a surprise (for a small sample of the scholarship on men’s issues, check out the books Guyland and Angry White Men, or a TED Talk by Philip Zimbardo; I found a free pdf of Guyland!). This growing body of scholarship suggests that the differences we might see between males and females begin to emerge long before college, but it also suggests that we are not powerless to reverse some of the disparity.

At the board meetings this weekend, we will be talking about some of these issues. In the meantime, what do you think? And if you think that these differences in our data ought to be taken seriously, does it mean that we ought to construct educationally appropriate variations in the college experience for men and women?

I’d love to read what you think as you chew on this.

Make it a good day,

Mark

Anticipating what our students need to know is SO complicated!

Over the last few weeks, I’ve been wrestling with a couple of data trends and their accompanying narratives that seem pretty important for colleges like ours. However, unlike most posts in which I pretend to have some answers, this time I’m just struggling to figure out what it all means. So this week, I’m going to toss this discombobulated stew in your lap and hope you can help me sort it all out (or at least clean up some of the mess!).

First, the pressure on colleges to prepare their students to graduate with substantial “work readiness” appears to be at an all-time high. The Gallup Organization continues to argue that employers don’t think college graduates are well-prepared for success in the workplace. Even though there is something about the phrase “work readiness” that makes me feel like I just drank sour milk, we have to admit that preparing students to succeed in a job matters, especially when student loan debt is now such a large, and often frightening, part of the calculus that determines if, and where, a family can send their kids to college. Put all this together and it’s no wonder that students overwhelmingly say that the reason they want to go to college is to get a good-paying job.

Underneath all of this lies a pretty important assumption about what the world of work will be like when these students graduate. Student loans take, on average, 21 years to pay off, and the standard repayment agreement for a federal student loan is a 10-year plan. So it would seem reasonable that students, especially those who take out loans to pay for college, would anticipate that the job for which college prepares them should in most cases outlast the time it takes for them to pay off their loans. I’m not saying that everyone thinks this through completely, but I think most folks are assuming a degree of stability and income in the job they hope to obtain after earning a college degree, making the loans that they take out to pay for college a pretty safe bet.
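To make that arithmetic concrete, here is a back-of-the-envelope sketch of the standard 10-year federal repayment plan. The $30,000 balance and 5% interest rate are numbers I picked for illustration, not figures from this post.

```python
# Back-of-the-envelope monthly payment under a standard 10-year repayment
# plan, using the ordinary loan amortization formula. Balance and rate are
# illustrative assumptions.
principal = 30_000       # assumed loan balance at graduation
annual_rate = 0.05       # assumed fixed annual interest rate
months = 10 * 12         # standard 10-year federal repayment term

monthly_rate = annual_rate / 12
payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

print(f"Monthly payment: ${payment:,.2f}")           # roughly $318 per month
print(f"Total repaid:    ${payment * months:,.2f}")  # roughly $38,000 over 10 years
```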

But this is where it gets dicey. The world of work has been undergoing a seismic shift over the past several decades. The most recent report from the Bureau of Labor Statistics suggests that, on average, a person can expect to have 12 jobs between the ages of 18 and 50. What’s more, the majority of those job changes occur between the ages of 18 and 34 – the same period of time during which one would be expected to pay off a student loan. Moreover, between 2005 and 2015, almost all of the jobs added to the economy fit into a category called “alternative work.” This category of work includes contract labor, independent work, and any sort of temporary job (in addition to the usual suspects, think Turo, Lyft, or TaskRabbit). Essentially, these are jobs that are either spun as “providing wonderful flexibility” or depressingly described as depending on “the whim of the people.” As with so many other less-than-attractive realities, someone put a bow on it and labeled this whole movement “the gig economy” (sounds really cool except there’s no stage lighting or rock and roll glamor). It’s no surprise that the gig economy presents a rather stark set of downsides for individuals who choose it (or get sucked into it by circumstances beyond their control).

So what does all of this mean for colleges like ours that are (whether we like it or not) obligated to focus a lot of our attention on preparing students for a successful professional life?  I don’t have many great answers to this one. But a couple of questions seem pretty important:

  • To what degree are we responsible for ensuring that our students are financially literate and can manage through the unpredictability that seems likely for many early in their career?
  • What knowledge, skills, or dispositions should we prioritize to help our students thrive in a professional life that is almost certain to include instability, opportunity, and unexpected change?

Of all the possible options that an 18-year-old could sign up for, a small liberal arts college seems like it ought to be the ideal place for learning how to navigate, even transcend, the turbulent realities that seem more and more an unavoidable part of the world of work. But without designing what we do so that every student has to encounter this stuff, we leave that learning up to chance. And as usual, the students who most need to learn this stuff are the ones who are least likely to find it on their own. Looks like we better roll up our sleeves and get to work!

Make it a good day,

Mark

Sometimes you find a nugget where you least expect it

As many of you already know, data from the vast majority of the college ranking services is not particularly applicable to improving the day-to-day student experience. In many cases, this is because those who construct these rankings rely on “inputs” (i.e., information about the resources and students that come to the institution) and “outputs” (i.e., graduation rates and post-graduate salaries) rather than any data that captures what happens while students are actually enrolled in college.

But just recently I came across some of the data from the Wall Street Journal/Times Higher Education College Rankings that surprised me. Although this ranking is still (in my opinion) far too dependent on inputs and outputs, 20% of their underlying formula comes from a survey of current students. In this survey, they ask some surprisingly reasonable questions about the college experience, the responses to which might provide some useful information for us.

Here is a list of those questions; the shortened label that I’ll use in the table below comes from a key phrase in each question.

  • To what extent does your college or university provide opportunities for collaborative learning?
  • To what extent does the teaching at your university or college support critical thinking?
  • To what extent does the teaching at your university or college support reflection upon, and making connections among, things you have learned?
  • To what extent does the teaching at your university or college support applying your learning to the real world?
  • To what extent did the classes you took in your college or university so far challenge you?
  • If a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to recommend your college or university to them?
  • Do you think your college is effective in helping you to secure valuable internships that prepare you for your chosen career?
  • To what extent does your college or university provide opportunities for social engagement?
  • Do you think your college provides an environment where you feel you are surrounded by exceptional students who inspire and motivate you?
  • To what extent do you have the opportunity to interact with the faculty and teachers at your college or university as part of your learning experience?

Below is a table comparing the average responses of Augustana students with the average responses of students at other US institutions. Although I haven’t been able to confirm it by checking the actual survey, it appears that the response options for each item consist of a 1-10 scale on which the participant can place their response to each question.

Question | Augustana Average | Top US Institution | 75th Percentile US Institution | Median US Institution | 25th Percentile US Institution | Bottom US Institution
Collaborative Learning | 8.5 | 9.5 | 8.4 | 8.1 | 7.7 | 6.7
Critical Thinking | 8.8 | 9.6 | 8.7 | 8.3 | 8.0 | 7.1
Connections | 8.5 | 9.4 | 8.5 | 8.2 | 7.9 | 7.0
Applying Learning | 8.4 | 9.4 | 8.5 | 8.1 | 7.8 | 6.8
Challenge | 8.2 | 9.4 | 8.6 | 8.3 | 8.0 | 7.2
Recommend | 8.6 | 9.8 | 8.7 | 8.3 | 7.8 | 6.7
Prepare | 8.3 | 9.4 | 8.3 | 7.8 | 7.4 | 6.2
Social | 8.9 | 9.7 | 8.7 | 8.5 | 8.1 | 7.2
Inspire | 8.0 | 9.3 | 8.1 | 7.7 | 7.2 | 6.0
Interact | 9.3 | 10.0 | 9.2 | 8.9 | 8.4 | 7.3

Two things stand out to me in the table above. First, our students’ average responses compare quite favorably to the average responses from students at other institutions.  On six of the ten items, Augustana’s average student response equaled or exceeded the 75th percentile of all US institutions. On three of the remaining four items, Augustana students’ average response fell just short of the 75th percentile by a tenth of a point.

Second, our students’ response to one question – the degree to which they felt challenged by the classes they have taken so far – stands out like a sore thumb. Unlike the rest of the data points, Augustana’s average student response falls a tenth of a point below the median of all US institutions. Compared to the relative strength of all of our other average response scores, the “challenge” score seems . . . curious.

Before going any further, it’s important to take into account the quality of the data used to generate these averages. The Wall Street Journal/Times Higher Education says that they got responses from over 200,000 students, so if they want to make claims about overall average responses they’d be standing on pretty solid ground. However, they are trying to compare individual institutions against one another, so what matters is how many responses they received from students at each institution and to what degree those responses represent all students at each institution. Somewhere in the smaller print farther down the page that explains their methodology, they state that in most cases they received between 50 and 100 responses from students at each institution (institutions with fewer than 50 responses were not included in the rankings). Wait, what? Given the total enrollments at most of the colleges and universities included in these rankings, 100 responses would represent less than 10% of all students at most of these institutions – in many cases far less than 10%. So we ought to approach the comparative results with a generous dose of skepticism.
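To put that skepticism in rough numbers, here is a quick sketch of the sampling error you would expect for an institutional average on a 1-10 item with only 50-100 respondents. The standard deviation of 1.5 is my assumption.

```python
# Approximate 95% margin of error for a mean on a 1-10 scale item, at the
# 50-100 responses per institution the methodology describes. The standard
# deviation is an assumed value for illustration.
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a mean."""
    return z * sd / math.sqrt(n)

for n in (50, 100, 200_000):
    print(f"n = {n:>7}: +/- {margin_of_error(1.5, n):.2f} points")
# At n = 50 or 100, the uncertainty is a few tenths of a point, which is the
# same order of magnitude as most of the institution-to-institution gaps above.
```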

However, that doesn’t mean we should dismiss the entirety of this data outright. In my mind, the findings from our own students ought to make us very curious. Why would data from a set of about 100 Augustana students (we received responses from 87 students who, upon further examination, turn out to be mostly first-year, female, pretty evenly scattered across different intended majors, and almost all from the state of Illinois) produce such a noticeable gap between all of the other items on this survey and the degree to which our students feel challenged by their courses?

This is exactly why I named this blog “Delicious Ambiguity.” This is messy data. It definitely doesn’t come with a pre-packaged answer. One could point out several flaws in the Augustana data set (not to mention the entirety of this ranking system) and make a reasonable case for dismissing the whole thing. Yet, it seems like there is something here that isn’t nothing. So the question I’d ask you is this: are there other things going on at Augustana that might increase the possibility that some first-year students would not feel as challenged as they should? Remember, we aren’t talking about a dichotomy of challenged or not challenged. We are talking about degrees of quality and nuance that are the lifeblood of improving an already solid institution.

Make it a good day,

Mark

Measures, Targets, and Goodhart’s Law

’Tis the season to be tardy, fa-la-la-la-la…la-la-la-la!

I’m reasonably jolly, too, but this week seems just a little bit rushed. Nonetheless, y’all deserve something decent from Delicious Ambiguity this week, so I’m going to put forth my best effort.

I stumbled across an old adage last weekend that seems remarkably apropos given my recent posts about retention rates at Augustana. The phrase is most often called “Goodhart’s Law,” although the concept has popped up in a number of different disciplines over the last century or so.

“When a measure becomes a target, it ceases to be a good measure.”

You can brush up on a quick summary of this little nugget on Wikipedia here, but if you want to have more fun I suggest that you take the time to plunge yourself into this academic paper on the origin of the idea and its subsequent applications here.

Although Goodhart’s Law emerged in the context of monetary policy, there are more than a few well-written examples of its application to higher ed. Jon Boekenstedt at DePaul University lays out a couple of great examples here that we still see in the world of college admissions. In all of the instances where Goodhart’s Law has produced almost absurd results (hilarious if they weren’t so often true), the takeaway is the same. Choosing a metric (a simple outcome) to judge the performance (a complex process) of an organization sets in motion behaviors by individuals within that organization that will inevitably play to the outcome (the metric) rather than the performance (the process) and, as a result, corrupt the process that was supposed to lead to that outcome.

So when we talk about retention rates, let’s remember that retention rates are a proxy for the thing we are actually trying to achieve.  We are trying to achieve student success for all students who enroll at Augustana College, and we’ve chosen to believe that if students return for their second year, then they are succeeding.

But we know that life is a lot more complicated than that. And scholars of organizational effectiveness note that organizations are less likely to fall into the Goodhart’s Law trap if they identify measures that focus on underlying processes that lead to an outcome (one good paper on this idea is here). So, even though we shouldn’t toss retention rates onto the trash heap, we are much more likely to truly accomplish our institutional mission if we focus on tracking the processes that lead to student success; processes that are also, more often than not, likely to lead to student retention.

Make it a good holiday break,

Mark

Two numbers going in the right direction. Are they related?

It always seems like it takes way too long to get the 10th-day enrollment and retention numbers for the winter term. Of course, that is because the Thanksgiving holiday pushes the whole counting of days into the third week of the term and . . . you get the picture.  But now that we’ve got those numbers processed and verified, we’ve got some good news to share.

Have a look at the last four years of fall-to-winter term retention rates for students in the first-year cohort –

  • 14/15 – 95.9%
  • 15/16 – 96.8%
  • 16/17 – 96.7%
  • 17/18 – 97.4%

What do those numbers look like to you? Whatever you want to call it, it looks to me like something good. Right away, this improvement in the proportion of first-year students returning for the winter term equates to about $70,000 in net tuition revenue that we wouldn’t have seen had this retention rate remained the same over the last four years.
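As a rough check on that figure, here is the back-of-the-envelope arithmetic. The cohort size and per-student net tuition value are assumptions I chose to make the example concrete; the $70,000 estimate above comes from our actual numbers.

```python
# Back-of-the-envelope version of the revenue estimate. Cohort size and the
# net tuition still to be billed after fall term are illustrative assumptions.
cohort_size = 700                 # assumed size of a first-year class
net_tuition_after_fall = 6_500    # assumed net tuition per student, winter + spring

rate_2014 = 0.959  # 14/15 fall-to-winter retention
rate_2017 = 0.974  # 17/18 fall-to-winter retention

extra_students = cohort_size * (rate_2017 - rate_2014)
print(f"~{extra_students:.0f} additional students retained")
print(f"~${extra_students * net_tuition_after_fall:,.0f} in additional net tuition revenue")
```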

Although stumbling onto a positive outcome (albeit an intermediate one) in the midst of producing a regular campus report makes for a good day in the IR office, it gets a lot better when we can find a similar sequence of results in our student survey data. That is how we start to figure out which of the things we are doing to help our students correlate with evidence of increased student success.

About six weeks into the fall term, first-year students are asked to complete a relatively short survey about their experiences so far. Since this survey is embedded into the training session that prepares these students to register for winter classes, the response rate is pretty high. The questions in the survey focus on the academic and social experiences that would help a student acclimate successfully. One of those items, added in 2013, asks about the degree to which students had access to grades or other feedback that allowed them to adjust their study habits or seek help as necessary. In previous years, we’ve found this item to correlate with students’ sense of how hard they work to meet academic expectations.

Below I’ve listed the proportion of first-year students who agree or strongly agree that they had access to sufficient grades or feedback during their first term. Compare the way this data point changes over the last four years to the fall-to-winter retention rates I listed earlier.

  • 14/15 – 39.6%
  • 15/16 – 53.3%
  • 16/17 – 56.4%
  • 17/18 – 75.0%

Obviously, both of these data points trend in the same direction over the past four years. Moreover, both of these trends look similar in that they jump a lot between the 1st and 2nd year, remain relatively flat between the 2nd and 3rd year, and jump again between the 3rd and 4th year.

I can’t prove that improved early academic feedback is producing improved fall-to-winter term retention. The evidence that we have is correlational, not causal. But we know enough to know that an absence of feedback early in the term hurts those students who either need to be referred for additional academic work or need to be shocked into more accurately aligning their perceived academic ability with their actual academic ability. We began to emphasize this element of course design (i.e., creating mechanisms for providing early term feedback about academic performance) because other research on student success (as well as our own data) suggested that this might be a way to improve student persistence.
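For what it’s worth, here is a minimal sketch of what that correlational evidence looks like if you simply line up the two four-year trends from this post. With only four cohorts, the number it prints is suggestive at best.

```python
# Lining up the two trends reported in this post. Four data points can only
# ever be suggestive; this is correlational evidence, not proof of causation.
from scipy import stats

fall_to_winter_retention = [95.9, 96.8, 96.7, 97.4]  # 14/15 through 17/18, percent
early_feedback_agreement = [39.6, 53.3, 56.4, 75.0]  # percent agree/strongly agree

r, p = stats.pearsonr(early_feedback_agreement, fall_to_winter_retention)
print(f"Pearson r = {r:.2f} across {len(fall_to_winter_retention)} cohorts")
```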

Ultimately, I think it’s fair to suggest that something we are doing more often may well be influencing our students’ experience. At the very least, it’s worth taking a moment to feel good about both of these trends. Both data points suggest that we are getting better at what we do.

Make it a good day,

Mark

Ideals, Metrics, and Myths (oh no!)

Educators have always been idealists. We choose to believe what we hope is possible, and that belief often keeps us going when things aren’t going our way. It’s probably what drove many of us to finish a graduate degree and what drives us to put our hearts into our work despite all the discouraging news about higher ed these days.

But an abundance of unchecked idealism can also be a dangerous thing. The very same passion that can drive one to achieve can also blind one into believing something just because it seems like it ought to be so. Caught up in a belief that feels so right, we are often less likely to scrutinize the metrics we choose to measure ourselves by or compare ourselves to others with. Worse still, our repeated use of these unexamined metrics can become etched into institutional decision-making. Ultimately, the power of belief that once drove us to overcome imposing challenges can become our Achilles heel, because we are absolutely certain of things that may, in fact, not be so.

For decades, colleges have tracked the distribution of their class sizes (i.e., the number of classes enrolling 2-9, 10-19, 20-29, 30-39, 40-49, 50-99, and more than 100 students) as a part of something called the Common Data Set. The implication behind tracking this data point is that a higher proportion of smaller classes ought to correlate with a better learning environment. Since the mid-1980s, the U.S. News and World Report rankings of colleges and universities have included this metric in their formula, distilling it down to two numbers: the proportion of classes at an institution with 19 or fewer students (more is better) and the proportion of classes at an institution with 50 or more students (less is better). Two years ago U.S. News added a twist by creating a sliding scale so that classes of 19 or fewer received the most credit, classes with 20-29, 30-39, and 40-49 students received proportionally less credit, and classes of over 50 received no credit. Over time these formulations have produced a powerful mythology across many postsecondary institutions: classes with 19 or fewer students are better than classes with 20 or more.

This raises a pretty important question: are those cut points (19/20, 29/30, etc.) grounded in anything other than an arbitrary application of the Roman numbering system?

Our own fall term IDEA course feedback data provides an opportunity to test the validity of this metric. The overall distribution of class sizes is almost perfectly bell-shaped, with almost 80% of courses receiving a robust response rate. Moreover, IDEA’s aggregate dataset allows us to compare three useful measures of the student learning experience across all courses: a student-reported proxy of learning gains called the “progress on relevant objectives” (PRO) score (for a short explanation of the PRO score with additional links for further information, click here), the student perception of the instructor, and the student perception of the course. The table below spells out the average response scores for each measure across eight different categories of class size. Each average score comes from a 5-point response scale (converted to a range of 1-5). The PRO score response options range from “no progress” to “exceptional progress,” and the perception of instructor and course excellence response options range from “definitely false” to “definitely true” (to see the actual items on the survey, click here). For this analysis, I’ve only included courses that exceed a two-thirds (66.67%) response rate.

Class Size | PRO Score | Excellent Teacher | Excellent Course
6-10 students (35 classes) | 4.24 | 4.56 | 4.38
11-15 students (85 classes) | 4.12 | 4.38 | 4.13
16-20 students (125 classes) | 4.08 | 4.29 | 4.01
21-25 students (71 classes) | 4.18 | 4.40 | 4.27
26-30 students (37 classes) | 4.09 | 4.31 | 4.18
31-35 students (9 classes) | 3.90 | 4.13 | 3.81
36-40 students (11 classes) | 3.64 | 3.84 | 3.77
41 or more students (8 classes) | 3.90 | 4.04 | 3.89

First, classes enrolling 6-10 students appear to produce notably higher scores on all three measures than any other category. Second, it doesn’t look like there is much difference between subsequent categories until we get to classes enrolling 31 or more students (further statistical testing supports this observation). Based on our own data (and assuming that the fall 2017 data does not differ significantly from other academic terms), if we were going to replicate the notion that class size distribution correlates with the quality of the overall learning environment, we might be inclined to choose only two cut points, creating three categories of class size: classes with 10 or fewer students, classes with between 11 and 30 students, and classes with more than 30 students.
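This post doesn’t spell out the “further statistical testing” mentioned above, but a minimal sketch of one way to run that kind of check, under assumed column names and a hypothetical data export, might look like this.

```python
# Sketch of a class-size comparison on course-level PRO scores. The file name,
# column names, and the choice of a one-way ANOVA are illustrative assumptions.
import pandas as pd
from scipy import stats

courses = pd.read_csv("idea_fall2017_courses.csv")     # hypothetical export
courses = courses[courses["response_rate"] >= 2 / 3]   # mirror the 2/3rds filter

def size_band(enrollment: int) -> str:
    if enrollment <= 10:
        return "10 or fewer"
    if enrollment <= 30:
        return "11-30"
    return "31 or more"

courses["band"] = courses["enrollment"].apply(size_band)

print(courses.groupby("band")["pro_score"].mean().round(2))
groups = [g["pro_score"].values for _, g in courses.groupby("band")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"One-way ANOVA across bands: F = {f_stat:.2f}, p = {p_value:.4f}")
```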

However, further examination of the smallest category of classes indicates that these courses are almost entirely upper-level major courses. Since we know that all three metrics tend to score higher for upper-level major courses because the students in them are more intrinsically interested in the subject matter than students in lower-level courses (classes that often also meet general education requirements), we can’t attribute the higher scores for this group to class size per se. This leaves us with two general categories: classes with 30 or fewer students, and classes with more than 30 students.

How does this comport with the existing research on class size? Although there isn’t much out there, two brief overviews (here and here) don’t find much of a consensus. Some studies suggest that class size is not relevant, others find a positive effect on the learning experience as classes get smaller, and a few others indicate a slight positive effect as classes get larger(!). A 2013 essay that spells out some findings from IDEA’s extensive dataset suggests that, especially in light of developments in pedagogy and technology over the past two decades, other factors almost certainly complicate the relationship between class size and student learning.

So what do we do with all this? Certainly, mandating that all class enrollments sit just below 30 would be, um, stupid. There is a lot more to examine before anyone should march out onto the quad and declare a “class size” policy. One finding from researchers at IDEA that might be worth exploring on our own campus is the variation of learning objectives selected and achieved by class size. IDEA found that smaller classes might be more conducive to more complex (sometimes called “deeper”) learning objectives, while larger classes might be better suited for learning factual knowledge, general principles, or theories. If class size does, in fact, set the stage for different learning objectives, it might be worth assessing the relationship between learning objectives and class size at Augustana to see if we are taking full advantage of the learning environment that a smaller class size provides.

And what should we do about the categories of class sizes that U.S. News uses in its college rankings formula? As family incomes remain stagnant, tuition revenue continues to lag behind institutional budget projections, and additional resources seem harder to come by, that becomes an increasingly valid question. Indeed, there might be a circumstance in which an institution ought to continue using the Common Data Set class size index to guide the way that it fosters an ideal classroom learning environment. And it is certainly reasonable to take other considerations (e.g., faculty workload, available classroom space, the intended learning outcomes of a course) into account when determining an institution’s ideal distribution of class enrollments. But if institutional data suggests that there is little difference in the student learning experience between classes with 16-20 students and classes with 21-25 students, it might be worth revisiting the rationale that an institution uses to determine its class size distribution. No matter what an institution chooses to do, it seems like we ought to be able to justify our choices based on the most effective learning environment we can construct rather than an arbitrarily defined and externally imposed metric.

Make it a good day,

Mark

Have a wonderful Thanksgiving!

A short post for a short week . . .

We talk a lot about the number of students at Augustana who have multiple talents and seem like they will succeed in life no matter what they choose to do.  So many of them seem to qualify as “MultiPotentialites”.

Although it makes sense that we would first notice this phenomenon among our students, I think we might be missing another group of particularly gifted folks all around us. So many of you, the Augustana faculty and staff, have unique talents, insightful perspectives, and unparalleled interpersonal skills that make us good at what we do. Almost every day I see someone step into a gap and take care of something that just needs to get done. Maybe we are just Midwestern humble, or maybe we are just so busy scrambling to put out one fire after another that we never really get the chance to pause and see the talent we all bring to this community.

So I want to make sure that I thank all of you.  I know this might sound hokey.  Maybe it is.

So what.

Make it a good Thanksgiving weekend.

Mark

Some anecdotes and data snippets from our first experience with the IDEA online course feedback system

Welcome to Winter Term! Maybe some of you saw the big snowflakes that fell on Sunday morning. Even though I know I am in denial, it is starting to feel like fall might have slipped from our collective grasp over the past weekend.

But on the bright side (can we get some warmth with that light?), during the week-long break between fall and winter term, something happened that had not happened since we switched to the IDEA course feedback system. Last Wednesday morning, only 48 hours after you had entered your final grades, your IDEA course feedback was already processed and ready to view. All you had to do was log in to your faculty portal and check it out! (You can find the link to the IDEA Online Course Feedback Portal on your Arches faculty page.)

I’m sure I will share additional observations and data points from our first experience with the online system during one of the three “Navigating your Online IDEA Feedback Report” sessions this week. A not-so-subtle hint: come to Olin 109 on Monday, Tuesday, or Thursday (Nov. 13, 14, or 16) at or just after 4 PM to walk through the online feedback reports and maybe learn one or two cool tricks with the data. Bring a laptop if you’ve got one, just in case we run out of computer terminals.

But in the meantime, I thought I’d share a couple of snippets that I found particularly interesting from our first online administration.

First, it seems that no news about problems logging in to the system turned out to be extremely good news. I was fully prepped to solve all kinds of connectivity issues and brainstorm all sorts of last-minute solutions. But I only heard from one person about one class having trouble getting on to the system . . . and that was when the internet was down all over campus for about 45 minutes. Otherwise, it appears that folks were able to administer the online course feedback forms in class or get their students to complete them outside of class with very little trouble. Even in the basement of Denkmann! This doesn’t mean that we won’t have some problems in the future, but at least with one term under our collective belt . . . maybe the connectivity issue isn’t nearly as big as we worried it might be.

Second, our overall student response rates were quite strong. Of the 467 course sections that could have administered IDEA online, about 74% achieved a response rate of 75% or higher. Furthermore, several instructors tested what might happen if they asked students to complete the IDEA forms online outside of class (incentivized with an offer of extra credit to the class if the overall response rate reached a specific threshold). I don’t believe that any of these instructors’ classes failed to meet the established thresholds.

In addition, after a preliminary examination of comments that students provided, it appears that students actually may have written more comments with more detail than they previously provided on paper-and-pencil forms. This would seem to corroborate feedback from a few faculty members who indicated that their students were thankful that their comments would now be truly anonymous and no longer potentially identifiable given the instructor’s prior familiarity with the student’s handwriting.

Finally, in response to faculty concerns that the extended student access to their IDEA forms (i.e., students were able to enter data into their response forms until the end of finals, no matter when they initially filled out their IDEA forms) might lead to students going back into the system and exacting revenge on instructors in response to a low grade on a final exam or paper, I did a little digging to see how likely this behavior might be. In talking to students about this option during week 10 of the term, I got two responses. Several international students said that they appreciated the flexibility because they had been unable to finish typing their comments in the time allotted in class; many international students (particularly first-year international students) find that it takes them much longer than domestic students to express complex thoughts in written English. I also got the chance to ask a class of 35(ish) students whether or not they were likely to go back into the IDEA online system and change a response several days after they had completed the form. After giving me a bewildered look for an uncomfortably long time, one student finally blurted out, “Why would we do that?” Upon further probing, the students said that they couldn’t imagine a situation in which they would care enough to take the time to find the student portal and change their responses. When I asked, “Even if something happened at the end of the term, like a surprisingly bad grade on a test or a paper that you felt was unfair?” the students responded that by the end of the term they would already know what they thought of that instructor and that class. Even if they got a surprisingly low grade on a final paper or test, they said, they would know the nature of that instructor and course long before that final test or paper.

To see whether those students’ speculation about their own behavior matches IDEA’s own data, I talked to the CEO of IDEA to ask what proportion of students go back into the system and change their responses and whether that was a question that faculty at other institutions had asked. He told me that he had heard that concern raised repeatedly since they introduced the online format. As a result, they have been watching that data point closely. Across all of the institutions that have used the online system over the last several years, only 0.6% of all students actually go back into the system and edit their responses. He did not know what proportion of that small minority altered their responses in a substantially negative direction.

Since the first of my three training sessions starts in about an hour, I’m going to stop now. But so far, it appears that moving to IDEA online has been a pretty positive thing for students and our data. Now I hope we can make the most of it for all of our instructors. So I better get to work prepping for this week!

Make it a good day,

Mark

“Not so fast!” said the data . . .

I’ve been planning to write about retaining men for several weeks. I had it all planned out. I’d chart the number of times in the past five years that male retention rates have lagged behind female retention rates, suggest that this might be an issue for us to address, clap my hands together, and publish the post. Then I looked closer at the numbers behind those pesky percentages and thought, “Now this will make for an interesting conversation.”

But first, let’s get the simple stuff out of the way. Here are the differences in retention rates for men and women over the last five years.

Cohort Year | Men | Women
2016 | 83.2% | 89.1%
2015 | 85.6% | 91.3%
2014 | 85.0% | 86.8%
2013 | 83.2% | 82.7%
2012 | 78.6% | 90.1%

It looks like a gap has emerged in the last four years, right? Just in case you’re wondering (especially if you looked more carefully at all five years listed in the table), “emerged” isn’t really the most accurate word choice. It looks like the 2013 cohort was more of an anomaly than anything else, since the 2012 cohort experienced the starkest gap in male vs. female retention of any in the past five years. And if we look back over the three years prior to the start of this table, the gap appears within the 2011, 2010, and 2009 cohorts as well.

But in looking more closely at the number of men and women who enrolled at Augustana in each of those classes, an interesting pattern appears that adds at least one layer of complexity to this conversation. Here are the numbers of enrolled and retained men and women in each of the last five years.

Cohort Year | Men Enrolled | Men Retained | Women Enrolled | Women Retained
2016 | 304 | 253 | 393 | 350
2015 | 285 | 244 | 392 | 358
2014 | 294 | 250 | 432 | 375
2013 | 291 | 242 | 336 | 278
2012 | 295 | 232 | 362 | 326

Do you see what I see?  Look at the largest and smallest numbers of men enrolled and the largest and smallest numbers of men retained. In both cases, we are talking about a difference of about 20 male students (for enrolled men: 304 in 2016 for a high and 285 in 2015 for a low; for retained men, 253 in 2016 for a high and 232 in 2012 for a low). No matter the total enrollment in a given first-year class, these numbers seem pretty consistent. By contrast, look at the largest and smallest numbers of women enrolled and retained. The differences between the high and the low of either enrolled or retained women are much greater – by almost a factor of five.
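If you want to check the arithmetic yourself, here is a short sketch that recomputes the rates and the ranges directly from the counts in the table above (the numbers are taken from this post).

```python
# Retention rates and enrollment ranges recomputed from the counts above.
counts = {
    # cohort year: (men enrolled, men retained, women enrolled, women retained)
    2016: (304, 253, 393, 350),
    2015: (285, 244, 392, 358),
    2014: (294, 250, 432, 375),
    2013: (291, 242, 336, 278),
    2012: (295, 232, 362, 326),
}

for year, (me, mr, we, wr) in sorted(counts.items(), reverse=True):
    print(f"{year}: men {mr / me:.1%}  women {wr / we:.1%}")

men_enrolled = [v[0] for v in counts.values()]
women_enrolled = [v[2] for v in counts.values()]
print("Range of men enrolled:  ", max(men_enrolled) - min(men_enrolled))      # 19
print("Range of women enrolled:", max(women_enrolled) - min(women_enrolled))  # 96
```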

So what does it mean when we put these retention rate gaps and the actual numbers of men and women enrolled/retained into the same conversation? For me, this exercise is an almost perfect example of how quantitative data that is supposed to reveal deep and incontrovertible truth can actually do exactly the opposite. Data just isn’t clean, ever.

Situating these data within the larger conversation about male and female rates of educational attainment, our own findings begin to make some sense. Nationally, the educational attainment gap between men and women starts long before college. Men (boys) finish high school at lower rates than women. Men go to college at lower rates than women. Men stay in college at lower rates than women. And men graduate from college at lower rates than women. So when the size of our first-year class goes up, it shouldn’t be all that surprising that the increase in numbers is explained by a disproportionate increase in women.

Finally, we have long known (and should also regularly remind ourselves) that retention rates are a proxy for something more important: student success. And student success is an outcome of student engagement in the parts of the college experience that we know help students grow and learn. On this score, we have plenty of evidence to suggest that we ought to focus more of our effort on male students. I wrote about one such example last fall when we examined some differences between men and women in their approaches toward social responsibility and volunteering rates. A few years back, I wrote about another troubling finding involving a sense of belonging on campus among Black and Hispanic men.

I hope we can dig deeper into this question over the next several weeks.  I’ll do some more digging into our own student data and share what I find. Maybe you’ve got some suggestions about where I might look?

Make it a good day,

Mark