Complicating the “over-involvement” complaint

Last week I promised that my next column would be short and sweet.  And in the context of the time crunch that inevitably wells up during week ten of the term, I am all about short and sweet.  So consider this data nugget as you bounce from commitment to commitment this week.

Many of us seem to accept the campus narrative that our students are too busy.  If we were portioning out blame for this phenomenon, I suspect that a large share of it would fall on co-curricular involvement.  This claim isn’t entirely without merit.  We have legitimate evidence from our National Survey of Student Engagement (NSSE) data that our students spend more hours per week on co-curricular activities than students at comparable institutions.

But rather than debunk this narrative, I’d like to complicate it, because I am not sure the real question is whether our students are over-involved or under-involved in co-curricular activities.  Instead, maybe the question should be whether each of our students is involved in the amount and array of experiences that best fits their developmental needs – a very different question than whether we should be managing our student body toward an “average” amount of co-curricular involvement.

In addition to NSSE, our participation in the Wabash National Study (WNS) provides insight into our first-year students’ behaviors and allows us to compare our first-year students to those at a number of comparable small liberal arts colleges.  While the WNS utilized the identical NSSE question regarding co-curricular involvement, it also asked students to report the number of student organizations in which they participated during the first year.  I wanted to know whether our high rank in co-curricular involvement would be replicated in our students’ organizational memberships.  Essentially, I wanted to know more about the nature of our students’ involvement.

Interestingly, the average number of organizations in which our first-year students participated ended up in the middle of the pack and did not mirror our high rank in amount of co-curricular involvement.  This suggests to me that our students are not bouncing around from meeting to meeting (as the “myth” might imply) without having the time to meaningfully immerse themselves in these experiences.

That is not to say that this contradicts the claim outright.  Instead, I would suggest that this finding might provide some insight into the nature of purpose – or lack of purpose – that drives our students’ co-curricular involvement.  I’ll let you chew on the implications of this possibility for our own work in between meetings, grading, teaching, and every other little thing you have to do this week.

Make it a good day – and a good end of the fall term!


The dynamics of tracking diversity

Over the past few weeks I’ve been digging into an interesting conundrum regarding the gathering and reporting of “diversity” data – the percentage of Augustana students who do not identify as white or Caucasian.  What emerges is a great example of the frustratingly tangled web we weave when older paradigms of race/ethnicity classification get tied up in the limitations of survey measurement and then run headlong into the world in which we actually live and work.  To fully play out the metaphor (Sir Walter Scott’s famous line, “Oh what a tangled web we weave, when first we practice to deceive”), if we don’t understand the complexities of this issue, I would suggest that in the end we might well be the ones who get duped.

For decades, questions about race or ethnicity on college applications reflected an “all or nothing” conception of race/ethnic identification.  The response options included the usual suspects – Black/African-American, White/Caucasian, Hispanic, Asian/Pacific-Islander, and Native American, and sometimes a final category of “other” – with respondents only allowed to select one category.  More recently, an option simply called “two or more races” was added to account for individuals who might identify with multiple race/ethnic categories, suggesting something about our level of (dis)interest in the complexities of multi-race/ethnic heritage.

In 2007, the Department of Education required colleges to adopt a two-part question in gathering race/ethnicity data.  The DOE gave colleges several years to adopt this new system, which we implemented for the incoming class of 2010.  The first question asks whether or not the respondent identifies as Hispanic/Latino.  The second question asks respondents to indicate all of the other race/ethnicity categories that might also apply.  The response choices are American Indian, Asian, Black/African-American, Native Hawaiian/Pacific-Islander, and White, with parenthetical expansions of each category to more clearly define their intended scope.

While this change added some nuance to reporting race/ethnicity, it perpetuated some of the old problems while introducing some new ones as well.  First, the new DOE regulations only addressed incoming student data; they didn’t obligate institutions to convert previous student data to the new configuration – creating a 3-4 year period in which there was no clear way to determine a “diversity” profile.  Second, the terminology used in the new questions actually invited the possibility that individuals would classify themselves differently than they would have previously.  Third, since Augustana (like virtually every other college) receives prospective student data from many different sources that do not necessarily comport with the new two-part question, the change increased the possibility of conflicting self-reported race/ethnicity data.  Similarly, the added complexity of the two-part question increased the likelihood that even the slightest variation in internal data gathering could exacerbate the extent of inconsistent responses.  Finally, over the past decade students have increasingly skipped race/ethnicity questions, as older paradigms of racial/ethnic identification have seemed increasingly less relevant to them.  This means that the effort to acquire more nuanced data could actually accelerate the increasing percentage of students who skip these questions altogether.

As a result of the new federal rules, we currently have race/ethnicity data for two groups of students (freshmen/sophomores who entered after the new rules were implemented and juniors/seniors who entered under the old rules) that reflect two different conceptions of race/ethnicity.  Although we developed a crosswalk in an attempt to create uniformity in the data, for each wrinkle we resolve, another appears.  Thus, we admittedly have more confidence in the “diversity” numbers that we reported this year (2011) than in those we reported last year (2010).  Moreover, the change in questions has set up a domino effect across many colleges: depending upon how an institution tried to deal with these changes, an individual institution could come up with vastly different “diversity” numbers, each supported by a reasonable analytic argument (see this recent article in Inside Higher Ed).
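To make the crosswalk idea concrete, here is a minimal sketch of what such a mapping might look like.  The category labels and mapping decisions below are purely illustrative assumptions on my part – they are not our actual conversion rules, and several of the judgment calls (such as where “Asian/Pacific-Islander” should land) are exactly the kind of ambiguity discussed above.

```python
# Illustrative crosswalk from the old single-category scheme to the new
# two-part categories.  Every mapping here is a hypothetical judgment call,
# not an official rule.
OLD_TO_NEW = {
    "Black/African-American": "Black/African-American",
    "White/Caucasian": "White",
    "Hispanic": "Hispanic/Latino",
    # Ambiguous: the old combined category could map to either "Asian"
    # or "Native Hawaiian/Pacific-Islander" under the new scheme.
    "Asian/Pacific-Islander": "Asian",
    "Native American": "American Indian",
    "Other": "Race unknown",
}

def convert(old_category):
    """Map an old-scheme response to a new-scheme category.

    Anything outside the crosswalk falls into "Race unknown", mirroring
    the IPEDS transition allowance for prior-year records.
    """
    return OLD_TO_NEW.get(old_category, "Race unknown")
```

Even in this toy version, the lossiness is visible: a one-way mapping like this cannot recover distinctions the old scheme never captured, which is why confidence in converted numbers is necessarily lower.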

Recognizing the magnitude of these problems, IPEDS only requires that the percentage of students we report as “race unknown” be less than 100% during the transition years (in effect allowing institutions to convert all prior student race/ethnicity data to the unknown category). And let’s not even get into the issues of actual counting.  For example, the new rule says that someone who indicates “yes” to the Hispanic/Latino question and selects “Asian” on the race question must be counted as Hispanic, but someone who indicates “no” to the Hispanic/Latino question and selects both “Asian” and “African-American” on the race question must be counted as multi-racial.  Anyone need an aspirin yet?
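The counting rule just described can be sketched as a small function.  The function name and category labels are my own illustrative choices, not an official IPEDS schema, but the precedence logic – Hispanic/Latino overrides everything, then multiple race selections collapse into a single “two or more races” bucket – follows the rule described above.

```python
def ipeds_category(hispanic, races):
    """Classify one student per the two-part counting rule sketched above.

    hispanic: answer to the first (Hispanic/Latino) question.
    races: set of categories selected on the second (race) question.
    Labels are illustrative, not an official schema.
    """
    if hispanic:
        # A "yes" to the Hispanic/Latino question overrides any race selections.
        return "Hispanic/Latino"
    if len(races) > 1:
        # Multiple selections collapse into a single reporting bucket.
        return "Two or more races"
    if len(races) == 1:
        return next(iter(races))
    return "Race unknown"

# The two examples from the rule above:
print(ipeds_category(True, {"Asian"}))   # counted as Hispanic/Latino
print(ipeds_category(False, {"Asian", "Black/African-American"}))  # counted as Two or more races
```

Notice what the precedence ordering does: a student’s “Asian” selection simply vanishes from the reported numbers if they also answered “yes” to the first question – which is precisely why reasonable institutions can produce very different “diversity” figures from similar underlying data.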

But we do ourselves substantial harm if we get hung up on a quest for precision.  In reality, the problem originates not in the numbers themselves but in the relative value we place on those numbers and the decisions we make or the money we spend as a result.  Interestingly, if you ask our current students, they will tell you that they conceive of diversity in very different ways than those of us who came of age several decades ago (or more).  Increasingly, for example, socio-economic class is becoming a powerful marker of difference, and a growing body of research has made it even more apparent that the intersection of socio-economic class and race/ethnicity produces vastly different effects across diverse student types.

I am in no way suggesting that we should no longer care about race or ethnicity.  On the contrary, I am suggesting that if our conception of “diversity” is static and naively simplistic, we are less likely to recognize the emergence of powerfully influential dimensions on which difference also exists and opportunities are also shaped.  Thus, we put ourselves at substantial risk of treating our students not as they are, but as we have categorized them.  Worse still, we risk spending precious time and energy arguing over what we perceive to be the “right” number under the assumption that those numbers were objectively derived, when it is painfully clear that they are not.

Thanks for indulging me this week.  Next week will be short and sweet – I promise.

Make it a good day,


Student relationships with faculty and administrators

One of Augustana College’s fundamental values is the importance of high quality relationships between students and faculty.  Indeed, this is one of the most compelling arguments for attending a small liberal arts college.  I would argue that this assertion has always included the quality of relationships between students and everyone who might impact our students’ educational experience – not just those traditionally considered to be faculty.  In fact, students don’t differentiate between those of us who are “faculty” and those of us who are “staff” or “administrators” in the same way that we do.  Thus, all of us should fully expect to have a significant impact on our students’ education – no matter the context of those interactions.

There are two questions on the National Survey of Student Engagement (NSSE) that ask about the quality of students’ relationships with 1) faculty members and 2) administrative personnel and offices. The response options are somewhat unusual – instead of a simple 5-item agree/disagree scale, the available responses are portrayed on a spectrum from 1 to 7, with 1 representing “unfriendly, unhelpful, sense of alienation” and 7 representing “friendly, supportive, sense of belonging.”  This spectrum may require a little more thought, but the idea is that these choices more fully reflect the concept underlying the question.

In comparison with all of the schools that use NSSE (most of which are much larger than Augustana), both our freshmen and our seniors report a significantly higher quality of relationships with faculty.  Moreover, the quality of relationships appears to improve between the freshman and senior years.

[Chart: Quality of Relationships w/ Faculty – Augustana scores vs. Overall NSSE Average]
However, something happens when our students are asked the same question about the quality of their relationships with administrative personnel and offices.  While our freshman scores are significantly higher than the overall NSSE average, our senior scores are significantly lower.

[Chart: Quality of Relationships w/ Administrative Personnel and Offices – Augustana scores vs. Overall NSSE Average]
Regardless of the possible reasons for lower ratings of relationships with administrators generally (one such reason might be that administrators tend to dispense fines more often than faculty do), it seems reasonable to strive for and expect an increase in the quality of those relationships that mirrors the change in student/faculty relationships.  Unfortunately, this does not appear to be the case.  While the reasons for these opposing trends are probably complex, I would humbly suggest that they are not beyond our control.

Usually I end my column with a simple “Make it a good day.”  This time, I’d like to end with something slightly different.

In your own way – big or small, make it a good day for a student.


Dusting under the retention furniture

A couple of weeks ago I highlighted our success in maintaining a historically high 1st-2nd year retention rate (87%) despite a substantial increase in the size of the freshman class from 2009 to 2010.  Although this is something that we should indeed celebrate, we need to be willing to look inside these numbers and explore whether our overall rate accurately reflects the behaviors of various student types.  This week I want to dig a little deeper and explore those variations.  To be precise, I’ll call it persistence when talking about the students’ decisions and retention when talking about the number that we track as a proxy measure for student experience and success in the first year.

As you might expect, our retention rate isn’t the same for all student types.  Pre-college academic ability plays a big role.  Students with an ACT score of 23 or above persisted at 89%, while those with an ACT score below 23 persisted at 81%.  Likewise, there are two demographic characteristics – race/ethnicity and gender – that historically influence retention rates across the country as well as at Augustana.  In the 2010 cohort, white students persisted at 89%, while multicultural students persisted at 81%.  In addition, female students persisted at 89%, while male students persisted at 85%.

Before talking about what these differential rates might mean, it is important to remember that pre-college ability, race/ethnicity, and gender don’t exist independently – an individual student is necessarily categorized along all three dimensions.  So the question also becomes whether or not there is a subset of categories that, when combined, produce a starkly lower likelihood of persistence to the second year.

Not surprisingly, we have such a troublesome combination at Augustana.  Of the three categories listed above, it is the combination of being male and multicultural that produces the lowest retention rate of any combination – 77%.  Interestingly, the gender gap also appears among students with higher incoming ACT scores.  Females with ACT scores of 23 or above persisted at 92%, while males with similar ACT scores persisted at 86%.  By comparison, the retention rate of students with lower ACT scores (below 23) did not vary significantly by gender.
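Computations like these amount to grouping students by combinations of attributes and calculating a retention rate per group.  A minimal sketch of that bookkeeping follows; the field names and the four sample records are made up for illustration and are not actual Augustana data.

```python
from collections import defaultdict

def retention_by_group(students, keys):
    """Compute a retention rate for each combination of the given attributes.

    students: list of dicts, each with the grouping attributes plus a
              boolean "retained" flag (field names are illustrative).
    keys: attribute names to group on, e.g. ["gender", "multicultural"].
    Returns {attribute-tuple: retained / total}.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [retained, total]
    for s in students:
        group = tuple(s[k] for k in keys)
        counts[group][1] += 1
        if s["retained"]:
            counts[group][0] += 1
    return {g: retained / total for g, (retained, total) in counts.items()}

# Hypothetical records, not actual cohort data.
cohort = [
    {"gender": "M", "multicultural": True, "retained": False},
    {"gender": "M", "multicultural": True, "retained": True},
    {"gender": "F", "multicultural": False, "retained": True},
    {"gender": "F", "multicultural": False, "retained": True},
]
rates = retention_by_group(cohort, ["gender", "multicultural"])
print(rates[("M", True)])  # 0.5
```

One caution this sketch makes visible: crossing several attributes quickly shrinks the group sizes, so rates for small intersections (like male multicultural students in a single cohort) carry much wider uncertainty than the overall 87% figure.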

Although these differences might suggest an array of programmatic interventions, solving a retention “problem” can be a bit like the old carnival game Whack-a-Mole.  A singular focus on one subset of students can become a frustratingly reactionary exercise over time. Yet, understanding the nature of these students’ challenges can be a critical first step in addressing retention issues. What common issues might be at the core of these differences across gender and race/ethnicity?  To help us take a first step in thinking about the issues that our male students face, I’d encourage you to attend Dr. Tracy Davis’ presentation at Friday Conversation this week.  I’ll talk more about the challenges facing multicultural students in a later column.

Make it a good day,