Revisiting the Value of Early Feedback

It is becoming increasingly apparent in the world of quantitative research that producing a single statistically significant finding shouldn’t carry too much weight (whether or not the value of a single statistically significant finding should have ever been allowed to rise to such a level of deference in the first place is a thoroughly valid discussion for a more technical blog than mine here or here).  In recent years, scholars in medicine, psychology, and economics (among others) have increasingly failed in their attempts to reproduce the statistically significant findings of an earlier study, creating what has been labeled “the replication crisis” across a host of disciplines and splattering egg on the faces of many well-known scholars.

So in the interest of making sure that our own studies of Augustana student data don’t fall prey to such an embarrassing fate (although I love a vaudevillian cracked egg on a buffoon’s head as much as the next guy), I thought it would be worth digging into the archives to rerun a prior Delicious Ambiguity analysis and see if the findings can be replicated when applied to a different dataset.

In February 2014, I posted some analysis under the provocative heading, “What if early feedback made your students work harder?”  Looking back, I’m a little embarrassed by the causal language that I used in the headline (my apologies to the “correlation ≠ causation” gods, but a humble blogger needs some clickbait!).  We had introduced a new item into our first-year survey that asked students to indicate their level of agreement (or disagreement) with the statement, “I had access to my grades and other feedback early enough in the term to adjust my study habits or seek help as necessary.”  The response set included the usual suspects: five options ranging from strongly disagree to strongly agree.

While we found this item to significantly predict (in a statistical sense) several important aspects of positive student interactions with faculty, the primary focus of the February 2014 post turned to another potentially important finding. Even after accounting for several important pre-college demographic traits (race, sex, economic background, and pre-college academic performance) and dispositions (academic habits, academic confidence, and persistence and grit), the degree to which students agreed that they had access to grades and other feedback early in the term significantly predicted students’ responses to this item: “How often did you work harder than you have in the past in order to meet your instructor’s expectations?”  In essence, it appeared that students who felt that they got more substantive feedback early in the term also tended to work harder to meet their instructors’ expectations more often.

Replication is risky business. Although the technical details that need to be reconstructed can make for a dizzying free-fall into the minutiae, committing to reproduce a prior study and report the results publicly sort of feels like playing Russian Roulette with my integrity.  Nonetheless, into the breach rides . . . me and my trusty data-wrangling steed.

Although it would have been nice if none of the elements of the analysis had changed, that turned out not to be the case – albeit for good reason.  We tend to review the usefulness of various survey items every couple of years just to make sure that we aren’t wasting everyone’s time by asking questions that really don’t tell us anything.  This turned out to be a possibility with the original wording of the item we were predicting (what stats nerds would call the dependent variable). When we put the item, “How often did you work harder than you have in the past in order to meet your instructor’s expectations?” under the microscope, we saw what appeared to be some pockets of noise (stats nerd parlance for unexplainable chaos) across the array of responses. Upon further evaluation, we decided that maybe the wording of the question was a little soft.  After all, what college freshman would plausibly say “never” or “rarely” in response?  I think it’s safe to assume that most students would expect college to make them work harder than they had in the past (i.e., in high school) to meet their college instructors’ expectations. If we were a college where students regularly found the curricular work easier than what they had experienced in high school . . . we’d have much bigger problems.

Since the purpose of this item was to act as a reasonable proxy for an intrinsically driven effort to learn, in 2016 we altered the wording of this item to, “How often did you push yourself to work harder on an assignment even though the extra effort wouldn’t necessarily improve your grade?” and added it to the end-of-the-first-year survey.  Although this wording has proven to increase the validity of the item (subsequent analyses suggest that we’ve reduced some of the previous noise in the data), it’s important to note at the outset that this change in wording and relocation of the item to the end of the year alters the degree to which we can precisely reproduce our previous study.  On the other hand, if the degree to which students get early feedback (an item that is asked at the end of the fall term) significantly predicts the degree to which students push themselves to work harder on their homework regardless of their grade (now asked at the end of the spring term) in the current replication study, it strikes me that this finding might be even more important than the one from the 2014 study.

Thankfully, all of the other variables in the 2016-17 data remained the same as the 2014-15 first-year data. So . . . what did we find?

I’ve provided the vital statistics in the table below.  In a word – bingo!  Even after taking into account sex, race/ethnicity, socioeconomic status (i.e., Pell grant status), pre-college academic performance (i.e., ACT score), academic habits, academic confidence, and persistence and grit, the degree to which students receive early feedback appears to significantly predict the frequency of pushing oneself to work harder on an assignment regardless of whether or not the extra effort might improve one’s grade.

Variable | Standardized coefficient | Standard error | P-value
Sex (female = 1) | 0.022 | 0.143 | 0.738
Race/ethnicity (white = 1) | 0.001 | 0.169 | 0.089
Socioeconomic status (Pell = 1) | -0.088 | 0.149 | 0.161
Pre-college academic performance | -0.048 | 0.010 | 0.455
Academic habits scale | 0.317 *** | 0.149 | 0.000
Academic confidence scale | -0.065 | 0.167 | 0.374
Persistence and grit scale | 0.215 ** | 0.165 | 0.010
Received early feedback | 0.182 ** | 0.056 | 0.005
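
If you’re curious what sits behind a table like this, here’s a minimal sketch of the kind of model it implies, written in Python with pandas and statsmodels. The file and column names (early_feedback, work_harder, and so on) are hypothetical stand-ins rather than our actual variable names, and standardizing everything is just one common way to get comparable coefficients.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names standing in for the actual survey variables.
df = pd.read_csv("first_year_survey.csv")

outcome = "work_harder"          # end-of-year item: pushing oneself regardless of grade
predictors = [
    "female", "white", "pell",                          # demographic dummies (0/1)
    "act_score",                                        # pre-college academic performance
    "academic_habits", "academic_confidence", "grit",   # disposition scales
    "early_feedback",                                   # fall-term early feedback item
]

# Standardize everything so the coefficients are comparable across predictors.
z = df[[outcome] + predictors].apply(lambda s: (s - s.mean()) / s.std())

model = sm.OLS(z[outcome], sm.add_constant(z[predictors]), missing="drop").fit()
print(model.summary())           # coefficients, standard errors, p-values
```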

It is particularly intriguing to me that the statistically significant effect of receiving early feedback in the fall term appears when the outcome item is asked at the end of the spring term – a full six months later. Furthermore, it seems important that receiving early feedback produces a unique effect even in the presence of measures of academic habits (e.g., establishing a plan before starting a paper, starting homework early, etc.) and persistence and grit (e.g., continuing toward a goal despite experiencing disappointment, sticking with a plan to reach a goal over a longer period of time, etc.), both of which produce unique effects of their own.

The implications of these findings seem pretty important. In essence, no matter the student’s pre-college academic performance, the degree of positive academic habits, or the depth of persistence and grit when arriving at Augustana, receiving early feedback in the fall term appears to improve a student’s likelihood of working harder on their schoolwork no matter how that effort might impact their grade.

Whew!  I guess my integrity survives for another day.  More importantly (MUCH more importantly), it seems even clearer now, after replicating the 2014-15 finding with 2016-17 data, that creating ways to provide early feedback to students so that they can recalibrate their study habits as necessary appears to be a critical element of effective course design.

Make it a good day,

Mark

Data, Analysis, ACTION (now the camera’s bright lights shine on you!)

A couple of weeks ago, the Assessment for Improvement Committee (AIC) and Institutional Research and Assessment (IR&A) hosted the third of three Friday Conversations focused on improving our students’ cognitive sophistication. Unless you’ve been living under a rock (or a pile of semester transition documents!), you know by now that one of the primary functions of AIC and IR&A is to foster an organizational culture of perpetual improvement. To that end, we run an ongoing cycle of data collection, analysis, and communication about the relationships between student learning and the student experience to shine a light on the ways in which we can improve what we do as a college.

The cycle that culminated this year (the entire process takes 5-6 years) focused on the category of learning outcomes we have called “cognitive sophistication.” In particular, we explored data gathered from the cohort of students who entered Augustana in the fall of 2013 and graduated in the spring of 2017 to examine the development of our students’ inclination to, and interest in, thinking about complex or complicated issues or ideas. Just in case you need to catch yourself up, have a quick look at the three previous posts about this process:

  1. Does our Students’ Interest in Complex Thinking Change over Four Years?
  2. What Experiences Improve our Students’ Inclination Toward Complex Thinking?
  3. Doing Something with What We Now Know

In the fall term, we presented what we had found about the nature of our students’ growth and collected your suggestions about student experiences and characteristics that might influence this growth. In the winter term, we presented the results of testing your suggestions to identify the student experiences that appear to be statistically significant predictors (i.e., particularly influential experiences) of our students’ growth. By contrast, during the spring term Friday Conversation, AIC and IR&A change things up a bit and turn the session over to whoever shows up. Because if we – meaning the Augustana community – are going to convert our findings into demonstrable improvements, then we – meaning AIC and IR&A – need to hand these findings over to you and let you shape the way that we translate evidence into improvement.

If you clicked on the third post linked above, you didn’t find the results of the third Friday Conversation, but rather a plug and a plea for attendees. Fortunately, a healthy number of faculty and staff showed up ready to put their brains to work. Folks broke into three groups and narrowed a range of ideas into one or two practical ways that the college could put our findings to use. So without further ado, here are the focal points of the conversation from the last Friday Conversation.

Learning in Context

The first set of findings from our data suggested that when students engage in hands-on or experiential learning experiences, their inclination toward complex thinking seems to increase. This may be because experiencing learning in real-world or hands-on settings inevitably adds a context that often complicates what might have seemed simpler when discussed in the sanitary safety of a classroom. Moreover, research on experiential learning reveals that, as students get accustomed to learning or applying prior learning in these real-world settings, they find this learning more interesting and sometimes even invigorating.

Even though Augustana offers all sorts of hands-on learning experiences (e.g., internships, research with faculty, community involvement, etc.), it seems that the distribution of these opportunities across majors is uneven. As a result, students in some programs have a much higher chance of gaining access to these kinds of experiences than other students. The faculty and staff focused on this topic considered policy or practice ideas that could bring more of these kinds of opportunities to programs where they have not traditionally thrived. At the same time, the faculty and staff who joined this part of the conversation emphasized the need to offer professional development in order to help faculty in these programs imagine or craft an expanded range of hands-on learning opportunities, especially in disciplines where faculty research tends to be a solo endeavor or the nature of that research ranges far beyond an undergraduate’s scope of understanding.

Integrative Advising

This discussion focused on the “integrative” part of integrative advising. Our findings suggested that the more students engage in the integrative aspects of advising conversations (i.e. when faculty or staff prod students to weave together the variety of things they’ve done in college – AKA that long list at the bottom of the email signature – into a coherent narrative), the more they tend to develop an inclination toward complex thinking. This may be because asking students to turn their own raw data (after all, a list of disparate activities is very much like a set of raw data) into a story requires them to engage in complex thinking about uncertainty from two directions: 1) what themes are already present throughout my various activities that could form the basis of a compelling narrative and 2) given where I want to end up after college, how should I alter my list of activities to better prepare for success in that setting?

Participants in this discussion homed in on three ideas that are either already in development or could be introduced. First, they talked about the existing FYI proposal that includes a portfolio. This portfolio might be an especially good way to get first-year students to map out their college experience with the end (i.e. who they want to be when they receive their diploma) explicitly in mind. Second, the participants talked about the need for a way to continue this way of thinking beyond the first-year portfolio and landed on a common assignment within the Reasoned Examination of Faith course (formerly Christian Traditions) that would focus on vocation-seeking and purpose. Third, they identified a continuing need for faculty development that would help individuals apply holistic/integrative advising practices no matter the advising context.

Interdisciplinary Discussions

The third group of faculty and staff tackled the challenge of increasing student participation in interdisciplinary discussions. It shouldn’t be much of a surprise by now that the experiences that we found to predict greater gains in cognitive sophistication were those that required students to apply one set of perspectives or ideas within a different, and often more tangible, context or framework. Augustana already offers several avenues for these kinds of conversations (e.g., Salon, Symposium Day, etc.), and there is a certain subset of students who continually participate with enthusiasm. But increasing student participation in these events means focusing on the subset of students who don’t jump at these opportunities. One possibility included finding ways for students to attend conferences in the region when they aren’t presenting research. Another possibility included fostering more interdisciplinary student groups. A third intriguing idea involved the conversations about a Creativity Center on campus and the idea that this initiative might be an ideal vehicle to bring together students from disciplines that might not normally intersect.

Now comes the hardest part of this process. There isn’t a lot of reason to collect student learning data and identify the experiences that shape that learning if we don’t do anything with what we find out.  AIC and IR&A will continue to encourage the campus to plug these findings into policy, program, or curricular design. But we need you to take these findings and discussion points and champion them within your own work.

When you (notice the “when” rather than “if”?) have implemented something cool and creative, can you send me an email and tell me about it?  I’ll be sure to share it with the rest of the college and celebrate your work!

Make it a good day,

Mark

An educational idea that evidence doesn’t support

Good morning,

Over the last week or so, the IR office has been prepping the various large-scale surveys that we send out annually to first-year and senior students. After a couple of years of administering the same survey instruments, it’s tempting to just “plug and play” without thinking much about whether the questions we’ve been asking are actually supported by the evidence we have gathered previously or are even still relevant at all.

Although there are good reasons to maintain consistency in survey questions over time, it is also true that we ought to change survey questions if they no longer match what we are trying to do or what we know to be true. Because we are human, we can get caught rationalizing something that we think ought to be so at exactly the time when we ought to do something else. It isn’t uncommon for us to believe that something will always be so because it either seemed so at one time (and maybe even was so at one time) or because, having appeared to be so in one instance, it seemed like it ought to be so in every other situation or context.

Last week, I read an article in the Atlantic about one such educational “best practice” that subsequent research seems to have debunked. It’s not a very long article, but what it describes might be important for some as many of us are designing and redesigning classes for the new semester calendar.

The Myth of ‘Learning Styles’

A popular theory that some people learn better visually or aurally keeps getting debunked.

Hmmmmm  . . . . .

Make it a good day,

Mark

What good are those Starfish flags anyway?

Now that we’ve been using the Starfish tool for a couple of years to foster a network of early alerts and real-time guidance for our students, I suppose it makes sense to dig into this data and see if there are any nifty nuggets of knowledge worth knowing. Kristin Douglas (the veritable Poseidon of our local Starfish armada) and I have started combing through this data to look for useful insights. Although there is a lot more combing to be done (no balding jokes, please), I thought I’d share just a few things that seem like they might matter.
Starfish is an online tool that allows us to provide something close to real-time feedback (positive, negative, or informational) to students. In addition, this same information goes to faculty and staff who work closely with that student in an effort to provide early feedback that influences future behavior. Positive feedback should beget more of the same behavior. Negative feedback hopefully spurs the student to do something differently.

In general, there are two ways to raise a Starfish flag for a student. The first is pretty simple: you see something worth noting to a student, you raise a flag. These flags can come from anyone who works with students and has access to Starfish. The second is through one of two surveys that are sent to faculty during the academic term. This data is particularly interesting because it is tied to performance in a specific class and, therefore, can be connected to the final grade the student received in that class. The data I’m going to share today comes from these two surveys.

We send a Starfish survey to faculty twice per term.  The first goes out in week 3 and asks faculty to raise flags on any student who has inspired one (or more) of four different concerns:

  • Not engaged in class
  • Unprepared for class
  • Missing/late assignments
  • Attendance concern

The second Starfish survey goes out in week 6 and asks faculty to raise flags that address two potential concerns:

  • Performing at a D level
  • Performing at an F level

We now have a dataset of almost six thousand flags from winter, 2015/16 through winter, 2017/18. Do any of these flags appear to suggest a greater likelihood of success or failure? Given that we are starting with the end in mind, let’s first look at the flags that come from the week 6 academic concerns survey.

There are 1,947 flags raised for performing at a D level and 940 flags raised for performing at an F level. What proportion of those students (represented by a single flag each) ultimately earned a passing grade in the class in which a flag was raised?

The proportion that finished with a C or higher final grade:

  • Performing at a D level (1,059 out of 1,947)   –   54%
  • Performing at an F level (232 out of 940)    –    25%

At first glance, these findings aren’t much of a surprise. Performing at an F level is pretty hard to recover from with only three weeks left in a term. At the same time, over half of the students receiving the “D” flag finished that course with a C grade or higher. This information seems useful for those advising conversations where you need to have a frank discussion with a student about what it will take to salvage a course or whether to drop it late in the term.

The second set of flags comes from the third week of the term and represent behaviors instead of performance. Are any of these raised flags – not engaged in class (278), unprepared for class (747), missing/late assignments (1126), and attendance concern (904) – more or less indicative of final performance?

The proportion that finished with a C or higher final grade:
  • Not engaged in class (202/278)       –        73%
  • Unprepared for class (454/747)        –        61%
  • Missing/late assignments (571/1126)   –    51%
  • Attendance concern (387/904)        –        43%

It appears that these four flags vary considerably in their correlation with a final grade. Attendance concern flags appear to be the most indicative of future trouble, while appearing unengaged in class seems relatively salvageable.

Without knowing exactly what happened after these flags were raised, it’s hard to know exactly what (if anything) might have spurred a change in the behavior of those students who earned a final grade of C or higher. However, at the very least these findings add support to the old adage about just showing up.
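
For the curious, the percentages above are just grouped proportions. Here’s a rough sketch of the calculation in Python; the file layout and column names are made up for illustration and don’t reflect the actual Starfish export.

```python
import pandas as pd

# Hypothetical layout: one row per raised flag, with the flag type and the
# final grade the student earned in the course where the flag was raised.
flags = pd.read_csv("starfish_flags.csv")   # columns: flag_type, final_grade

# "C or higher" cutoff -- adjust depending on how minus grades are treated.
passing = ["A", "A-", "B+", "B", "B-", "C+", "C"]
flags["passed"] = flags["final_grade"].isin(passing)

# Share of flagged students finishing with a C or higher, by flag type.
pass_rates = flags.groupby("flag_type")["passed"].agg(["mean", "count"])
print(pass_rates.sort_values("mean"))
```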

What does this data suggest to you?

Make it a good day,

Mark

You need a laugh today? Let’s go!

Even we emotionally stunted numbers nerds get bogged down by rainy April days. So I thought I’d share a quick video lesson on predictive analytics that I hope will make you smile (while educating in the most high-minded fashion, of course!).

Predict This!

Of course, if you’re in the kind of curmudgeonly mood that is way beyond laughter, try out this Mountain Goats song and belt out the chorus at the top of your lungs.

Make it a good day,

Mark

Beware of the Average!

It’s been a crazy couple of weeks, so I’m just going to put up a nifty little picture.  But since I generally try to write about 1000 words, this pic ought to do the trick . . .

In case you can’t make out the sign on the river bank, it says that the average depth of the water is 3 ft!

[Image: Beware the Flaw of Averages]

Sometimes an average is a useful number, but we get ourselves in some deep water if we assume that there is no variation across the range of data points from which that average emerged. Frequently, there is a lot of variation. And if that variation clusters according to another set of characteristics, then we can’t spend much time celebrating anything no matter how good that average score might seem.
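
To put some made-up numbers on the river crossing: the mean can look perfectly wade-able while a good chunk of the distribution is anything but. A quick sketch in Python (the depth readings are invented for illustration):

```python
import numpy as np

# Hypothetical depth readings (in feet) across the river -- not real data.
depths = np.array([1, 1, 1, 2, 2, 3, 3, 5, 6, 6])

print(depths.mean())                         # 3.0 -- the number on the sign
print(depths.max())                          # 6   -- the part that gets you
print(np.percentile(depths, [25, 50, 75]))   # the spread the average hides
```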

Make it a good day,

Mark

Improving an Inclination toward Complex Thinking – Part III (AKA doing something with what we now know)

So far this year, the Assessment for Improvement Committee (AIC) and the Office of Institutional Research and Assessment (IRA) have hosted two Friday Conversations to explore findings from our 4-year study of intellectual sophistication growth among Augustana students. The first conversation focused on how our students changed from Welcome Week (just before freshmen start their first fall term) to graduation and how different types of students (depending on traits like race, sex, or pre-college academic preparedness), although they may have started in different places, seem to have grown about the same amount over four years (I summarized that presentation in a blog post here). The second conversation examined the different student experiences that appear to influence that change, either positively or negatively (you can read a summary of that presentation in a blog post here). Clearly, our findings suggest that the degree to which students take ideas they’ve learned in one discipline and apply them or vet them in a different disciplinary or real-world setting demonstrably increases our students’ inclination toward complex thinking.

Although the findings we’ve discussed so far are interesting in their own right, they don’t do anything by themselves to help us improve student learning. In fact, without collectively committing to do something with our results, we end up just like most organizations – chock-full of data but unable to turn those insights into actions that actually make them better. If we’re being honest, the fact that we know so much about how our students grow and the experiences that shape that growth puts us in the unenviable position of being almost morally obligated to do something with what we know – no matter how daunting that might be.

I know all of that sounds a little heavy-handed (ok – more than a little heavy-handed), but in the 8 years I’ve been at Augustana, the times when we’ve been at our absolute best have been when we’ve let down our defenses, humbly looked in the mirror, and chosen to believe in the best of each other. Then we’ve all put our shoulders to the plow to make the education we provide just a little bit better than it was before.

And that is the focus of the third, and most important, AIC/IRA Friday Conversation at the end of this week. After we briefly review what we have learned from our data, we will organize into smaller groups to come up with 2-4 viable ways in which we can turn these findings into action. This might take the form of professional development sessions, policy for course design or pedagogical nuance, or co-curricular emphases to apply our findings to impact a larger proportion of students.

So please come to the AIC/IRA Friday Conversation this Friday, March 23rd. We will be in the Wilson Center. Food and drinks are available at 3:30 and the conversation will start at 4:00.

We are really good at getting better. I’ve seen us do it over and over again. I, for one, can’t wait to see what we come up with!

Make it a good day,

Mark

What do students do about the textbooks and additional materials we assign?

At first, that might seem like an incredibly dumb question.  If you’re in a salty mood, you might snarl, “Buy them and learn or fail the damn course.”  For most of us, I suspect the thought of not buying the additional materials required (or even recommended) for a class might seem utterly absurd. When I was an undergraduate, I remember being warned not to buy used books because they would likely have someone else’s notes in the margins, leaving no room for me to write my own (ok, maybe not the most convincing argument). Nonetheless, I definitely remember feeling like a slacker if I didn’t show up to the first day of class with a shiny new version of each required text.

Fast forward 30-odd years and things couldn’t be more different. The cost of textbooks has risen even faster than the cost of college tuition (have a look at this graphic from the Bureau of Labor Statistics), even as the cost of recreational books has gone down.

More and more, it appears that students are renting textbooks, borrowing from friends, or just foregoing some books altogether. The Chronicle of Higher Ed highlighted a study in 2011 suggesting that 7 of 10 students have skipped buying a textbook because of cost. More recent online columns and blogs seem to perpetuate the notion, if not brag outright, that a student can succeed in college without buying books. In January, the Atlantic published a longer piece examining the reality that, despite the surge in online and other edtech resources, the cost of textbooks and/or their online equivalent remains exorbitantly high. And in the context of the financial pressures that many students experience just paying tuition, room, and board, I guess it shouldn’t surprise us much when already financially-strapped students take advantage of any alternative that might save them some money.

A few weeks ago, about forty faculty and staff gathered in the library to kickstart a conversation about Augustana students and textbooks. After discussing the financial realities of textbook costs, the conversation turned toward the ways in which we choose the textbooks and additional materials that we assign. Although this is something that we might take for granted at times (especially if one might be scrambling to put a course together), it’s an issue that more and more folks are trying to address.  I’m sure there are plenty of examples, but three impressive efforts include the Open Textbook Library, College Open Textbooks, and the Open Educational Resources Commons. Most recently, 40 colleges have made the move to simply go without textbooks and only use freely available learning resources (see here and here).

At the end of the meeting, it seemed clear that we really need to know more about our students’ engagement with the textbooks and additional materials assigned in our courses. One person posed an exceedingly logical suggestion: could we add a couple of questions to the end of every IDEA course feedback survey at the end of spring term asking about:

  • The amount that students spent on textbooks for a given class
  • How often they used the textbooks and additional materials they bought for that class
  • How effective those materials were in helping the student learn in that class

It seems like this would be particularly useful information. But before acting on any of these ideas, I think it’s important to know what you all think about gathering this information, what questions you might have about what is done with this information, and any other concerns you might have about this project.

So . . . . what do you think?  Should we ask these questions?  What should we do with the data?  If we ask these questions, how do we need to be careful and transparent so that whatever we find 1) gives us a deeper understanding of our students’ engagement with textbooks and additional materials, and 2) genuinely spurs our perpetual effort to improve in a way that fosters inclusiveness and understanding?

Please – send me your thoughts.  If you know my email, you can send them there. If you’d rather post in the comments section below, please post away.

Make it a good day,

Mark


Warming Perceptions across the Political Divide

Welcome back to campus for the headlining event – Spring Term! (about as likely a band name as anything else these days, right?).

At the very end of winter term, Inside Higher Ed published a short piece highlighting a study that suggested the first year of college might broaden students’ political views. The story reviewed findings (described in more depth here) from an ongoing national study of college students’ interfaith understanding development that goes by the acronym IDEALS (AKA, the Interfaith Diversity Experiences & Attitudes Longitudinal Survey). In essence, both politically conservative and politically liberal students (self-identified at the beginning of their first year in college) developed more positive perceptions of each other by the beginning of their second year. Since Augustana is one of the participating institutions in this study, I thought it might be interesting to see if our local data matches up with the national findings.

The IDEALS research project is designed to track change over four years, asking students to complete a set of survey questions at the beginning of the first year (fall, 2015), at the beginning of the second year (fall, 2016), and at the end of the fourth year (spring, 2019). Many of the survey questions ask individuals about their perceptions of people of different religions, races, ethnicities, and beliefs. For the purposes of this post, I’ll focus on the responses to the four pairs of statements listed below and zero in on the responses from conservative students about liberal students and the responses from liberal students about conservative students.

  • In general, I have a positive attitude toward people who are politically conservative
  • In general, I have a positive attitude toward people who are politically liberal
  • In general, individuals who are politically conservative are ethical people
  • In general, individuals who are politically liberal are ethical people
  • In general, people who are politically conservative make a positive contribution to society
  • In general, people who are politically liberal make a positive contribution to society
  • I have things in common with people who are politically conservative
  • I have things in common with people who are politically liberal

For each item, the five response options ranged from “disagree strongly” to “agree strongly.”

First, let’s look at the responses from politically conservative students. The table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Conservative Students’ Perceptions of Politically Liberal Students

Item | Fall, 2015 | Fall, 2016
Positive Attitudes | 3.71 | 3.46
Ethical People | 3.21 | 3.50
Positive Contributors | 3.64 | 3.92
Positive Commonalities | 3.23 | 3.29

Overall, it appears that conservative students’ perceptions of liberal students improved during the first year. Scores on two items (ethical people and positive contributors) increased substantially. Perceptions of commonalities remained essentially the same, and a self-assessment of positive attitudes toward liberal students declined. Normally, the drop in positive attitude would seem like a cause for concern, but conservative students’ positive attitudes toward other conservatives dropped as well, from 4.29 to 3.92. So maybe it’s just that the first year of college makes conservatives grouchy about everyone.

Second, let’s look at the responses from politically liberal students when asked to assess their perceptions of politically conservative students. Again, the table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Liberal Students’ Perceptions of Politically Conservative Students

Item | Fall, 2015 | Fall, 2016
Positive Attitudes | 3.61 | 3.65
Ethical People | 3.58 | 3.78
Positive Contributors | 3.33 | 3.76
Positive Commonalities | 3.31 | 3.69

It appears that liberal students’ views of conservative students improved as well, maybe even more so. While positive attitudes about conservative students didn’t change, perceptions of conservatives as ethical people, positive contributors to society, and people with whom liberals might have things in common increased significantly.
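
For anyone wondering how numbers like these get produced, each cell is simply the mean response for a group of students in a given survey wave. A hedged sketch in Python, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical long format: one row per student per wave, with the student's
# self-identified political leaning and 1-5 responses to each item.
ideals = pd.read_csv("ideals_responses.csv")
# columns: student_id, wave ("fall_2015" / "fall_2016"), political_id,
#          pos_attitude, ethical, contributes, commonalities

items = ["pos_attitude", "ethical", "contributes", "commonalities"]

# Mean response on each item, by political identity and survey wave.
print(ideals.groupby(["political_id", "wave"])[items].mean().round(2))
```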

Although the repeated gripe from conservative pundits is that colleges are a bastion of liberalism indoctrinating young minds, research (here and here) seems to contest this assertion. While the findings above don’t directly address students’ changing political beliefs, they do suggest that both politically conservative and politically liberal students’ perceptions of the other shift in a positive direction (i.e., they perceive each other more positively after the first year). This would seem to bode well for our students, our campus community, and for the communities in which they will reside after graduation. Because no matter how any of these students’ political views might change over four years in college, more positive perceptions of each other set the stage for better interactions across differing belief systems. And that is good for all of us.

If we situate these findings in the context of a four-year period of development, I think we ought to be encouraged by these findings, no matter if we lean to the left or to the right. Maybe, even in the midst of all the Sturm und Drang we’ve experienced in the past few years, we are slowly developing students who are more equipped to interact successfully despite political differences.

Make it a good day,

Mark


What experiences improve our students’ inclination toward complex thinking?

I’ve always been impressed by the degree to which the members of Augustana’s Board of Trustees want to understand the sometimes dizzying complexities that come with trying to nudge, guide, and redirect the motivations and behaviors of young people on the cusp of adulthood. Each board member that I talk to seems to genuinely enjoy thinking about these kinds of complicated, even convoluted, challenges and implications that they might hold for the college and our students.

This eagerness to wrestle with ambiguous, intractable problems exemplifies the intersection of two key Augustana learning outcomes that we aspire to develop in all of our students. We want our graduates to have developed incisive critical thinking skills and we want to have cultivated in them a temperament that enjoys applying those analytical skills to solve elusive problems.

Last spring Augustana completed a four-year study of one aspect of intellectual sophistication. We chose to measure the nature of our students’ growth by using a survey instrument called the Need for Cognition Scale, an instrument that assesses one’s inclination to engage in thinking about complex problems or ideas. Earlier in the fall, I presented our findings regarding our students’ growth between their initial matriculation in the fall of 2013 and their graduation in the spring of 2017 (summarized in a subsequent blog post). We found that:

  1. Our students developed a stronger inclination toward thinking about complex problems. The extent of our students’ growth mirrored the growth we saw in an earlier cohort of Augustana students who participated in the Wabash National Study between 2008 and 2012.
  2. Different types of students (defined by pre-college characteristics) grew similar amounts, although not all students started and finished with similar scores. Specifically, students with higher HS GPA or ACT/SAT scores started and finished with higher Need for Cognition scores than students with lower HS GPA or ACT/SAT scores.

But, as with any average change-over-time score, there are lots of individual cases scattered above and below that average. In many ways, that is often where the most useful information is hidden. Because if the individuals who produce change-over-time scores above, or below, the average are similar to each other in some other ways, teasing out the nature of that similarity can help us figure out what we could do more of (or less of) to help all students grow.

At the end of our first presentation, we asked for as many hypotheses as folks could generate involving experiences that they thought might help or hamper gains on the Need for Cognition Scale. Then we went to work testing every hypothesis we could possibly test. Taylor Ashby, a student working in the IR office, did an incredible job taking on this monstrous task. After several months of pulling datasets together, constructing new variables to approximate many of the hypotheses we were given, and running all kinds of statistical analyses, we made a couple of pretty interesting discoveries that could help Augustana get even better at developing our students’ inclination or interest in thinking about complex problems or ideas.

To make sense of all of the hypotheses that folks suggested, we organized them into two categories: participation in particular structured activities (e.g., being in the choir or completing a specific major) and experiences that could occur across a range of situations (e.g., reflecting on the impact of one’s interactions across difference or talking with faculty about theories and ideas).

First, we tested all of the hypotheses about participation in particular structured activities. We found five specific activities to produce positive, statistically significant effects:

  • service learning
  • internships
  • research with faculty
  • completing multiple majors
  • volunteering when it was not required (as opposed to volunteering when obligated by membership in a specific group)

In other words, students who did one or more of these five activities tended to grow more than students who did not. This turned out to be true regardless of the student’s race/ethnicity, sex, socioeconomic status, or pre-college academic preparation. Furthermore, each of these experiences produced a unique, statistically significant effect when they were all included in the same equation. This suggests the existence of a cumulative effect: students who participated in all of these activities grew more than students who only participated in some of these activities.
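
To make the “same equation” point concrete, here is a hedged sketch of the kind of model this describes: the change in Need for Cognition regressed on 0/1 indicators for each activity plus the pre-college controls. All of the file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical variable names; nfc_change is the senior-year Need for Cognition
# score minus the score at matriculation.
df = pd.read_csv("nfc_cohort_2013_2017.csv")

model = smf.ols(
    "nfc_change ~ service_learning + internship + faculty_research"
    " + multiple_majors + voluntary_service"            # activity dummies (0/1)
    " + female + white + pell + act_score",             # pre-college controls
    data=df,
).fit()

print(model.params)      # each activity's unique effect, holding the others constant
print(model.pvalues)
```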

Second, we tested all of the hypotheses that focused on more general experiences that could occur in a variety of settings. Four experiences appeared to produce positive, statistically significant effects.

  • The frequency of discussing ideas from non-major courses with faculty members outside of class.
  • Knowledge among faculty in a student’s major of how to prepare students to achieve post-graduate plans.
  • Faculty interest in helping students grow in more than just academic areas.
  • The degree to which one-on-one interactions with faculty had a positive influence on intellectual growth and interest in ideas.

In addition, we found one effect that sort of falls in between the two categories described above. Remember that having a second major appeared to produce a positive effect on the inclination to think about complex problems or ideas? Well, within that finding, Taylor discovered that students who said that faculty in their second major emphasized applying theories or concepts to practical problems or new situations “often” or “very often” grew even more than students who simply reported a second major.

So what should we make of all these findings? And equally important, how do we incorporate these findings into the way we do what we do to ensure that we use assessment data to improve?

That will be the conversation of the spring term Friday Conversation with the Assessment for Improvement Committee.

Make it a good day,

Mark