An educational idea that evidence doesn’t support

Good morning,

Over the last week or so, the IR office has been prepping the various large-scale surveys that we send out annually to first-year and senior students. After a couple of years of administering the same survey instruments, it’s tempting to just “plug and play” without thinking much about whether the questions we’ve been asking are actually supported by the evidence we have gathered previously or are even still relevant at all.

Although there are good reasons to maintain consistency in survey questions over time, it is also true that we ought to change survey questions when they no longer match what we are trying to do or what we know to be true. Because we are human, we can catch ourselves rationalizing something we think ought to be so at exactly the moment when we ought to do something else. It isn't uncommon for us to believe something is always true either because it seemed true at one time (and maybe even was true at one time) or because, having appeared true in one instance, it seemed like it ought to be true in every other situation or context.

Last week, I read an article in the Atlantic about one such educational "best practice" that subsequent research seems to have debunked. It's not a very long article, but what it describes might be important as many of us design and redesign classes for the new semester calendar.

The Myth of ‘Learning Styles’

A popular theory that some people learn better visually or aurally keeps getting debunked.

Hmmmmm  . . . . .

Make it a good day,

Mark

What good are those Starfish flags anyway?

Now that we’ve been using the Starfish tool for a couple of years to foster a network of early alerts and real-time guidance for our students, I suppose it makes sense to dig into this data and see if there are any nifty nuggets of knowledge worth knowing. Kristin Douglas (the veritable Poseidon of our local Starfish armada) and I have started combing through this data to look for useful insights. Although there is a lot more combing to be done (no balding jokes, please), I thought I’d share just a few things that seem like they might matter.
Starfish is an online tool that allows us to provide something close to real-time feedback (positive, negative, or informational) to students. In addition, this same information goes to faculty and staff who work closely with that student, in an effort to provide early feedback that influences future behavior. Positive feedback should beget more of the same behavior. Negative feedback hopefully spurs the student to do something differently.
In general, there are two ways to raise a Starfish flag for a student. The first is pretty simple: you see something worth noting to a student, you raise a flag. These flags can come from anyone who works with students and has access to Starfish. The second is through one of two surveys that are sent to faculty during the academic term. This data is particularly interesting because it is tied to performance in a specific class and, therefore, can be connected to the final grade the student received in that class. The data I'm going to share today comes from these surveys.
We send a Starfish survey to faculty twice per term. The first goes out in week 3 and asks faculty to raise flags on any student who has inspired one (or more) of four different concerns:
  • Not engaged in class
  • Unprepared for class
  • Missing/late assignments
  • Attendance concern
The second Starfish survey goes out in week 6 and asks faculty to raise flags that address two potential concerns:
  • Performing at a D level
  • Performing at an F level
We now have a dataset of almost six thousand flags from winter 2015/16 through winter 2017/18. Do any of these flags appear to suggest a greater likelihood of success or failure? Given that we are starting with the end in mind, let's first look at the flags that come from the week 6 academic concerns survey.
There are 1,947 flags raised for performing at a D level and 940 flags raised for performing at an F level. What proportion of those students (represented by a single flag each) ultimately earned a passing grade in the class in which a flag was raised?
The proportion that finished with a C or higher final grade:
  • Performing at a D level (1,059 out of 1,947) – 54%
  • Performing at an F level (232 out of 940) – 25%

At first glance, these findings aren't much of a surprise. Performing at an F level is pretty hard to recover from with only three weeks left in a term. At the same time, over half of the students receiving the "D" flag finished that course with a C grade or higher. This information seems useful for those advising conversations where you need to have a frank discussion with a student about what it will take to salvage a course, or whether to drop it late in the term.

The second set of flags comes from the third week of the term and represents behaviors instead of performance. Are any of these raised flags – not engaged in class (278), unprepared for class (747), missing/late assignments (1,126), and attendance concern (904) – more or less indicative of final performance?

The proportion that finished with a C or higher final grade:
  • Not engaged in class (202 out of 278) – 73%
  • Unprepared for class (454 out of 747) – 61%
  • Missing/late assignments (571 out of 1,126) – 51%
  • Attendance concern (387 out of 904) – 43%
It appears that these four flags vary considerably in their correlation with final grades. Attendance concern flags appear to be the most indicative of future trouble, while appearing unengaged in class seems relatively salvageable.
Without knowing exactly what happened after these flags were raised, it’s hard to know exactly what (if anything) might have spurred a change in the behavior of those students who earned a final grade of C or higher. However, at the very least these findings add support to the old adage about just showing up.
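For anyone who wants to double-check the arithmetic, here is a minimal sketch of the pass-rate calculation using the counts reported above. The dictionary structure is purely illustrative; an actual Starfish export is a student-by-flag file, not summary counts.

```python
# Pass rates by flag type, using the counts reported in this post.
# (Illustrative data structure only; not the real Starfish export.)

flag_outcomes = {
    # flag type: (flags raised, students finishing with C or higher)
    "Performing at a D level": (1947, 1059),
    "Performing at an F level": (940, 232),
    "Not engaged in class": (278, 202),
    "Unprepared for class": (747, 454),
    "Missing/late assignments": (1126, 571),
    "Attendance concern": (904, 387),
}

for flag, (raised, passed) in flag_outcomes.items():
    print(f"{flag:26s} {passed:>4}/{raised:<4} = {passed / raised:.0%}")
```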
What does this data suggest to you?
Make it a good day,
Mark

You need a laugh today? Let’s go!

Even we emotionally stunted numbers-nerds get bogged down by rainy April days. So I thought I'd share a quick video lesson on predictive analytics that I hope will make you smile (while educating in the most high-minded fashion, of course!).

Predict This!

Of course, if you’re in the kind of curmudgeonly mood that is way beyond laughter, try out this Mountain Goats song and belt out the chorus at the top of your lungs.

Make it a good day,

Mark

Beware of the Average!

It’s been a crazy couple of weeks, so I’m just going to put up a nifty little picture.  But since I generally try to write about 1000 words, this pic ought to do the trick . . .

In case you can’t make out the sign on the river bank, it says that the average depth of the water is 3 ft!

Beware the Flaw of Averages

Sometimes an average is a useful number, but we get ourselves in some deep water if we assume that there is no variation across the range of data points from which that average emerged. Frequently, there is a lot of variation. And if that variation clusters according to another set of characteristics, then we can’t spend much time celebrating anything no matter how good that average score might seem.
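If you like to see the point in numbers, here is a tiny sketch (the depth readings are made up) showing how two very different rivers can share the same 3-foot average:

```python
# Two hypothetical rivers with the same average depth. One is safe to
# wade across everywhere; the other has a 9-foot channel in the middle.
from statistics import mean

uniform_river = [3, 3, 3, 3, 3]   # every reading is 3 ft
channel_river = [1, 1, 2, 2, 9]   # mostly shallow, one deep channel

for name, depths in [("uniform", uniform_river), ("channel", channel_river)]:
    print(f"{name}: mean = {mean(depths)} ft, max = {max(depths)} ft")
```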

Make it a good day,

Mark

Improving an Inclination toward Complex Thinking – Part III (AKA doing something with what we now know)

So far this year, the Assessment for Improvement Committee (AIC) and the Office of Institutional Research and Assessment (IRA) have hosted two Friday Conversations to explore findings from our 4-year study of intellectual sophistication growth among Augustana students. The first conversation focused on how our students changed between Welcome Week (just before freshmen start their first fall term) and graduation, and how different types of students (grouped by traits like race, sex, or pre-college academic preparedness), although they may have started in different places, seem to have grown about the same amount over four years (I summarized that presentation in a blog post here). The second conversation examined the different student experiences that appear to influence that change, either positively or negatively (you can read a summary of that presentation in a blog post here). Clearly, our findings suggest that the degree to which students take ideas they've learned in one discipline and apply or vet them in a different disciplinary or real-world setting demonstrably increases our students' inclination toward complex thinking.

Although the findings we've discussed so far are interesting in their own right, they don't do anything by themselves to help us improve student learning. In fact, without collectively committing to do something with our results, we end up just like most organizations – chock-full of data but unable to turn those insights into actions that actually make them better. If we're being honest, the fact that we know so much about our students' growth and the experiences that shape that growth puts us in the unenviable position of being almost morally obligated to do something with what we know – no matter how daunting that might be.

I know all of that sounds a little heavy-handed (ok – more than a little heavy-handed), but in the 8 years I’ve been at Augustana, the times when we’ve been at our absolute best have been when we’ve let down our defenses, humbly looked in the mirror, and chosen to believe in the best of each other. Then we’ve all put our shoulders to the plow to make the education we provide just a little bit better than it was before.

And that is the focus of the third, and most important, AIC/IRA Friday Conversation at the end of this week. After we briefly review what we have learned from our data, we will organize into smaller groups to come up with 2-4 viable ways to turn these findings into action. These might take the form of professional development sessions, policies for course design or pedagogical nuance, or co-curricular emphases that apply our findings to a larger proportion of students.

So please come to the AIC/IRA Friday Conversation this Friday, March 23rd. We will be in the Wilson Center. Food and drinks are available at 3:30 and the conversation will start at 4:00.

We are really good at getting better. I’ve seen us do it over and over again. I, for one, can’t wait to see what we come up with!

Make it a good day,

Mark

What do students do about the textbooks and additional materials we assign?

At first, that might seem like an incredibly dumb question.  If you’re in a salty mood, you might snarl, “Buy them and learn or fail the damn course.”  For most of us, I suspect the thought of not buying the additional materials required (or even recommended) for a class might seem utterly absurd. When I was an undergraduate, I remember being warned not to buy used books because they would likely have someone else’s notes in the margins, leaving no room for me to write my own (ok, maybe not the most convincing argument). Nonetheless, I definitely remember feeling like a slacker if I didn’t show up to the first day of class with a shiny new version of each required text.

Fast forward 30-odd years and things couldn’t be more different. The cost of textbooks has risen even faster than the cost of college tuition (have a look at this graphic from the Bureau of Labor Statistics), even as the cost of recreational books has gone down.

More and more, it appears that students are renting textbooks, borrowing from friends, or just forgoing some books altogether. The Chronicle of Higher Ed highlighted a study in 2011 suggesting that 7 of 10 students have skipped buying a textbook because of cost. More recent online columns and blogs seem to perpetuate the notion, if not brag outright, that a student can succeed in college without buying books. In January, the Atlantic published a longer piece examining the reality that, despite the surge in online and other edtech resources, the cost of textbooks and/or their online equivalents remains exorbitantly high. And in the context of the financial pressures that many students experience just paying tuition, room, and board, I guess it shouldn't surprise us much when already financially-strapped students take advantage of any alternative that might save them some money.

A few weeks ago, about forty faculty and staff gathered in the library to kickstart a conversation about Augustana students and textbooks. After discussing the financial realities of textbook costs, the conversation turned toward the ways in which we choose the textbooks and additional materials that we assign. Although this is something that we might take for granted at times (especially when one is scrambling to put a course together), it's an issue that more and more folks are trying to address. I'm sure there are plenty of examples, but three impressive efforts include the Open Textbook Library, College Open Textbooks, and the Open Educational Resources Commons. Most recently, 40 colleges have made the move to simply go without textbooks and only use freely available learning resources (see here and here).

At the end of the meeting, it seemed clear that we really need to know more about our students' engagement with the textbooks and additional materials assigned in our courses. One person posed an exceedingly logical suggestion: could we add a few questions to the end of every IDEA course feedback survey at the end of spring term asking about:

  • The amount that students spent on textbooks for a given class
  • How often they used the textbooks and additional materials they bought for that class
  • How effective those materials were in helping the student learn in that class

It seems like this would be particularly useful information. But before acting on any of these ideas, I think it’s important to know what you all think about gathering this information, what questions you might have about what is done with this information, and any other concerns you might have about this project.

So . . . . what do you think? Should we ask these questions? What should we do with the data? If we ask these questions, how do we need to be careful and transparent so that whatever we find 1) gives us a deeper understanding of our students' engagement with textbooks and additional materials, and 2) genuinely spurs our perpetual effort to improve in a way that fosters inclusiveness and understanding?

Please – send me your thoughts.  If you know my email, you can send them there. If you’d rather post in the comments section below, please post away.

Make it a good day,

Mark


Warming Perceptions across the Political Divide

Welcome back to campus for the headlining event – Spring Term! (about as likely a band name as anything else these days, right?).

At the very end of winter term, Inside Higher Ed published a short piece highlighting a study that suggested the first year of college might broaden students’ political views. The story reviewed findings (described in more depth here) from an ongoing national study of college students’ interfaith understanding development that goes by the acronym IDEALS (AKA, the Interfaith Diversity Experiences & Attitudes Longitudinal Survey). In essence, both politically conservative and politically liberal students (self-identified at the beginning of their first year in college) developed more positive perceptions of each other by the beginning of their second year. Since Augustana is one of the participating institutions in this study, I thought it might be interesting to see if our local data matches up with the national findings.

The IDEALS research project is designed to track change over four years, asking students to complete a set of survey questions at the beginning of the first year (fall 2015), at the beginning of the second year (fall 2016), and at the end of the fourth year (spring 2019). Many of the survey questions ask individuals about their perceptions of people of different religions, races, ethnicities, and beliefs. For the purposes of this post, I'll focus on the four paired statements listed below, zeroing in on the responses from conservative students about liberal students and from liberal students about conservative students.

  • In general, I have a positive attitude toward people who are politically conservative
  • In general, I have a positive attitude toward people who are politically liberal
  • In general, individuals who are politically conservative are ethical people
  • In general, individuals who are politically liberal are ethical people
  • In general, people who are politically conservative make a positive contribution to society
  • In general, people who are politically liberal make a positive contribution to society
  • I have things in common with people who are politically conservative
  • I have things in common with people who are politically liberal

For each item, the five response options ranged from “disagree strongly” to “agree strongly.”

First, let’s look at the responses from politically conservative students. The table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Conservative Students' Perceptions of Politically Liberal Students

Item                      Fall 2015    Fall 2016
Positive Attitudes           3.71         3.46
Ethical People               3.21         3.50
Positive Contributors        3.64         3.92
Positive Commonalities       3.23         3.29

Overall, it appears that conservative students' perceptions of liberal students improved during the first year. Scores on two items (ethical people and positive contributors) increased substantially. Perceptions of commonalities remained essentially the same, and a self-assessment of positive attitudes toward liberal students declined. Normally, the drop in positive attitudes would seem like a cause for concern, but conservative students' positive attitudes toward other conservatives dropped as well, from 4.29 to 3.92. So maybe it's just that the first year of college makes conservatives grouchy about everyone.

Second, let’s look at the responses from politically liberal students when asked to assess their perceptions of politically conservative students. Again, the table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Liberal Students' Perceptions of Politically Conservative Students

Item                      Fall 2015    Fall 2016
Positive Attitudes           3.61         3.65
Ethical People               3.58         3.78
Positive Contributors        3.33         3.76
Positive Commonalities       3.31         3.69

It appears that liberal students’ views of conservative students improved as well, maybe even more so. While positive attitudes about conservative students didn’t change, perceptions of conservatives as ethical people, positive contributors to society, and people with whom liberals might have things in common increased significantly.
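To make the shifts in the two tables easier to compare, here is a quick sketch that computes the change for each item (the numbers are copied straight from the tables above):

```python
# Change from fall 2015 to fall 2016 for each perception item.
# Positive deltas mean perceptions warmed over the first year.
tables = {
    "Conservative students about liberal students": {
        "Positive Attitudes": (3.71, 3.46),
        "Ethical People": (3.21, 3.50),
        "Positive Contributors": (3.64, 3.92),
        "Positive Commonalities": (3.23, 3.29),
    },
    "Liberal students about conservative students": {
        "Positive Attitudes": (3.61, 3.65),
        "Ethical People": (3.58, 3.78),
        "Positive Contributors": (3.33, 3.76),
        "Positive Commonalities": (3.31, 3.69),
    },
}

for group, items in tables.items():
    print(group)
    for item, (fall_2015, fall_2016) in items.items():
        print(f"  {item:23s} {fall_2016 - fall_2015:+.2f}")
```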

Although the repeated gripe from conservative pundits is that colleges are a bastion of liberalism indoctrinating young minds, research (here and here) seems to contest this assertion. While the findings above don't directly address students' changing political beliefs, they do suggest that both politically conservative and politically liberal students' perceptions of the other shift in a positive direction (i.e., they perceive each other more positively after the first year). This would seem to bode well for our students, our campus community, and the communities in which they will reside after graduation. Because no matter how any of these students' political views might change over four years in college, more positive perceptions of each other set the stage for better interactions across differing belief systems. And that is good for all of us.

If we situate these findings in the context of a four-year period of development, I think we ought to be encouraged by these findings, no matter if we lean to the left or to the right. Maybe, even in the midst of all the Sturm und Drang we’ve experienced in the past few years, we are slowly developing students who are more equipped to interact successfully despite political differences.

Make it a good day,

Mark

What experiences improve our students' inclination toward complex thinking?

I’ve always been impressed by the degree to which the members of Augustana’s Board of Trustees want to understand the sometimes dizzying complexities that come with trying to nudge, guide, and redirect the motivations and behaviors of young people on the cusp of adulthood. Each board member that I talk to seems to genuinely enjoy thinking about these kinds of complicated, even convoluted, challenges and implications that they might hold for the college and our students.

This eagerness to wrestle with ambiguous, intractable problems exemplifies the intersection of two key Augustana learning outcomes that we aspire to develop in all of our students. We want our graduates to have developed incisive critical thinking skills and we want to have cultivated in them a temperament that enjoys applying those analytical skills to solve elusive problems.

Last spring Augustana completed a four-year study of one aspect of intellectual sophistication. We chose to measure the nature of our students’ growth by using a survey instrument called the Need for Cognition Scale, an instrument that assesses one’s inclination to engage in thinking about complex problems or ideas. Earlier in the fall, I presented our findings regarding our students’ growth between their initial matriculation in the fall of 2013 and their graduation in the spring of 2017 (summarized in a subsequent blog post). We found that:

  1. Our students developed a stronger inclination toward thinking about complex problems. The extent of our students' growth mirrored the growth we saw in an earlier cohort of Augustana students who participated in the Wabash National Study between 2008 and 2012.
  2. Different types of students (defined by pre-college characteristics) grew similar amounts, although not all students started and finished with similar scores. Specifically, students with higher HS GPA or ACT/SAT scores started and finished with higher Need for Cognition scores than students with lower HS GPA or ACT/SAT scores.

But, as with any average change-over-time score, there are lots of individual cases scattered above and below that average. In many ways, that is often where the most useful information is hidden. Because if the individuals who produce change-over-time scores above, or below, the average are similar to each other in some other ways, teasing out the nature of that similarity can help us figure out what we could do more of (or less of) to help all students grow.

At the end of our first presentation, we asked for as many hypotheses as folks could generate about experiences that they thought might help or hamper gains on the Need for Cognition Scale. Then we went to work testing every hypothesis we could possibly test. Taylor Ashby, a student working in the IR office, did an incredible job taking on this monstrous task. After several months of pulling datasets together, constructing new variables to approximate many of the hypotheses we were given, and running all kinds of statistical analyses, we made a couple of pretty interesting discoveries that could help Augustana get even better at developing our students' inclination or interest in thinking about complex problems or ideas.

To keep all of the hypotheses that folks suggested manageable, we organized them into two categories: participation in particular structured activities (e.g., being in the choir or completing a specific major) and experiences that could occur across a range of situations (e.g., reflecting on the impact of one's interactions across difference or talking with faculty about theories and ideas).

First, we tested all of the hypotheses about participation in particular structured activities. We found five specific activities to produce positive, statistically significant effects:

  • service learning
  • internships
  • research with faculty
  • completing multiple majors
  • volunteering when it was not required (as opposed to volunteering when obligated by membership in a specific group)

In other words, students who did one or more of these five activities tended to grow more than students who did not. This turned out to be true regardless of a student's race/ethnicity, sex, socioeconomic status, or pre-college academic preparation. Furthermore, each of these experiences produced a unique, statistically significant effect when they were all included in the same equation. This suggests a cumulative effect: students who participated in all of these activities grew more than students who participated in only some of them.
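For readers who want a concrete picture of what "a unique effect when all are included in the same equation" looks like, here is a hedged sketch of that kind of model. The column names and synthetic data are entirely hypothetical; the real analysis used our matched pre/post Need for Cognition records and fuller controls.

```python
# A sketch of a regression in which all five activities enter one
# equation, so each coefficient estimates that activity's unique
# association with growth. Data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "nfc_senior": rng.normal(3.5, 0.5, n),        # senior-year NFC score
    "nfc_entry": rng.normal(3.3, 0.5, n),         # entering NFC score (control)
    "hs_gpa": rng.normal(3.4, 0.4, n),            # pre-college preparation control
    "service_learning": rng.integers(0, 2, n),    # 0/1 participation flags
    "internship": rng.integers(0, 2, n),
    "faculty_research": rng.integers(0, 2, n),
    "multiple_majors": rng.integers(0, 2, n),
    "volunteered_unrequired": rng.integers(0, 2, n),
})

model = smf.ols(
    "nfc_senior ~ nfc_entry + hs_gpa + service_learning + internship"
    " + faculty_research + multiple_majors + volunteered_unrequired",
    data=df,
).fit()
print(model.summary())
```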

Second, we tested all of the hypotheses that focused on more general experiences that could occur in a variety of settings. Four experiences appeared to produce positive, statistically significant effects:

  • The frequency of discussing ideas from non-major courses with faculty members outside of class.
  • Knowledge among faculty in a student’s major of how to prepare students to achieve post-graduate plans.
  • Faculty interest in helping students grow in more than just academic areas.
  • The degree to which one-on-one interactions with faculty positively influenced students' intellectual growth and interest in ideas.

In addition, we found one effect that sort of falls in between the two categories described above. Remember that having a second major appeared to produce a positive effect on the inclination to think about complex problems or ideas? Well, within that finding, Taylor discovered that students who said that faculty in their second major emphasized applying theories or concepts to practical problems or new situations “often” or “very often” grew even more than students who simply reported a second major.

So what should we make of all these findings? And equally important, how do we incorporate these findings into the way we do what we do to ensure that we use assessment data to improve?

That will be the focus of the spring term Friday Conversation with the Assessment for Improvement Committee.

Make it a good day,

Mark

Should the male and female college experience differ?

The gap between males and females at all levels of educational attainment paints a pretty clear picture. Males complete high school at lower rates than females. Of those who finish high school, males enroll in college at lower rates than females. This pattern continues in college, where men complete college at lower rates than women. Of course, some part of the gap in college enrollment is a function of the gap in high school completion, and some part of the gap in college completion is a function of the gap in college enrollment. But overall, it still seems apparent that something troubling is going on with boys and young men in terms of educational attainment. Yet looking solely at these outcome snapshots does very little to help us figure out what we might do if we wanted to reverse these trends.

A few weeks ago, I dug into some interesting aspects of the differences in our own male and female enrollment patterns at Augustana, because understanding the complexity of the problem is a necessary precursor to actually solving it. In addition, last year I explored some differences between men and women in their interest in social responsibility and volunteering behaviors. Today, I’d like to share a few more differences that we see between male and female seniors in their responses to senior survey questions about their experience during college.

Below I’ve listed four of the six senior survey questions that specifically address aspects of our students’ co-curricular experience. In each case, there are five response options ranging from strongly disagree (1) to strongly agree (5). Each of the differences shown below between male and female responses is statistically significant.

  • My out-of-class experiences have helped me connect what I learned in the classroom with real-life events.
    • Men – 3.86
    • Women – 4.17
  • My out-of-class experiences have helped me develop a deeper understanding of myself.
    • Men – 4.10
    • Women – 4.34
  • My out-of-class experiences have helped me develop a deeper understanding of how I interact with someone who might disagree with me.
    • Men – 4.00
    • Women – 4.28
  • My co-curricular involvement helped me develop a better understanding of my leadership skills.
    • Men – 4.14
    • Women – 4.35

On one hand, we can take some comfort in noting that the average responses in all but one case equate with “agree.” However, when we find a difference across an entire graduating class that is large enough to result in statistical significance we need to take, at the very least, a second look.
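For the methodologically curious, here is a minimal sketch of the kind of test that sits behind a "statistically significant difference" claim like the one above. The simulated responses are hypothetical stand-ins whose means roughly match the first item; the real test used the actual senior survey responses.

```python
# Welch's two-sample t-test on simulated 1-5 survey responses.
# Means, spreads, and group sizes here are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
men = rng.normal(3.86, 0.90, 200).clip(1, 5)
women = rng.normal(4.17, 0.80, 250).clip(1, 5)

t_stat, p_value = stats.ttest_ind(men, women, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```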

Why do you think these differences are appearing in our senior survey data? Is it just a function of the imprecision that comes with survey data? Maybe women tend to respond in rosier terms right before graduation than men do? Or maybe there really is something going on here that we need to address. One way to test that possibility is to ask whether there might be other evidence, anecdotal or otherwise qualitative, that corroborates these findings. Certainly, the prior evidence I've noted and linked above should count for something, but it also comes from the senior survey.

Recent research on boys and young men suggests that these differences in our data shouldn't come as a surprise (check out the books Guyland (I found a free pdf of the book!) and Angry White Men, or a TED Talk by Philip Zimbardo, for a small sample of the scholarship on men's issues). This growing body of scholarship suggests that the differences we see between males and females begin to emerge long before college, but also that we are not powerless to reverse some of the disparity.

At the board meetings this weekend, we will be talking about some of these issues. In the meantime, what do you think? And if you think that these differences in our data ought to be taken seriously, does it mean that we ought to construct educationally appropriate variations in the college experience for men and women?

I’d love to read what you think as you chew on this.

Make it a good day,

Mark

Anticipating what our students need to know is SO complicated!

Over the last few weeks, I’ve been wrestling with a couple of data trends and their accompanying narratives that seem pretty important for colleges like ours. However, unlike most posts in which I pretend to have some answers, this time I’m just struggling to figure out what it all means. So this week, I’m going to toss this discombobulated stew in your lap and hope you can help me sort it all out (or at least clean up some of the mess!).

First, the pressure on colleges to prepare their students to graduate with substantial “work readiness” appears to be at an all-time high. The Gallup Organization continues to argue that employers don’t think college graduates are well-prepared for success in the workplace. Even though there is something about the phrase “work readiness” that makes me feel like I just drank sour milk, we have to admit that preparing students to succeed in a job matters, especially when student loan debt is now such a large, and often frightening, part of the calculus that determines if, and where, a family can send their kids to college. Put all this together and it’s no wonder why students overwhelmingly say that the reason they want to go to college is to get a good-paying job.

Underneath all of this lies a pretty important assumption about what the world of work will be like when these students graduate. Student loans take, on average, 21 years to pay off, and the standard repayment agreement for a federal student loan is a 10-year plan. So it would seem reasonable that students, especially those who take out loans to pay for college, would anticipate that the job for which college prepares them should in most cases outlast the time it takes for them to pay off their loans. I’m not saying that everyone thinks this through completely, but I think most folks are assuming a degree of stability and income in the job they hope to obtain after earning a college degree, making the loans that they take out to pay for college a pretty safe bet.

But this is where it gets dicey. The world of work has been undergoing a seismic shift over the past several decades. The most recent report from the Bureau of Labor Statistics suggests that, on average, a person can expect to have 12 jobs between the ages of 18 and 50. What’s more, the majority of those job changes occur between the ages of 18 and 34 – the same period of time during which one would be expected to pay off a student loan. Moreover, between 2005 and 2015, almost all of the jobs added to the economy fit into a category called “alternative work.” This category of work includes contract labor, independent work, and any sort of temporary job (in addition to the usual suspects, think Turo, Lyft, or TaskRabbit). Essentially, these are jobs that are either spun as “providing wonderful flexibility” or depressingly described as depending on “the whim of the people.” As with so many other less-than-attractive realities, someone put a bow on it and labeled this whole movement “the gig economy” (sounds really cool except there’s no stage lighting or rock and roll glamor). It’s no surprise that the gig economy presents a rather stark set of downsides for individuals who choose it (or get sucked into it by circumstances beyond their control).

So what does all of this mean for colleges like ours that are (whether we like it or not) obligated to focus a lot of our attention on preparing students for a successful professional life?  I don’t have many great answers to this one. But a couple of questions seem pretty important:

  • To what degree are we responsible for ensuring that our students are financially literate and can manage through the unpredictability that seems likely for many early in their career?
  • What knowledge, skills, or dispositions should we prioritize to help our students thrive in a professional life that is almost certain to include instability, opportunity, and unexpected change?

Of all the possible options that an 18-year-old could sign up for, a small liberal arts college seems like it ought to be the ideal place for learning how to navigate, even transcend, the turbulent realities that seem more and more an unavoidable part of the world of work. But without designing what we do so that every student has to encounter this stuff, we leave that learning up to chance. And as usual, the students who most need to learn this stuff are the ones who are least likely to find it on their own. Looks like we'd better roll up our sleeves and get to work!

Make it a good day,

Mark