Designing semesters bit by bit – Look what we can do!

In the midst of all the inevitable end-of-spring-term craziness, the thought of contemplating one more semester design vote doesn’t seem all that appealing. Arguably, the question of whether or not to include advising within our calculus of faculty load is the most complicated of the many decisions we’ve made this year. I don’t fault anyone one bit for feeling overwhelmed, or even a little crabby, about this last vote – no matter what you think we ought to do.

But in the midst of all this, I think it is worthwhile to step back a little and have a look at what we’re on the verge of accomplishing. You might not be in the mood for hyperbole at the moment, but the truth is that we are about to complete something that almost no other institution has done. We’ve actually designed an entire semester calendar and curriculum framework out in the open, step by step, modeling the implications of all the competing issues from the very beginning and then remodeling the implications of each decision on the larger picture at each step along the way. That isn’t to say that we’ve done everything perfectly – after all, we are no more than a bunch of imperfect yokels trying to pull off something extraordinary, something that few schools have ever done and that most wouldn’t dare to try. Call me a Pollyanna, but if you zoom out and look back at what we’ve accomplished this year, you’d be hard pressed not to be impressed.

What have we done since September?  Here are the decisions faculty have made that set each of the major elements of the new semester design in place.

  • Voted for an immersive term (140 to 26)
  • Voted for a 4-credit course base instead of a 3-credit course base (126 to 37)
  • Voted for the immersion term to occur in January instead of May (136 to 35)
  • Voted for 124 credits to graduate instead of 120 (92 to 71)
  • Approved the structure proposed for General Education (109 to 31)
  • Approved the second language requirement (unanimous)
  • Approved a framework for major design and footprint (unanimous)

Other than the vote about the total number of credits to graduate, each vote seems to reflect a clear sense among the community about the direction that is best for us.

In addition, two faculty votes have provided advisory recommendations to the Board of Trustees, the body that makes the final decision on these two specific issues.

  • Voted for a pre-Labor Day start to the academic year (67 to 59)
  • Voted that tuition should cover a relatively larger number of credits per year rather than a relatively smaller number (98 to 62)

All in all, the amount of intellectual and emotional work that we have successfully sorted through to accomplish all of these decisions is truly extraordinary. It’s hard to imagine anyone NOT feeling at least a little bit more tired than normal these days.

So even if you’re feeling like you are running on fumes, try to take a second, breathe deeply, and look at how much we have accomplished. I, for one, am truly amazed and humbled. It’s an honor to be able to call myself a member of the Augustana community.

Make it a good day,

Mark

What’s all this talk about big data?

Maybe it hasn’t popped up on your radar yet, but it seems like everywhere one turns these days there’s another perfectly coiffed Nostradamus-impersonator lauding the inevitable big data revolution that’s just around the corner for higher education.

In case you’re wondering what I think about big data and all of the hubbub about it, I’ve shared a link to something I wrote for the Chronicle of Higher Education recently that they titled, “Big Data, Scant Evidence.” If you can’t access it from where you are reading this post but really want to read the piece, send me a note and I’ll try to get an unlocked copy to you. My article is part of a larger supplement published last week about the big data trend in higher education. You might find some of the other articles interesting, although it’s hard to read some of this stuff and not think, “Isn’t this what we’ve been doing at Augustana for a while now?” Well . . . yes. Except that we aren’t necessarily a big enough place to produce big data. So what do we call our data? Diminutive? Pocket-sized? Lean?

Whatever you want to call it, we seem to be pretty good at improving based upon solid information.

Make it a good day,

Mark

A Shameless Plea

First of all, I owe you all a hearty heap of thanks for your patience this spring. For a couple of reasons, some of which can be chalked up to coincidence and some of which can be blamed squarely on me, we are participating in more than the usual number of surveys this term.

Of course, I’d be remiss if I didn’t say something about the awesome data that we will have at the end of this term and how much it will likely inform the ways that we keep trying to improve our campus. But you’ve heard all that from me before, and by now you either believe me or you don’t.

Nonetheless . . .

We really need your help in getting first-year students to respond to our End of the First Year survey. It’s especially important because we’ve been paying close attention to the experience of various subgroups of students (e.g., African-American students, Hispanic students, first-generation students, students coming from particularly low-income families) that have historically not succeeded at the same rates as more affluent white students. In order to have the most robust data from these students, we need to do everything in our power to encourage participation.

And this leads me to my shameless plea.

Please, please, please: if you interact with first-year students or have the wherewithal to communicate with first-year students, would you please take 30 seconds to make a personal plea on behalf of the college and encourage them to complete the first-year survey? All first-year students received an email earlier today inviting them to take the survey. I’ll send the link to anyone who would like one so that they can include it on their course’s Moodle site or web page.

Thanks very much. It really does make a difference.

Make it a good day,

Mark

And it’s down to three . . .

Good morning everyone!

It’s not every week that you get to see three pretty smart people talk about the way that they might approach a leadership role as provost at Augustana College.  So if you can find a way to be there, I hope you’ll come to see each of the provost candidates present this week.

Each of them will present at 11 AM and at 3 PM on Monday, Tuesday, and Wednesday, respectively. The afternoon presentation is the same as the morning one, so you can come to one or the other.

Your participation in this process matters for several reasons.

  1. The more feedback the search committee receives after all three finalists have been to campus, the better.
  2. The more questions the candidates field, the better everyone in attendance can gauge each candidate’s approach to public communication.
  3. The more people attend these presentations, the more clearly we communicate to each candidate how invested we are in our provost and the college.

So come on down to the Wilson Center as often as you can make it.

Make it a good day,

Mark

Improving our first-year advising: sometimes structure does matter

If you’ve been reading this blog for a while, you’ve almost certainly seen some of my posts about the data we’ve collected to assess and guide our advising practices at Augustana College (here, here, and here). However, those posts only get at part of the story. Since all of those posts drew from senior survey data, we can be almost sure that those findings primarily reflect our students’ advising experiences in their major(s). But we also know that first-year advising matters a lot. Many would argue it matters at least as much as major advising. So I’d like to dive into some of the advising data from our first-year students and see if there’s anything that we can learn from it.

In this post I’d like to focus on two items that we know are important for a successful first-year experience. First-year students answered these questions late in their fall term.

  1. My first-year adviser connected me with other campus offices, resources, or opportunities (offices like Student Activities, the Community Engagement Center, the Counseling Center) to help me succeed during my first year.
  2. My first-year adviser made me feel like I could succeed at Augustana.

The table below presents the average response scores to these items over the last four years. The response options were strongly disagree, disagree, neutral, agree, and strongly agree. These responses were converted to a 1-5 scale where 1 equals strongly disagree and 5 equals strongly agree.

Question    2013-14    2014-15    2015-16    2016-17
My first-year adviser connected me with other campus offices, resources, or opportunities to help me succeed during my first year.    3.55    3.62    3.83    3.89
My first-year adviser made me feel like I could succeed at Augustana.    4.07    4.20    4.25    4.21

You can see that we’ve improved on both measures since 2013-14. I know that our first-year advising program has emphasized the importance of connecting students with the campus offices that can best help them, and it’s heartening to see that this effort may be producing results. With that said, it looks like we still have room to improve, since our average score hasn’t quite surpassed “agree” yet. By contrast, in each of the last four years our students, on average, “agree” that we have made them feel like they could succeed at Augustana.

Interestingly, while the improvement in referring students to other campus resources seems fairly consistent, the improvement in making students feel like they could succeed seems to have plateaued over the last couple of years. But digging a little deeper, there is a wrinkle in our 2016-17 data that seems both to explain this plateau and to further emphasize the value of moving to the first-year advising structure that the faculty has now approved for implementation next year.

This year (i.e., during the fall of 2016), about a third of our first-year student advising groups were enrolled in an FYI-100 course instead of merely meeting informally with their adviser throughout the term. For the students who were enrolled in this class, the average response to the statement “My first-year adviser made me feel like I could succeed at Augustana” was 4.34. For the students who were not enrolled in this class (about two-thirds of the whole group), the average response was 4.17.
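
For anyone curious about the mechanics, a comparison like this is just a grouped mean. Here’s a minimal sketch of how one might compute it in Python; the file name and column names are hypothetical stand-ins, not the actual names in our dataset.

    # Minimal sketch: subgroup means for a single survey item.
    # The file and column names here are hypothetical.
    import pandas as pd

    df = pd.read_csv("first_year_fall_survey.csv")

    # Mean response to the "made me feel like I could succeed" item,
    # split by whether the advising group was enrolled in FYI-100.
    print(df.groupby("enrolled_fyi")["succeed_score"].mean().round(2))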

Many longtime advisers said that the FYI-100 format helped them develop stronger relationships with their advisees. These advisers indicated that the stronger relationships allowed them to engage in more substantive conversations that, in turn, helped the students think more deeply about the nature of their college experience and the ways in which they could make the most of it.

As wonderful as it is to hear that we seem to be making improvements in our advising practices, it is even more exciting to see data confirming these bold strides toward even better first-year advising.

Make it a good day,

Mark

Wondering about fall-to-spring retention? Well, guess what?!

Even though most of us only wonder about retention in the fall, down here in the belly of the data beast we’ve been paying closer attention to our term-to-term retention rates for each cohort of students. Although those numbers can fluctuate more than annual rates do, they can also give us an early hint about whether we are trending in the wrong direction, trending in the right direction, or just holding steady. More specifically, in the context of last year’s excitement over a record high fall-to-fall retention rate for first-year students, it makes some sense to have a look and see if our broader retention efforts are continuing to hold strong … or if all of last year’s hubbub was just that.

So now that we’ve locked in our enrollment numbers for the spring term we can calculate fall-to-spring retention rates for each cohort. Although we also record winter-to-spring retention rates, it seems like it’s a little easier to make sense of fall-to-spring numbers since winter-to-spring rates are, in essence, a percentage of a proportion (i.e., the number of students enrolled in winter term is only a percentage of those who enrolled in the fall, so a winter-to-spring retention rate by itself can be deceiving).
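
If it helps to see why, here’s a toy illustration, with made-up numbers, of how a winter-to-spring rate taken by itself can look rosier than it should.

    # Toy numbers only -- why winter-to-spring rates can deceive.
    fall = 700    # students enrolled in the fall
    winter = 660  # of those, still enrolled in winter term
    spring = 650  # of those, still enrolled in spring term

    print(f"fall-to-spring:   {spring / fall:.1%}")    # 92.9%
    print(f"winter-to-spring: {spring / winter:.1%}")  # 98.5%
    # The 98.5% hides everyone already lost between fall and winter.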

To put our present numbers in context, the table below shows last year’s fall-to-spring (2015-16) retention rates, the prior three-year average (Academic Years 2013, 2014, 2015) fall-to-spring retention rates, and finally our most current fall-to-spring retention rates.

Cohort    2015-16 Fall/Spring    Prior 3-Year Average (13/14, 14/15, 15/16)    2016-17 Fall/Spring
1st year    94.2%    93.7%    93.8%
2nd year    96.8%    95.9%    97.2%
3rd year    97.5%    97.0%    98.1%
4th year    92.9%    93.4%    95.2%

A couple of things jump out. First, our first-year fall-to-spring retention rate is down slightly from last year’s high. To put this difference in real terms, we would have needed to retain three additional students to match last year’s percentage. However, we did manage to beat the prior three-year average by a hair, which is often a good way to tell if we are headed in the right direction. It’s also good to remind ourselves that a few years ago, we estimated that if everything went perfectly with a first-year class, the best fall-to-fall retention rate we could hope for would be 90%. Last year’s record fall-to-fall rate hit 88.9%. So we are already close to banging our heads on the proverbial ceiling. We will just have to wait to see how this fall-to-spring number translates into a fall-to-fall retention rate for the first-year cohort.

Retention within the 2nd, 3rd, and 4th year cohorts is a different story. In each case, our fall-to-spring retention rate clearly beats both last year’s rate as well as the prior three-year average. Some folks have rightly suggested that we should be careful not to lose touch with the needs of upperclass students as we strive to bring up our first-year retention rate. These numbers seem to suggest that we might have managed to maintain that balance pretty well.

And if you are wondering whether all of these increased retention rates translate into more students on campus this spring, indeed they do. Last year at this time, we had a student FTE of 2345 (FTE stands for “full-time equivalent” and is calculated by taking the number of full-time students and adding a third of the total number of part-time students – the assumption being that three part-time students roughly equal one full-time student). Coincidentally, the prior three-year average spring FTE is also 2345.
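
In case the formula is easier to see as code, here’s a quick sketch. The head counts below are invented for illustration; only the formula itself comes from the definition above.

    # Sketch of the FTE formula described above (head counts are made up).
    def spring_fte(full_time: int, part_time: int) -> float:
        # Three part-time students count as roughly one full-time student.
        return full_time + part_time / 3

    print(spring_fte(2350, 150))  # 2400.0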

But this spring, our FTE is 2399. That’s the largest spring FTE we’ve ever recorded.

Congratulations to everyone for your hard work on behalf of our students! In the face of all the budget pressures that we can’t control, it’s really heartening to see us so successful on one metric that we can influence.

Make it a good day,

Mark

This March, it’s Survey Madness!

Even folks who are barely familiar with social science research know the term “survey fatigue.” It describes a phenomenon, empirically supported now by a solid body of research, in which people who are asked to take surveys seem to have only a finite amount of tolerance for it (shocking, I know). So as a survey gets longer, respondents tend to skip questions or take less time answering them carefully. When the term first emerged, it primarily referred to something that could happen within an individual survey. But now that solicitations to take surveys seem to appear almost everywhere, the concept is appropriately applied in reference to a sort of meta survey fatigue.

But if we want to get better at something, we need information to guide our choices.  We ought to know by now that “winging it” isn’t much of a strategy. So we need to collect data, and oftentimes survey research is the most efficient way to do that.

Therefore, in my never-ending quest to turn negatives into positives, I’m going to launch a new phrase into the pop culture ether. Instead of focusing on the detrimental potential of “survey fatigue,” I’m going to ask that we all dig down and build up our “survey fitness.”

Here’s why . . .

In the next couple of months, you are going to receive a few requests for survey data. Many of you have already received an invitation to participate in the “Great Colleges to Work For” survey. The questions in this survey try to capture a sense of the organization’s culture and employee engagement. For all of you who take pride in your curmudgeonly DNA, I can’t argue with your criticism of the name of that survey. But they didn’t ask me when they wrote it, so we’re stuck with it. Nonetheless, the findings actually prove useful. So please take the time to answer honestly if you get an email from them.

The second survey invitation you’ll receive is for a new instrument called The Campus Living, Learning, and Work Environment. It tries to tackle aspects of equity and inclusion across a campus community. One of the reasons I signed on for this study is that it is the first I know of to survey the entire community – faculty, staff, administration, and students. We have been talking a lot lately about the need for this kind of comprehensive data, and here is our chance to get some.

So if you find yourself getting annoyed at the increased number of survey requests this spring, you can blame it all on me. You are even welcome to complain to me about all the surveys I’ve sent out this term if that is what it takes to get you to complete them. And if you start to worry about survey fatigue in yourself or others during the next few months, think of it as an opportunity to develop your survey fitness! And thanks for putting up with a few more requests for data than usual. I guarantee that I won’t let the data just sit at the bottom of a hard drive.

Make it a good day,

Mark

Additional evidence that our first generation students might need more explicit guidance

Sometimes social science researchers get too excited about testing new hypotheses and forget about the importance of retesting old ones. Although it’s understandable (why drive a used car when you could drive a new car?), this tendency is exceedingly detrimental to the body of knowledge we claim to have. Because no matter how perfect the study design or how fantastic the results, one set of findings just doesn’t mean that much – a reality that often gets lost in the hype.

In recent years, the tendency to overhype a single set of findings has become the subject of much hand-wringing. In 2010, the New Yorker published a long piece about a phenomenon called the decline effect, in which efforts to replicate prior studies increasingly produce comparatively smaller and sometimes even insignificant results. Such results call into question the validity of many prior research findings. A 2013 article in the Economist outlined other research that produced similarly chilling reminders of the fallibility of science and scientists. And the conundrum gets really weird: a 2015 replication study that appeared to challenge the validity of 100 well-known psychology findings was itself taken apart by a 2016 study that critiqued many of the 2015 study’s replication designs and summary conclusions.

I say all this to set up what might otherwise seem like a pretty mundane data point about first-generation students. But first, what do we think we know about first-gen students?

According to the current body of research on first-generation students, the existing evidence suggests that these students are more likely to lack basic knowledge about how college is supposed to work. In the absence of this knowledge, the fog is a little thicker, the path is less clear, and they are more susceptible to feeling lost and uncertain about their progress. All of this sets up an increased vulnerability that heightens the potential for difficulty and early departure. Although we can see the gap in first-to-second-year retention rates between first-gen students and their peers, differences in retention rates don’t necessarily confirm the more granular elements of prior findings about the first-gen experience.

To find that kind of granular confirmation, we need to identify specific items in the first year surveys that could suss out these differences, parse the array of data we gather from first year students by first generation status, and test for statistically significant differences.

One prime possibility is a survey item from the end of the first year that asks first-year students to respond (i.e., choosing from 5 options that range from strongly disagree to strongly agree) to the statement, “Reflecting on the past year, I can think of specific experiences or conversations that helped me clarify my life/career goals.” If the first-generation student experience involves a relatively higher frequency of feeling lost or unsure about how to connect all of one’s activities, classes, and experiences into a coherent narrative, then first generation student responses, on average, should end up lower (and statistically significantly lower) than the overall response.

It turns out that this gap in average responses is profound. While the overall average score is 3.83 (which translates to just south of “agree”), the average score for first-gen students is 3.23 (just north of “neither agree nor disagree”), a gap that amounts to an “extremely” statistically significant difference (i.e., p < .001 for all you quant nerds out there). Since we can conclude from these two mean scores that the average response from non-first-gen students is a good bit higher than 3.83, it’s even more clear that whatever is going on isn’t merely a function of chance.
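
For the quant nerds who want to see the machinery, here’s roughly how a test like that might be run. This is a hedged sketch rather than our actual analysis script, and the file and column names are assumptions.

    # Sketch: testing the first-gen gap with Welch's two-sample t-test.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("end_of_first_year_survey.csv")  # hypothetical file

    goals = "goal_clarity"  # hypothetical name for the 1-5 goals item
    first_gen = df.loc[df["first_gen"] == 1, goals].dropna()
    others = df.loc[df["first_gen"] == 0, goals].dropna()

    # Welch's version doesn't assume equal variances or group sizes.
    t, p = stats.ttest_ind(first_gen, others, equal_var=False)
    print(f"first-gen mean = {first_gen.mean():.2f}, "
          f"others = {others.mean():.2f}, p = {p:.4f}")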

It’s possible that this difference mirrors the degree to which first-generation students simply do not engage in as many potentially influential activities and experiences as other students. If this were the case, we’d likely see these differences emerge elsewhere in the data. However, every other measure of involvement and participation suggests that there are no differences in frequency of engagement between first-generation students and their peers.

So maybe this difference in recalling specific experiences or conversations that helped clarify life/career goals is exactly the kind of thing that we might expect based on our prior understanding of first-generation students’ experience. Maybe first-gen students are engaged in the same average number of experiences as other students, but they are less likely to recognize the potential value of these experiences. As a result, maybe not knowing to look for the potential value of an experience makes it less likely that these students would see a way to connect these experiences to a longer-term goal.

It seems that this finding fits with our prior understanding of first-generation students. It also has important implications for the way that we talk with first-gen students about what they are doing in college. More than simply suggesting what they might do, it appears that first-gen students might need even more explicit guidance about how to reflect on the impact of a given experience, how that reflective activity might help them decide what experiences to prioritize, and how to connect what they might have learned through one experience with the developmental purpose of a subsequent experience.

In future years it’s very likely that a healthy proportion (about a third) of our new students will continue to be first-generation students. Much of what they don’t know about college is stuff that they don’t know they need to know. So our job is not only to tell them what they could do, but to show them how to decide what to do and how to use what they learn through those experiences to guide their future choices.

Make it a good (snow) day,

Mark

Differences in our students’ major experiences by race/ethnicity; WARNING: messy data ahead

It’s great to see the campus bustling again.  If you’ve been away during the two-week break, welcome back!  And if you stuck around to keep the place intact, thanks a ton!

Just in case you’re under the impression that every nugget of data I write about comes pre-packaged with a statistically significant bow on top, today I’d like to share some findings from our senior survey that aren’t so pretty. In this instance, I’ve focused on data from the nine questions that comprise the section called “Experiences in the Major.” For purposes of brevity, I’ve paraphrased each of the items in the table below, but if you want to see the full text of each question, here’s the link to the 2015-16 senior survey on the IR web page. The table below disaggregates the responses to each of these items by Hispanic, African-American, and Caucasian students. The response options are one through five, ranging either from strongly disagree to strongly agree or from never to very often (noted with an *).

Item Hispanic African-American Caucasian
Courses allowed me to explore my interests 3.86 3.82 4.09
Courses seemed to follow in a logical sequence 3.85 3.93 4.11
Senior inquiry brought out my best intellectual work 3.61 4.00 3.78
I received consistent feedback on my writing 3.72 4.14 3.96
Frequency of analyzing in class * 3.85 4.18 4.09
Frequency of applying in class * 3.87 4.14 4.15
Frequency of evaluating in class * 3.76 4.11 4.13
Faculty were accessible and responsive outside of class 4.10 4.21 4.37
Faculty knew how to prepare me for my post-grad plans 3.69 4.00 4.07
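
Before digging into what jumps out, here’s a sketch of how a disaggregated table like the one above might be built from raw survey responses. The file and column names are hypothetical.

    # Sketch: rebuilding a disaggregated means table from raw responses.
    import pandas as pd

    df = pd.read_csv("senior_survey_2015_16.csv")  # hypothetical file

    items = ["explore_interests", "logical_sequence", "si_best_work",
             "writing_feedback", "analyze_freq", "apply_freq",
             "evaluate_freq", "faculty_accessible", "postgrad_prep"]

    # Rows = survey items, columns = groups, cells = mean response (1-5).
    print(df.groupby("race_ethnicity")[items].mean().T.round(2))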

Clearly, there are some differences in average scores that jump out right away. The scores from Hispanic students are lowest among the three groups on all but one item. Sometimes there is little discernible difference between African-American and Caucasian students’ scores, while in other instances the gap between those two groups seems large enough to indicate something worth noting.

So what makes this data messy? After all, shouldn’t we jump to the conclusion that Hispanic students’ major experience needs substantial and urgent attention?

The problem, from the standpoint of quantitative analysis, is that none of the differences conveyed in the table meet the threshold for statistical significance. Typically, that means that we have to conclude that there are no differences between the three groups. But putting these findings in the context of the other things that we already know about differences in student experiences and success across these three groups (i.e., differences in sense of belonging, retention, and graduation) makes a quick dismissal of the findings much more difficult. And a deeper dive into the data adds both more useful insight and more mess.

The lack of statistical significance seems attributable to two factors. First, the number of students/majors in each category (570 responses from Caucasian students, 70 responses from Hispanic students, and 28 responses from African-American students) makes it a little hard to reach statistical significance. The interesting problem is that, in order to increase the number of Hispanic and African-American respondents, we would need to enroll more students from those groups – something that might happen, in part, as a result of improving the quality of those students’ experience. But if we adhere to the statistical significance threshold, we have to conclude that there is no difference between the three groups, which makes us less likely to take the steps that might improve the experience, which would in turn improve the likelihood of enrolling more students from these two groups and ultimately get us to the place where a quantitative analysis would find statistical significance.

The other factor that seems to be getting in the way is that the standard deviations among Hispanic and African-American students are unusually large. In essence, this means that their responses (and therefore their experiences) are much more widely dispersed across the range of response options, while the responses from white students are more closely packed around the average score.

So we have a small number of non-white students relative to the number of white students, and the range of experiences for Hispanic and African-American students seems unusually varied. Both of these findings make it even harder to conclude that “there’s nothing to see here.”

Just in case, I checked to see if the distribution of majors among each group differed. It did not. I also checked to see if there were any other strange differences between these student groups that might somehow affect these data. Although average incoming test score, the proportion of first-generation students, and the proportion of Pell Grant qualifiers differed, these differences weren’t stark enough to explain all of the variation in the table.

So the challenge I’m struggling with in this case of messy data is this:

We know that non-Caucasian students on average indicate a lower sense of belonging than their Caucasian peers. We know that our retention and graduation rates for non-white students are consistently lower than those for white students. We also know that absolute differences between two groups of .20-.30 are often statistically significant if the two groups are closer in size and if the standard deviation (aka dispersion) falls in an expected range.
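
To make that last point concrete, here’s a toy demonstration of how group size and dispersion decide whether the same .25 gap clears the significance bar. Every number below is invented for illustration.

    # Toy demo: the same 0.25 gap under different n's and spreads.
    from scipy.stats import ttest_ind_from_stats

    # Scenario A: balanced groups with a typical Likert spread.
    t, p = ttest_ind_from_stats(mean1=4.10, std1=0.9, nobs1=300,
                                mean2=3.85, std2=0.9, nobs2=300,
                                equal_var=False)
    print(f"balanced, typical spread: p = {p:.4f}")  # clearly significant

    # Scenario B: one small, widely dispersed group (closer to our data).
    t, p = ttest_ind_from_stats(mean1=4.10, std1=0.9, nobs1=570,
                                mean2=3.85, std2=1.3, nobs2=28,
                                equal_var=False)
    print(f"small, dispersed group:   p = {p:.4f}")  # likely not significant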

As a result, I can’t help thinking that just because a particular analytic finding doesn’t meet the threshold for statistical significance doesn’t necessarily mean that we should discard it outright. At the same time, I’m not comfortable arguing that these findings are rock solid.

In cases like these, one way to inform the inquiry is to look for other data sources with which we might triangulate our findings. So I ask all of you, do any of these findings match with anything you’ve observed or heard from students?

Make it a good day,

Mark

Can I ask a delicate question?

Since this is a crazy week for everyone, I’m going to post something that you can contemplate when you get the chance to relax your heart rate and breathe. I hope that you will give me the benefit of the doubt when you read this post, because I can imagine this question might be a delicate one. I raise it because I suspect it might help us navigate more authentically and more honestly through some obviously choppy waters as we make some key decisions about our new semester design.

Sometimes, when we advocate for the value of double majors and for maintaining similar, or even improved, access to double majors in the new semester system, it seems like the rationale is grounded in the belief that double-majoring is advantageous for Augustana graduates and, as a corollary, that relatively easy access to a double major helps recruit strong prospective students. In other instances, it sounds as if we advocate for ease of access to double-majoring because we are afraid that programs with smaller numbers of majors will not survive if we build a system that produces fewer double majors.

Without question, both rationales come from the best of places. Doing all that we can for the sake of our students’ potential future success or the possibility of attracting a stronger and larger pool of future students seems utterly reasonable. Likewise, ensuring the health of all current academic departments, especially those that currently enjoy a smaller number of majors, and therefore ensuring the employment stability of all current faculty, is also utterly reasonable.

Yet I wonder if our endeavor to design the best possible semester system would benefit from parsing these concerns more clearly, examining them as distinct issues, and addressing them separately as we proceed. Because it seems to me that prioritizing double-majoring because it benefits post-graduate success, prioritizing double-majoring because it improves recruiting, and prioritizing double-majoring because it ensures employment stability for faculty are not the same as more directly identifying the factors that maximize our students’ post-graduate success, optimizing our offerings (and the way we communicate them) to maximize our recruiting efforts, and designing a system that maintains employment stability and quality for all of our current faculty members. The first approach asserts a causal relationship and seems to narrow our attention toward a single means to an end. The second approach focuses our attention on a goal while broadening the potential ways by which we might achieve it.

Certainly we can empirically test the degree to which double-majoring increases our students’ post-graduate success or whether a double-major-friendly system strengthens our efforts to recruit strong students. We could triangulate our findings with other research on the impact of double-majoring on either post-graduate success or prospective student recruiting, and design a system that situates double-majoring to hit that sweet spot for graduates and prospective students.

Likewise, we could (and I would argue, should) design a new semester system that ensures gratifying future employment for all current faculty (as opposed to asking someone with one set of expertise and interests to spend all of their time doing something that has little to do with that expertise and interest). However, it seems to me that we might be missing something important if we assume, or assert, that we are not likely to achieve that goal of employment stability if we do not maintain historically similar proportions of double-majors distributed in historically similar ways.

Those of you who have explored the concept of design thinking know that one of its key elements is an openness to genuinely consider the widest possible range of options before beginning the process of narrowing toward a final product or concept. At Augustana we are trying to build something new, and we are trying to do it in ways that very few institutions have done before. Moreover, we aren’t building it from an infinite array of puzzle pieces; we are building it with the puzzle pieces that we already have. So it seems that we ought not box ourselves in prematurely. Instead, we might genuinely help ourselves by opening our collective scope to every possibility that 1) gives our students the best chance for success, 2) gives us the best chance to recruit future students, AND 3) uses every current faculty member’s strengths to accomplish our mission in a new semester system.

Please don’t misunderstand me – I am NOT arguing against double majors (on the contrary, I am intrigued by the idea). I’m only suggesting that, especially as we start to tackle complicated issues that tie into very real and human worries about the future, we are probably best positioned to succeed, both in process and in final product, to the degree that we directly address the genuine and legitimate concerns that keep us up at night. We are only as good as our people and our relationships with each other. I know we are capable of taking all of this into account as we proceed into the spring. I hope every one of you takes some time to relax and enjoy the break between terms so that you can start the spring refreshed and fully able to tackle the complex decisions that we have before us.

Make it a good day,

Mark