Can I ask a delicate question?

Since this is a crazy week for everyone, I’m going to try to post something that you can contemplate when you get a chance to relax your heart rate and breathe. I hope that you will give me the benefit of the doubt when you read this post, because I can imagine this question might be a delicate one. I raise it because I suspect it might help us navigate more authentically and honestly through some obviously choppy waters as we make some key decisions about our new semester design.

Sometimes, when we advocate for the value of double majors and for similar, or even improved, access to double-majoring in the new semester system, it seems like the rationale for this argument is grounded in the belief that double-majoring is advantageous for Augustana graduates and, as a corollary, that relatively easy access to a double major helps us recruit strong prospective students. In other instances, it sounds as if we advocate for ease of access to double-majoring because we are afraid that programs with smaller numbers of majors will not survive if we build a system that produces fewer double majors.

Without question, both rationales come from the best of places. Doing all that we can for the sake of our students’ potential future success, or for the possibility of attracting a stronger and larger pool of future students, seems utterly reasonable. Likewise, ensuring the health of all current academic departments, especially those that currently enjoy a smaller number of majors, and therefore ensuring the employment stability of all current faculty, is also utterly reasonable.

Yet I wonder if our endeavor to design the best possible semester system would benefit from parsing these concerns more clearly, examining them as distinct issues, and addressing them separately as we proceed. It seems to me that prioritizing double-majoring – because it benefits post-graduate success, improves recruiting, or ensures employment stability for faculty – is not the same as directly identifying the factors that maximize our students’ post-graduate success, optimizing our offerings (and the way we communicate them) to maximize our recruiting efforts, and designing a system that maintains employment stability and quality for all of our current faculty members. The first approach asserts a causal relationship and narrows our attention toward a single means to an end. The second approach focuses our attention on a goal while broadening the potential ways by which we might achieve it.

Certainly we can empirically test the degree to which double-majoring increases our students’ post-graduate success or whether a double-major-friendly system strengthens our efforts to recruit strong students. We could triangulate our findings with other research on the impact of double-majoring on either post-graduate success or prospective student recruiting and design a system that situates double-majoring to hit that sweet spot for graduates and prospective students.

Likewise, we could (and I would argue, should) design a new semester system that ensures gratifying future employment for all current faculty (as opposed to asking someone with one set of expertise and interests to spend all of their time doing something that has little to do with that expertise and interest). However, it seems to me that we might be missing something important if we assume, or assert, that we are not likely to achieve that goal of employment stability if we do not maintain historically similar proportions of double-majors distributed in historically similar ways.

Those of you who have explored the concept of design thinking know that one of its key elements is an openness to genuinely consider the widest possible range of options before beginning the process of narrowing toward a final product or concept. At Augustana we are trying to build something new, and we are trying to do it in ways that very few institutions have done before. Moreover, we aren’t building it from an infinite array of puzzle pieces; we are building it with the puzzle pieces that we already have. So it seems that we ought not box ourselves in prematurely. Instead, we might genuinely help ourselves by opening our collective scope to every possibility that 1) gives our students the best chance for success, 2) gives us the best chance to recruit future students, AND 3) uses every current faculty member’s strengths to accomplish our mission in a new semester system.

Please don’t misunderstand me – I am NOT arguing against double majors (on the contrary, I am intrigued by the idea). I’m only suggesting that, especially as we start to tackle complicated issues that tie into very real and human worries about the future, we are probably best positioned to succeed, both in process and in final product, to the degree that we directly address the genuine and legitimate concerns that keep us up at night. We are only as good as our people and our relationships with each other. I know we are capable of taking all of this into account as we proceed into the spring. I hope every one of you takes some time to relax and enjoy the break between terms so that you can start the spring refreshed and fully able to tackle the complex decisions that we have before us.

Make it a good day,

Mark

 

Time to break out the nerves of steel

When I used to coach soccer, other coaches and I would sarcastically say that if you want to improve team chemistry, start winning. Of course we knew that petty disagreements and personal annoyances didn’t vanish just because your team got on a winning streak. But it was amazing to see how quickly those issues faded into the shadows when a team found itself basking in a winner’s glow. Conversely, when that glow faded it was equally amazing to see how normally small things could almost instantaneously mushroom into team-wide drama that would suck the life out of the locker room.

Even though one might think that in order to win again we just needed to practice harder or find a little bit of luck, almost always the best way to get back to winning was to get the team chemistry right first. That meant deliberately refocusing everyone on being the best of teammates, despite the steamy magma of hot emotion that might be bubbling up on the inside. In the end, it always became about the choice to be the best of who we aspired to be while staring into the pale, heartless eyes of the people we could so easily become.

You might think that I’m going to launch into a speech about American values, immigration, and refugees. But actually I’m thinking about the choices that face all of us at Augustana College as we start to sort through the more complicated parts of the design process in our conversion to semesters. Like a lot of complex organisms, a functioning educational environment (especially one that includes a residential component) is much more than a list of elements prioritized from most to least important. Instead, a functioning educational environment – especially one that maximizes its impact – is an ecosystem that thrives because of the relationships between elements rather than the elements themselves. It is the combination of relationships that maintains balance throughout the organism and gives it the ability to adapt, survive, adjust, recover, and thrive. If one element dominates the organism, the rest of the elements will eventually die off, ultimately taking that dominant element down with them. But if all the elements foster a robust set of relationships that hold the whole thing together, the organism really does become greater than the sum of its parts.

Likewise, we are designing a new organism that is devoted to exceptional student learning and growth. Moreover, we have to design this organism so that each of the elements can thrive while gaining strength from (and giving strength to) each other. We give ourselves the best chance of getting it right if we keep the image of an ecosystem fresh in our minds and strive to design an ecosystem in which all of the relationships between elements perpetuate resilience and energy.

But in order to collaboratively build something so complex, we have to be transparent and choose to trust. And this is where we need to break out the nerves of steel. Because we all feel the pressure, the anxiety, the unknown, and the fear of that unknown. The danger, of course, is that in the midst of that pressure it would be easy, even human, to grab on to one element that represents certainty in the near-term and lose sight of 1) the relationships that sustain any given element (including the one you might currently be squeezing the air out of), and 2) the critical role of all of those relationships in sustaining the entire organism.

As we embark toward the most challenging parts of this semester conversion design, I hope we can find a way, especially when we feel the enormity of it all bearing down on us, to embody transparency and choose to trust. That will mean willingly deconstructing our deepest concerns, facing them openly, and straightforwardly solving them together.

Think about where we were a year ago and where we are now. We’ve done a lot of impressive work that can’t be overstated. (Based on the phone calls I’ve received from other institutions asking us how we are navigating the conversion to semesters, we might just be the golden child of organizational functionality!) Now, as the more complex challenges emerge and the pressure mounts, let’s remember what got us here, what will get us through this stretch of challenging decisions, and what will get us safely to the other side.

Make it a good day,

Mark

“We all want to belong, yeah …”

I just watched a wonderful TEDx talk by Terrell Strayhorn, Professor of Higher Education at (the) Ohio State University, called “Inalienable Rights: Life, Liberty, and the Pursuit of Belonging.” With enviable ease, Dr. Strayhorn walks his audience through the various factors that impede college persistence and demonstrates why a sense of belonging is so important for student success. He concludes his talk with his remarkably smooth singing voice, crooning, “We all want to belong, yeah . . .”

If you’ve been following my blog over the last year, you’ve seen me return to our student data that reveals troubling differences in sense of belonging on campus across various racial and ethnic groups. The growing body of research on belongingness and social identity theory continues to demonstrate that the factors that shape a sense of belonging are extensive. While these complicated findings might gratify the social scientist in me, the optimistic activist part of me has continued to beg for more concrete solutions: things that individuals within a community can do right away to strengthen a sense of membership for anyone in the group who might not be so sure that they belong.

So here are a couple of ideas that poured some of the best kind of fuel onto my fire over the weekend: Micro-Kindness and Micro-Affirmations. Both terms refer to a wonderfully simple yet powerful idea. In essence, both concepts recognize that we live in an imperfect world rife with imperfect interactions and, if we want the community in which we exist to be better than it is (no matter how good or bad it is at present), then individual members of that community have to take action to change it. Applied to the ongoing discussion of microaggressions and their potential impact on individuals within a community (particularly those from traditionally marginalized groups), both ideas assert that there are things we can do to emphasize to others that we welcome them into our community and to reduce the occurrence of microaggressions. These actions can be as simple as opening a door for someone and smiling at them, making eye contact and saying hello, or engaging in brief but inclusive conversation. Instructors can have a powerful micro-affirmative impact by taking the time to tell a student who might be hesitant or struggling that he or she can succeed in the class.

Researchers at the Higher Education Research Institute at UCLA have found that validating experiences, much like the micro-kindnesses and micro-affirmations described above, appear to have a significant impact in reducing perceptions of discrimination and bias. In fact, after accounting for the negative impact of discrimination and bias on a sense of belonging, interpersonal validations generated by far the largest positive effect on a sense of belonging.

Research on the biggest mistakes that people can make in trying to change behavior has found that trying to eliminate bad behaviors is much less effective than instituting new behaviors. Since individuals often perceive microaggressions to come in situations where a slight was not intended, eradicating everything that might be perceived as a slight or snub seems almost impossible. But if each of us were to make the effort to enact a micro-kindness or a micro-affirmation several times each day, we might set in motion a change in which we

  1. substantially improve upon the community norms within which microaggressions might occur, and
  2. significantly increase a sense of belonging among those most likely to feel like outsiders.

Make it a good day,

Mark

 

Applying a Story Spine to Guide Assessment

As much as I love my assessment compadres, sometimes I worry that the language we use to describe the process of continual improvement sounds pretty stiff. “Closing the loop” sounds too much like teaching a 4-year-old to tie his shoe. Over the years I’ve learned enough about my own social science academic nerdiness to envy those who see the world through an entirely foreign lens. So when I stumbled upon a simple framework for telling a story called a “Story Spine,” it struck me that this framework might spell out the fundamental pieces of assessment in a way that just makes much more sense.

The Story Spine idea can be found in a lot of places on the internet (e.g., Pixar and storytelling), but I found out about it through the world of improv. At its core, the idea is to help improvisers go into a scene with a shared understanding of how a story works so that, no matter what sort of craziness they discover in the course of their improvising, they know that they are all playing out the same meta-narrative.

Simply put, the Story Spine divides a story into a series of sections, each of which starts with one of the following phrases. As you can tell, almost any story you might think of would fit into this framework.

Once upon a time . . .

And every day . . .

Until one day . . .

Because of that . . .

Because of that . . .

Until finally . . .

And ever since then . . .

These section prompts can also fit into four parts of a cycle that represent the transition from an existing state of balance (“once upon a time” and “every day”), encountering a disruption of the existing balance (“until one day”), through a quest for resolution (“because of that,” “because of that,” and “until finally”), and into a new state of balance (“and ever since then”).

To me, this framework sounds a lot like the assessment loop that is so often trotted out to convey how an individual or an organization engages assessment practices to improve quality. In the assessment loop, we are directed to “ask questions,” “gather evidence,” “analyze evidence,” and “use results.” But to be honest, I like the Story Spine a lot better. Aside from being pretty geeky, the assessment loop starts with a vague implication that trouble exists below the surface and without our knowledge. This might be true, but it isn’t particularly comforting. Furthermore, the assessment loop doesn’t seem to leave enough room for all of the forces that can swoop in and affect our work despite our best intentions. There is a subtle implication that educating is like some sort of assembly line that should work with scientific precision. Finally, the assessment loop usually ends with “using the results” or, at its most complex, some version of “testing the impact of something we’ve added to the mix as a result of our analysis of the evidence.” But in the real world, we are often faced with finding a way to adjust to a new normal – another way of saying that entering a new state of balance is as much a function of our own adjustment as it is the impact of our interventions.
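
Just to make the parallel concrete, here is a toy sketch of the mapping as I see it. The phase labels and loop-stage descriptions are my own paraphrase of the two frameworks, not any official assessment vocabulary:

```python
# A toy mapping of Story Spine prompts onto the four-phase cycle and the
# familiar assessment loop. Labels are my paraphrase, nothing official.
story_spine_as_assessment = {
    "existing balance": {
        "prompts": ["Once upon a time...", "And every day..."],
        "loop_stage": "ask questions about the current state",
    },
    "disruption": {
        "prompts": ["Until one day..."],
        "loop_stage": "gather evidence that something has changed",
    },
    "quest for resolution": {
        "prompts": ["Because of that...", "Because of that...",
                    "Until finally..."],
        "loop_stage": "analyze evidence and try out responses",
    },
    "new balance": {
        "prompts": ["And ever since then..."],
        "loop_stage": "use results and adjust to the new normal",
    },
}

for phase, details in story_spine_as_assessment.items():
    print(f"{phase}: {' / '.join(details['prompts'])}")
    print(f"  -> {details['loop_stage']}")
```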

So if you’ve ever wondered if there was a better way to convey the way that we live an ideal of continual improvement, maybe the Story Spine works better. And maybe if we were to orient ourselves toward the future by thinking of the Story Spine as a map for what we will encounter and how we ought to be ready to respond, maybe – just maybe – we will be better able to manage our way through our own stories.

Make it a good day,

Mark

Some comfort thoughts about mapping

I hope you are enjoying the bright sunshine today.  Seeing that we might crack the 70-degree mark by the end of the week makes the sun that much more invigorating!

As you almost certainly know by now, we have been focusing on responding to the suggestions raised in the Higher Learning Commission accreditation report regarding programmatic assessment. The first step in that response has been to gather curricular and learning outcome maps for every major.

So far, we have 32 out of 45 major-to-college outcomes maps and 14 out of 45 courses-to-major outcomes maps.  Look at it as good or look at it as bad – at least we are making progress, and we’ve still got a couple weeks to go before I need to have collected them all. More importantly, I’ve been encouraged by the genuine effort that everyone has made to tackle this task. So thank you to everyone.

Yet as I’ve spoken with many of you, two themes have arisen repeatedly that might be worth sharing across the college and reframing just a bit.

First, many of you have expressed concern that these maps are going to be turned into sticks that are used to poke you or your department later. Second, almost everyone has worried about the inevitable gap between the ideal student’s progress through a major and the often less-ideal realities of the way that different students enter and progress through the major.

To both of those concerns, I’d like to suggest that you think of these maps as perpetually working documents instead of some sort of contract that cannot be changed. The purpose of drawing out these maps is to make the implicit explicit, but only as a starting point from which your program will constantly evolve. You’ll change things as your students change, as your instructional expertise changes, and as the future for which your program prepares students changes. In fact, probably the worst thing that could happen is a major that never changes anything no matter what changes around it.

The goal at this point isn’t to produce an unimprovable map. Instead, the goal is to put a map together that is your best estimate of what you and your colleagues are trying to do right now. From there, you’ll have a shared starting point that will make it a lot easier to identify and implement adjustments that will in turn produce tangible improvement.
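
To make that concrete, here is a minimal sketch of what a first-draft courses-to-major-outcomes map might look like if you jotted it down in code rather than in a template. The course numbers and outcome names are hypothetical placeholders; the point is simply that even a rough map is enough to start spotting gaps:

```python
from collections import Counter

# Hypothetical first-draft map from courses to major outcomes.
# Course numbers and outcome names are illustrative placeholders.
courses_to_major_outcomes = {
    "BIOL 101": ["disciplinary knowledge"],
    "BIOL 250": ["disciplinary knowledge", "quantitative reasoning"],
    "BIOL 380": ["quantitative reasoning", "communication"],
    "BIOL 450 (capstone)": ["disciplinary knowledge", "communication",
                            "independent inquiry"],
}

# Even a rough draft reveals useful questions: which outcomes are
# reinforced repeatedly, and which appear only once?
coverage = Counter(
    outcome
    for outcomes in courses_to_major_outcomes.values()
    for outcome in outcomes
)
for outcome, count in coverage.most_common():
    print(f"{outcome}: addressed in {count} course(s)")
```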

So don’t spend too much time on your first draft. Just get something on paper (or pixels) that honestly represents what you are trying to do and send it to me using the templates I’ve already shared with everyone. Then expect that down the road you’ll decide to make a change and produce a second draft. And so on, and so on. It really is that simple.

Make it a good day,

Mark

Transparency Travails and Sexual Assault Data

The chill that dropped over campus on Monday seems like an apt metaphor for the subject that’s been on my mind for the past week. Last spring, Augustana participated in a multi-institutional study of campus climate surrounding sexual assault that was developed and administered by the Higher Education Data Sharing Consortium (HEDS). We hoped that the findings from this survey would help us 1) get a better handle on the nature and prevalence of sexual assault and unwanted sexual contact among our students, and 2) better understand our campus climate surrounding sexual assault and unwanted sexual contact. We actively solicited student participation in the survey, collaborating with student government, faculty, and administration to announce the survey and encourage students to respond. The student response was unusually robust, particularly given the sensitivity of the topic. Equally important, many people across campus – students, faculty, administrators, and staff alike – took note of our announced intentions to improve and repeatedly asked when we would have information about the findings to share with the campus community. You saw the first announcement of these results on Sunday in a campus-wide email from Dean Campbell. If you attended the Monday night screening of The Hunting Ground and the panel discussion that followed, you likely heard additional references to findings from this survey. As Evelyn Campbell indicated, the full report is available from Mark Salisbury (AKA, me!) in the IR office upon request.

It has been interesting to watch the national reporting this fall as several higher ed consortia and individual institutions have begun to share data from their own studies of sexual assault and related campus climate. While some news outlets have reported in a fairly objective manner (Inside Higher Ed and The Chronicle of Higher Education), others have tripped over their own feet trying to impose a tale of conspiracy and dark motives (Huffington Post) or face-planted trying to insert a positive spin where one doesn’t really exist (Stanford University). Moreover, the often awkward word choices and phrasing in the institutional press releases (e.g., Princeton’s press release) announcing these data seem to accentuate the degree to which colleges and universities aren’t comfortable talking about their weaknesses, mistakes, or human failings (not to mention the extent to which faculty and college administrators might need to bone up on their quantitative literacy chops!).

Amidst all of this noise, we are watching two very different rationales for transparency play out in entirely predictable ways. One rationale frames transparency as a necessary imposition from the outside, like the piercing beam of an inspector’s flashlight pointed into an ominous darkness to expose bad behavior and prove a supposition. The other rationale frames transparency as a disposition that emanates from within, cultivating an organizational dynamic that makes it possible to enact and embrace meaningful and permanent improvement.

For the most part, it seems that most of the noise being made in the national press about sexual assault data and college campuses comes from using transparency to beat institutions into submission. This is particularly apparent in the Huffington Post piece. If the headline, “Private Colleges Keep Sexual Assault Data Secret: A bunch of colleges are withholding sexual assault data, thanks to one group,” doesn’t convey their agenda clearly enough, then the first couple of paragraphs walk the reader through it. The problem with this approach to transparency is that the data too often becomes the rope in a giant tug-of-war between preconceived points of view. Both (or neither) points of view could have parts that are entirely valid, but the nuance critical to actually identifying an effective way forward gets chopped to bits in the heat of the battle. In the end, you just have winners, losers, and a lifeless coil of rope that no one cares about anymore.

Instead, transparency is more likely to lead to effective change when it is a disposition that emanates from within the institution’s culture. The folks at HEDS understood this notion when they designed the protocol for conducting the survey and conveying the data. The protocol they developed specifically prohibited institutions from revealing the names of other participant institutions, forcing institutions to focus the implications of their results back on themselves. Certainly, a critical part of this process at any institution is sharing its data with its entire community and collectively addressing the need to improve. But in this situation, transparency isn’t the end goal. Rather, it becomes part of a process that necessarily leads to improvement and observable change. To drive this point home, HEDS has put extensive effort into helping institutions use their data to create change that reduces sexual assault.

At Augustana, we will continue to share our own results across our community and tackle this problem head-on. Our own findings point to plenty of issues that, once addressed, should improve our campus climate and reduce sexual assault. I’ll write about some of these findings in more detail in the coming weeks. In the meantime, please feel free to send me an email requesting our data. I’ll send you a copy right away. And if you’d like me to bring parts of the data to your students so that they might reflect and learn, I’m happy to do that too.

Make it a good day,

Mark

Welcome back to a smorgasbord of ambiguity!

Every summer I get lonely.  Don’t get me wrong, I love the people I work with in Academic Affairs and in Founders Hall . . . probably more than they love me sometimes.  But the campus just doesn’t feel right unless there is a certain level of manageable chaos, the ebb and flow of folks scurrying between buildings, and a little bit of nervous anticipation in the air.  Believe it or not, I genuinely missed our student who sat in the trees and sang out across the quad all last year!  Where are you, Ellis?!

For those of you who are new to Augustana, I write this column/blog every week to try to drop a little dose of positive restlessness into the campus ether.  I first read the phrase “positive restlessness” in the seminal work by George Kuh, Jillian Kinzie, John Schuh, and Liz Whitt titled Student Success in College. This 2005 book describes the common threads the authors found among 20 colleges and universities that, no matter the profile of students they served or the amount of money squirreled away in their endowment portfolio, consistently outperformed similar institutions in retention and graduation rates.

More important than anything else, the authors found that the culture on each of these campuses seemed energized by a perpetual drive to improve. No matter if it was a massive undertaking or a tiny little tweak, the faculty, staff, and students at these schools seemed almost hungry to get just a little bit better at who they were and what they did every day.  This doesn’t mean that the folks on these campuses were some cultish consortium of maniacal change agents or evangelical sloganeers. But over and over it seemed that the culture at each of the schools featured in this study coalesced around a drive to do the best that they could with the resources that they had and to never let themselves rest on their laurels for too long.

What continues to strike me about this attribute is the degree to which it requires an optimistic willingness to wade into the unknown. If we were to wait until we figured out the failsafe answer to every conundrum, none of us would be where we are now and Augustana would have almost certainly gone under a long time ago.  Especially when it comes to educating, there are no perfect pedagogies or guaranteed solutions. Instead, the best we can do is continually triangulate new information with our own experience to cultivate learning conditions that are best suited for our students. In essence, we are perpetually focused on the process in order to increase the likelihood that we can most effectively influence the product.

The goal of this blog is to present little bits of information that might combine with your expertise to fuel a sense of positive restlessness on our campus.  Sometimes I point out something that we seem to be doing well.  Other times I’ll highlight something that we might improve.  Either way, I’ll try to present this information in a way that points us forward with an optimism that we can always make Augustana just a little bit better.

By a lot of different measures, we are a pretty darn good school.  And we have a healthy list of examples of ways in which we have embodied positive restlessness on this campus (if you doubt me, read the accreditation documents that we will be submitting to the Higher Learning Commission later this fall).  We certainly aren’t perfect, but chasing perfection would be a fool’s errand because perfection is a static concept – and maintaining an effective learning environment across an entire college campus is by definition a perpetually evolving endeavor.

So I raise my coffee mug to all of you and to the deliciously ambiguous future that this academic year holds.  Into the unknown we stride together.

Make it a good day!

Mark

 

So after the first year, can we tell if CORE is making a difference?

Now that we are a little over a year into putting Augustana 2020 in motion, we’ve discovered that assessing the implementation process is deceptively difficult. The problem isn’t that the final metrics to which the plan aspires are too complicated to measure or even too lofty to achieve. Those goals are fairly simple to assess – we either hit our marks or we don’t. Instead, the challenge at present lies in devising an assessment framework that tracks implementation, not end results. Although Augustana 2020 is a relatively short document, in actuality it lays out a complex, multi-layered plan that requires a series of building blocks to be constructed separately, fused together, and calibrated precisely before we can legitimately expect to meet our goals for retention and graduation rates, job acquisition and graduate school acceptance rates, or improved preparation for post-graduate success. Assessing the implementation at such an early point in the process by using the final metrics to judge our progress would be like judging a car manufacturer’s overall production speed right after the company had added a faster motor to one of its assembly lines. Without having retrofitted or changed out all of the other assembly stages to adapt to this new motor, such a change by itself would inevitably turn production into a disaster.

Put simply, judging any given snapshot of our current state of implementation against the fullness of our intended final product doesn’t really help us build a better mousetrap; it just tells us what we already know (“It’s not done yet!”). During the process of implementation, assessment is much more useful if it identifies and highlights intermediate measures that give us a more exacting sense of whether we are moving in the right direction. In addition, assessing the process should tell us if the pieces we are putting in place will work together as designed or if we have to make additional adjustments to ensure the whole system works as it should. This means narrowing our focus to the impact of individual elements on specific student behaviors, testing the fit between pieces that have to work together, and tracking the staying power of experiences that are intended to permanently impact our students’ trajectories.

With all of that said, I thought it would be fitting to try out this assessment approach on arguably the most prominent element of Augustana 2020 – CORE. Now that CORE is finishing its first year at the physical center of our campus, it seems reasonable to ask whether we have any indicators in place that could assess whether this initiative is bearing the kind of early fruit we had hoped for. Obviously, since CORE is designed to function as part of a four-year plan of student development and preparation, it would be foolhardy to judge CORE’s ultimate effectiveness on some of the Augustana 2020 metrics until at least four years have passed. However, we should look to see if there are indications that CORE’s early impact triangulates with the student behaviors or attitudes necessary for improved post-graduate success. This is the kind of data that would be immediately useful to CORE and the entire college. If indicators suggest that we are moving in the right direction, then we can move forward with greater confidence. If the indicators suggest that things aren’t working as we’d hoped, then we can make adjustments before too many other things are locked into place.

In order to find data that suggests impact, we need more than just the numbers of students who have visited CORE this year (even though it is clear that student traffic in the CORE office and at the many CORE events has been impressive). To be fair, these participation patterns could simply be an outgrowth of CORE’s new location at the center of campus (“You’ve got candy, I was just walking by, why not stop in?”). To give us a sense of CORE’s impact, we need to find data where we have comparable before-and-after numbers. At this early juncture, we can’t look at our recent graduate survey data for employment rates six months after graduation since our most recent data comes from students who graduated last spring – before CORE opened.

Yet we may have a few data points that shine some light on CORE’s impact during its first year. To be sure, these data points shouldn’t be interpreted as hard “proof.” Instead, I suggest that they are indicators of directionality and, when put in the presence of other data (be they usage numbers or the preponderance of anecdotes), we can start to lean toward some conclusions about CORE’s impact in its first year.

The first data point we can explore is a comparison of the number of seniors who have already accepted a job offer at the time they complete the senior survey. Certainly the steadily improving economy, Augustana’s existing efforts to encourage students to begin their post-graduate planning earlier, and the unique attributes of this cohort of students could also influence this particular data point. However, if we were to see a noticeable jump in this number, it would be difficult to argue that CORE should get no credit for this increase.

The second data point we could explore would be the proportion of seniors who said they were recommended to CORE or the CEC by other students and faculty. This seems a potentially indicative data point based on the assumption that neither students nor faculty would recommend CORE more often if the reputation and results of CORE’s services were no different than the reputation and results of similar services provided by the CEC in prior years. To add context, we can also look at the proportion of seniors who said that no one recommended CORE or the CEC to them.

These data points all come from the three most recent administrations of the senior survey (including this year’s edition, for which we already have 560 of 580 eligible respondents). The 2013 and 2014 numbers are prior to the introduction of CORE, and the 2015 number is after CORE’s first year. To account for differences in the size of the graduating cohorts, I’ve also calculated proportions based on the students whose immediate plan after graduation is to work full-time.

Seniors with jobs accepted when completing the senior survey –

  • 2013 – 104 of a possible 277 (37.5%)
  • 2014 – 117 of a possible 338 (34.6%)
  • 2015 – 145 of a possible 321 (45.2%)

Proportion of seniors indicating they were recommended to CORE or the CEC by other students –

  • 2013 – 26.9%
  • 2014 – 24.0%
  • 2015 – 33.2%

Proportion of seniors indicating they were recommended to CORE or the CEC by faculty in their major or faculty outside their major, respectively –

  • 2013 – 47.0% and 18.8%
  • 2014 – 48.1% and 20.6%
  • 2015 – 54.6% and 26.0%

Proportion of seniors indicating that no one recommended CORE or the CEC to them –

  • 2013 – 18.0%
  • 2014 – 18.9%
  • 2015 – 14.4%
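
For anyone who wants to double-check the arithmetic, here is a minimal sketch of the calculation behind the first list above, using the reported counts (job acceptances among seniors planning to work full-time):

```python
# Counts reported above: (seniors with jobs accepted, seniors planning
# to work full-time) for each senior survey administration.
cohorts = {
    2013: (104, 277),
    2014: (117, 338),
    2015: (145, 321),
}

for year, (accepted, eligible) in cohorts.items():
    print(f"{year}: {accepted}/{eligible} = {accepted / eligible:.1%}")

# Output:
# 2013: 104/277 = 37.5%
# 2014: 117/338 = 34.6%
# 2015: 145/321 = 45.2%
```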

Taken together, these data points seem to suggest that CORE is making a positive impact on campus.  By no means do these data points imply that CORE should be ultimately judged as a success, a failure, or anything in between at this point. However, this data certainly suggests that CORE is on the right track and may well be making a real difference in the lives of our students.

If you’re not sure what CORE does or how they do it, the best (and probably only) way to get a good answer to that question is to go there yourself, talk to the folks who work there, and see for yourself.  If you’re nice to them, they might even give you some candy!

Make it a good day,

Mark

How many responses did you get? Is that good?

As most of you know by now, the last half of the spring term sometimes feels like a downhill sprint. Except in this case you’re less concerned about how fast you’re going and more worried about whether you’ll get to the finish line without face-planting on the pavement.

Well, it’s no different in the IR Office.  At the moment, we have four large-scale surveys going at once (the recent graduate survey, the senior survey, the freshman survey, and the employee survey), we’ve just finished sending a year’s worth of reports to the Department of Education, and we’re preparing to send all of the necessary data to the arbiter of all things arbitrary, U.S. News College Rankings. That is in addition to all of the individual requests for data gathering and reporting and administrative work that we do every week.

So in the midst of all of this stuff, I wanted to thank everyone who responded to our employee survey as well as everyone who has encouraged others to participate. After last week’s post, a few of you asked how many responses we’ve received so far and how many we need. Those are good questions, but as is my tendency (some might say “my compulsion”) the answer is more complicated than you’d probably prefer.

In essence, we need as many responses as we can get, from as many different types of employees as possible. But in terms of an actual number, defining “how many responses is enough” can get pretty wonky, with formulas and unfamiliar symbols. So I shoot for 60% of the overall population. That means, since Augustana has roughly 500 full-time employees, we would cross that threshold with 300 employee survey responses.

However, that magic 60% applies to any situation where we are looking at the degree to which a set of responses to a particular item can be confidently applied to the overall population. What if we want to look at responses from a certain subgroup of employees (e.g., female faculty)?  In that case, we need to have responses from 60% of the female faculty, something that isn’t necessarily a certainty just because we have 300 out of 500 total responses.
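
Here is a rough sketch of that rule of thumb in code. The 60% target and the overall headcount of roughly 500 come from above; the subgroup size of 40 is purely a hypothetical illustration, not an actual count of female faculty:

```python
import math

RESPONSE_TARGET = 0.60  # the 60% rule of thumb described above

def responses_needed(population: int, target: float = RESPONSE_TARGET) -> int:
    """Smallest number of responses that meets the target response rate."""
    return math.ceil(population * target)

print(responses_needed(500))  # overall population: 300 responses
print(responses_needed(40))   # hypothetical subgroup of 40 people: 24
```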

This is why I am constantly hounding everyone about our surveys in order to get as many responses as possible. Because we don’t know all of the subgroups that we might want to analyze when we start collecting data; those possibilities arise during the analysis. And once we find out that we don’t have enough responses to dig into something that looks particularly important, we are flat out of luck.

So this week, I’m asking you to do me a favor.  Ask one person who you don’t necessarily talk to every day if they’ve taken the survey. If they haven’t, encourage them to do it. It might end up making a big difference.

Make it a good day,

Mark

The Problem with Aiming for a Culture of Assessment

In recent years I’ve heard a lot of higher ed talking heads imploring colleges and universities to adopt a “culture of assessment.” As far as I can tell (at least from a couple of quick Google searches), the phrase has been around for almost two decades and varies considerably in what it actually means. Some folks seem to think it describes a place where everyone uses evidence (some use the more slippery term “facts”) to make decisions, while others seem to think that a culture of assessment describes a place where everyone measures everything all the time.

There is a pretty entertaining children’s book called Magnus Maximus, A Marvelous Measurer that tells the story of a guy who gets so caught up measuring everything that he ultimately misses the most important stuff in life. In the end he learns “that the best things in life are not meant to be measured, but treasured.” While there are some pretty compelling reasons to think twice about the book’s supposed life lesson (although I dare anyone to float even the most concise post-modern pushback to a five-year-old at bedtime and see how that goes), the book delightfully illustrates the absurdity of spending one’s whole life focused on measuring if the sole purpose of that endeavor is merely measuring.

In the world of assessment in higher education, I fear that we have made the very mistake that we often tell others not to make: confusing the ultimate goal of improvement with the act of measuring. The goal – or “intended outcome,” if you want to use the eternally awkward assessment parlance – is that we actually get better at educating every one of our students so that they are more likely to thrive in whatever they choose to do after college. Even in the language of those who argue that assessment is primarily needed to validate that higher education institutions are worth the money (be it public or private money), there is always a final suggestion that institutions will use whatever data they gather to get better somehow. Of course, the “getting better” part seems to always be mysteriously left to someone else. Measuring, in any of its forms, is almost useless if that is where most or all of the time and money is invested. If you don’t believe me, just head on down to your local Institutional Research Office and ask to see all of the dusty three-ring binders of survey reports and data books from the last two decades. If they aren’t stacked on a high shelf, they’re probably in a remote storage room somewhere.

Measuring is only one ingredient of the recipe that gets us to improvement. In fact, given the myriad of moving parts that educators routinely deal with (only some of which educators and institutions can actually control), I’m not sure that robust measuring is even the most important ingredient. An institution has no more achieved improvement just because it measures things than a chef has baked a cake by throwing a bag of flour in an oven (yes, I know there are such things as flourless tortes … that is kind of my point). Without cultivating and sustaining an organizational culture that genuinely values and prioritizes improvement, measurement is just another thing that we do.

Genuinely valuing improvement means explicitly dedicating the time and space to think through any evidence of mission fulfillment (be it gains on learning outcomes, participation in experiences that should lead to learning outcomes, or the degree to which students’ experiences are thoughtfully integrated toward a realistic whole), rewarding the effort to improve regardless of success or failure, and perpetuating an environment in which everyone cares enough to continually seek out things that might be done just a little bit better.

Peter Drucker is purported to have said that “culture eats strategy for lunch.” Other strategic planning gurus talk about the differences between strategy and tactics. If we want our institutions to actually improve and continually demonstrate that, no matter how much the world changes, we can prepare our students to take adult life by the horns and thrive no matter what they choose to do, then we can’t let ourselves mistakenly think that maniacal measurement magically perpetuates a culture of anything. If anything, we are likely to just make a lot more work for quantitative geeks (like me) while excluding those who aren’t convinced that statistical analysis is the best way to get at “truth.” And we definitely will continue to tie ourselves into all sorts of knots if we pursue a culture of assessment instead of a culture of improvement.

Make it a good day,

Mark