Some anecdotes and data snippets from our first experience with the IDEA online course feedback system

Welcome to Winter Term! Maybe some of you saw the big snowflakes that fell on Sunday morning. Even though I know I am in denial, it is starting to feel like fall might have slipped from our collective grasp over the past weekend.

But on the bright side (can we get some warmth with that light?), during the week-long break between fall and winter term, something happened that had not happened since we switched to the IDEA course feedback system. Last Wednesday morning, only 48 hours after you had entered your final grades, your IDEA course feedback was already processed and ready to view. All you had to do was log in to your faculty portal and check it out! (You can find the link to the IDEA Online Course Feedback Portal on your Arches faculty page.)

I’m sure I will share additional observations and data points from our first experience with the online system this week during one of the three “Navigating your Online IDEA Feedback Report” sessions on Monday, Tuesday, and Thursday starting just after 4 PM in Olin 109. (A not-so-subtle hint: come to Olin 109 on Monday, Tuesday, or Thursday this week (Nov. 13, 14, and 16) at or just after 4 PM to walk through the online feedback reports and maybe one or two cool tricks with the data.) Bring a laptop if you’ve got one, just in case we run out of computer terminals.

But in the meantime, I thought I’d share a couple of snippets that I found particularly interesting from our first online administration.

First, it seems that no news about problems logging in to the system turned out to be extremely good news. I was fully prepped to solve all kinds of connectivity issues and brainstorm all sorts of last-minute solutions. But I only heard from one person about one class having trouble getting on to the system . . . and that was when the internet was down all over campus for about 45 minutes. Otherwise, it appears that folks were able to administer the online course feedback forms in class or get their students to complete them outside of class with very little trouble. Even in the basement of Denkmann! This doesn’t mean that we won’t have some problems in the future, but at least with one term under our collective belt . . . maybe the connectivity issue isn’t nearly as big as we worried it might be.

Second, our overall student response rates were quite strong. Of the 467 course sections that could have administered IDEA online, about 74% achieved a response rate of 75% or higher. Furthermore, several instructors tested what might happen if they asked students to complete the IDEA online outside of class (incentivized with an offer of extra credit to the class if the overall response rate reached a specific threshold). I don’t believe that any of these instructors’ classes failed to meet the established thresholds.

In addition, after a preliminary examination of comments that students provided, it appears that students actually may have written more comments with more detail than they previously provided on paper-and-pencil forms. This would seem to corroborate feedback from a few faculty members who indicated that their students were thankful that their comments would now be truly anonymous and no longer potentially identifiable given the instructor’s prior familiarity with the student’s handwriting.

Finally, in response to faculty concerns that the extended student access to their IDEA forms (i.e., students were able to enter data into their response forms until the end of finals no matter when they initially filled out their IDEA forms) might lead to students going back into the system and exacting revenge on instructors in response to a low grade on a final exam or paper, I did a little digging to see how likely this behavior might be. In talking to students about this option during week 10 of the term, I got two responses. Several international students said that they appreciated this flexibility because they had been unable to finish typing their comments in the time allotted in class; many international students (particularly first-year international students) find that it takes them much longer than domestic students to express complex thoughts in written English. I also got the chance to ask a class of 35(ish) students whether they were likely to go back into the IDEA online system and change a response several days after they had completed that form. After giving me a bewildered look for an uncomfortably long time, one student finally blurted out, “Why would we do that?” Upon further probing, the students said that they couldn’t imagine a situation where they would care enough to take the time to find the student portal and change their responses. When I asked, “Even if something happened at the end of the term, like a surprisingly bad grade on a test or a paper that you felt was unfair?” the students responded that by the end of the term they would already know what they thought of that instructor and that class. Even if they got a surprisingly low grade on a final paper or test, they said, they would have known the nature of that instructor and course long before then.

To see if those students’ speculation about their own behavior matches IDEA’s own data, I talked to the CEO of IDEA to ask what proportion of students go back into the system and change their responses and whether that was a question that faculty at other institutions had asked. He told me that he had heard that concern raised repeatedly since they introduced the online format. As a result, they have been watching that data point closely. Across all of the institutions that have used the online system over the last several years, only 0.6% of all students actually go back into the system and edit their responses. He did not know what proportion of that small minority altered their responses in a substantially negative direction.

Since the first of my three training sessions starts in about an hour, I’m going to stop now. But so far, it appears that moving to IDEA online has been a pretty positive thing for students and our data. Now I hope we can make the most of it for all of our instructors. So I’d better get to work prepping for this week!
Make it a good day,
Mark

Big Data, Blindspots, and Bad Statistics

As some of you know, last spring I wrote a contrarian piece for The Chronicle of Higher Education that raised some cautions about unabashedly embracing big data. Since then, I’ve found two TED Talks that add to the list of reasons to be suspicious of an overreliance on statistics and big data.

Tricia Wang outlines the dangers of relying on historical data at the expense of human insight when trying to anticipate the future.

Mona Chalabi describes three ways to spot a suspect statistic.

Both of these presenters reinforce the importance of triangulating information from quantitative data, individual or small-group expertise, and human observation. And even with all of this information, we can’t eliminate ambiguity. Any assertion of certainty is almost always one more reason to be increasingly skeptical.

So if you think I’m falling victim to either of these criticisms, feel free to call me out!

Make it a good day,

Mark

Something to think about for the next Symposium Day

Symposium Day at Augustana College has grown into something truly impressive. The concurrent sessions hosted by both students and faculty present an amazing array of interesting approaches to the theme of the day. The invited speakers continue to draw large crowds and capture the attention of the audience. And we continue to cultivate in the Augustana culture a belief in owning one’s learning experience by hosting a day in which students choose the sessions they attend and talk to each other about their reactions to those sessions.

Ever since its inception, we’ve emphasized the value of integrating Symposium Day participation into course assignments. Last year, we tested the impact of such curricular integration and found that Symposium Day mattered for first-year student growth in a clear and statistically significant way. We also know that graduating classes have increasingly found Symposium Day to be a valuable learning opportunity. Since 2013, the average response to the statement “Symposium Day activities influenced the way I now think about real-world issues,” has risen steadily. In 2017, 46% of seniors agreed or strongly agreed with that statement.

So what more could be written about an idea that has turned out to be so successful? Well, it turns out that when an organization values integration and autonomy, sometimes those values can collide and produce challenging, albeit resolvable, tensions. This year a number of first-year advisors encountered advisees who had assignments from different classes requiring them to be at different presentations simultaneously. Not surprisingly, these students were stressing about how they were going to pull this off and were coming up with all sorts of schemes to be in two places at once.

In some cases, the students didn’t know that they might be able to see a video recording of one of the conflicting presentations (although no one was sure whether that recording would be available before their assignment was due). But in other cases, there was simply no way for the student to attend both sessions.

This presents us all with a dilemma. How do we ensure that the highest possible proportion of students have course assignments that integrate their courses with Symposium Day without creating a situation where students are required to be in two places at once or run around like chickens with their proverbial heads cut off?

One possibility might be some sort of common assignment that originates in the FYI course. Another possibility might reside in establishing some sort of guidelines for Symposium Day assignments so that students don’t end up required by two different classes to be in two different places at the same time. I don’t have a good answer, nor is it my place to come up with one (lucky me!).

But it appears that our success in making Symposium Day a meaningful educational experience for students has created a potential obstacle that we ought to avoid. One student told me that the worst part about the assignments that she had to complete wasn’t that she was frustrated that she had homework. Instead, the worst part for her was that the session she really wanted to see, “just because it looked really interesting,” was also at the same time as the two sessions she was required to attend.

It would be ironic if we managed to undercut the way that Symposium Day participation seems to foster our students’ intrinsic motivation as learners because we got so good at integrating class assignments with Symposium Day.

Something to think about before we start planning for our Winter Term event.

Make it a good day,

Mark

This March, it’s Survey Madness!

Even folks who are barely familiar with social science research know the term “survey fatigue.” It describes a phenomenon, empirically supported now by a solid body of research, in which people who are asked to take surveys seem to have only a finite amount of tolerance for it (shocking, I know). So as a survey gets longer, respondents tend to skip questions or take less time answering them carefully. When the term first emerged, it primarily referred to something that could happen within an individual survey. But now that solicitations to take surveys seem to appear almost everywhere, the concept is appropriately applied in reference to a sort of meta survey fatigue.

But if we want to get better at something, we need information to guide our choices.  We ought to know by now that “winging it” isn’t much of a strategy. So we need to collect data, and oftentimes survey research is the most efficient way to do that.

Therefore, in my never-ending quest to turn negatives into positives, I’m going to launch a new phrase into the pop culture ether. Instead of focusing on the detrimental potential of “survey fatigue,” I’m going to ask that we all dig down and build up our “survey fitness.”

Here’s why . . .

In the next couple of months, you are going to receive a few requests for survey data. Many of you have already received an invitation to participate in the “Great Colleges to Work For” survey. The questions in this survey try to capture a sense of the organization’s culture and employee engagement. For all of you who take pride in your curmudgeonly DNA, I can’t argue with your criticism of the name of that survey. But they didn’t ask me when they wrote it, so we’re stuck with it. Nonetheless, the findings actually prove useful. So please take the time to answer honestly if you get an email from them.

The second survey invitation you’ll receive is for a new instrument called The Campus Living, Learning, and Work Environment. It tries to tackle aspects of equity and inclusion across a campus community. One of the reasons I signed on for this study is because it is the first that I know of to survey the entire community – faculty, staff, administration, and students. We have been talking a lot lately about the need for this kind of comprehensive data, and here is our chance to get some.

So if you find yourself getting annoyed at the increased number of survey requests this spring, you can blame it all on me. You are even welcome to complain to me about all the surveys I’ve sent out this term if that is what it takes to get you to complete them. And if you start to worry about survey fatigue in yourself or others during the next few months, think of it as an opportunity to develop your survey fitness! And thanks for putting up with a few more requests for data than usual. I guarantee that I won’t let the data just sit at the bottom of a hard drive.

Make it a good day,

Mark

Differences in our students’ major experiences by race/ethnicity; WARNING: messy data ahead

It’s great to see the campus bustling again.  If you’ve been away during the two-week break, welcome back!  And if you stuck around to keep the place intact, thanks a ton!

Just in case you’re under the impression that every nugget of data I write about comes pre-packaged with a statistically significant bow on top, today I’d like to share some data findings from our senior survey that aren’t so pretty. In this instance, I’ve focused on data from the nine questions that comprise the section called “Experiences in the Major.” For purposes of brevity, I’ve paraphrased each of the items in the table below, but if you want to see the full text of the question, here’s the link to the 2015-16 senior survey on the IR web page. The table below disaggregates the responses to each of these items by Hispanic, African-American, and Caucasian students. The response options are one through five, and range either from strongly disagree to strongly agree or from never to very often (noted with an *).

Item                                                     Hispanic   African-American   Caucasian
Courses allowed me to explore my interests                 3.86          3.82             4.09
Courses seemed to follow in a logical sequence             3.85          3.93             4.11
Senior inquiry brought out my best intellectual work       3.61          4.00             3.78
I received consistent feedback on my writing               3.72          4.14             3.96
Frequency of analyzing in class *                          3.85          4.18             4.09
Frequency of applying in class *                           3.87          4.14             4.15
Frequency of evaluating in class *                         3.76          4.11             4.13
Faculty were accessible and responsive outside of class    4.10          4.21             4.37
Faculty knew how to prepare me for my post-grad plans      3.69          4.00             4.07

Clearly, there are some differences in average scores that jump out right away. The scores from Hispanic students are lowest among the three groups on all but one item. Sometimes there is little discernible difference between African-American and Caucasian students’ scores, while in other instances the gap between those two groups seems large enough to indicate something worth noting.

So what makes this data messy? After all, shouldn’t we jump to the conclusion that Hispanic students’ major experience needs substantial and urgent attention?

The problem, from the standpoint of quantitative analysis, is that none of the differences conveyed in the table meet the threshold for statistical significance. Typically, that means that we have to conclude that there are no differences between the three groups. But putting these findings in the context of the other things that we already know about differences in student experiences and success across these three groups (i.e., differences in sense of belonging, retention, and graduation) makes a quick dismissal of the findings much more difficult. And a deeper dive into the data adds both useful insights and more mess.

The lack of statistical significance seems attributable to two factors. First, the number of students/majors in each category (570 responses from Caucasian students, 70 responses from Hispanic students, and 28 responses from African-American students) makes it a little hard to reach statistical significance. The interesting problem is that, in order to increase the number of Hispanic and African-American students, we would need to enroll more students from those groups, which might in part happen as a result of improving the quality of those students’ experience. But if we adhere to the statistical significance threshold, we would have to conclude that there is no difference between the three groups. We would then be less likely to take the steps that might help us improve the experience, steps that would in turn improve the likelihood of enrolling more students from these two groups and ultimately get us to the place where a quantitative analysis would find statistical significance.

The other factor that seems to be getting in the way is that the standard deviations among Hispanic and African-American students are unusually large. In essence, this means that their responses (and therefore their experiences) are much more widely dispersed across the range of response options, while the responses from white students are more closely packed around the average score.

So we have a small number of non-white students relative to the number of white students, and the experiences of Hispanic and African-American students seem unusually varied. Both of these findings make it even harder to conclude that “there’s nothing to see here.”

Just in case, I checked to see if the distribution of majors among each group differed. It did not. I also checked to see if there were any other strange differences between these student groups that might somehow affect these data. Although average incoming test score, the proportion of first-generation status, and the proportion of Pell Grant qualifiers differed, these differences weren’t stark enough to explain all of the variation in the table.

So the challenge I’m struggling with in this case of messy data is this:

We know that non-Caucasian students on average indicate a lower sense of belonging than their Caucasian peers. We know that our retention and graduation rates of non-white students are consistently lower than white students. We also know that absolute differences between two groups of .20-.30 are often statistically significant if the number of cases in each group is closer in size and if the standard deviation (aka dispersion) is in an expected range.
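To make that last point concrete, here is a minimal sketch using Welch’s two-sample t-statistic computed from summary statistics. The group sizes (570 Caucasian and 70 Hispanic respondents) come from the survey above, but the standard deviations are assumptions chosen purely for illustration, since the report only lists means:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic from summary statistics."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / standard_error

# Lopsided group sizes plus a wide dispersion in the smaller group:
# a 0.24-point gap falls short of the ~1.96 critical value.
t_lopsided = welch_t(4.09, 0.9, 570, 3.85, 1.3, 70)

# The same 0.24-point gap with balanced groups and typical dispersion
# clears the threshold comfortably.
t_balanced = welch_t(4.09, 0.9, 300, 3.85, 0.9, 300)

print(round(t_lopsided, 2), round(t_balanced, 2))  # prints: 1.5 3.27
```

In other words, the “messy” part isn’t the size of the gap itself; the lopsided group sizes and wide dispersion inflate the standard error, which is exactly why identical mean differences can land on opposite sides of the significance threshold.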

As a result, I can’t help thinking that just because a particular analytic finding doesn’t meet the threshold for statistical significance doesn’t necessarily mean that we should discard it outright. At the same time, I’m not comfortable arguing that these findings are rock solid.

In cases like these, one way to inform the inquiry is to look for other data sources with which we might triangulate our findings. So I ask all of you, do any of these findings match with anything you’ve observed or heard from students?

Make it a good day,

Mark

Can I ask a delicate question?

Since this is a crazy week for everyone, I’m going to try to post something that you can contemplate when you get the chance to relax your heart rate and breathe. I hope that you will give me the benefit of the doubt when you read this post, because I can imagine this question might be a delicate one. I raise it because I suspect it might help us more authentically and honestly navigate some obviously choppy waters as we make some key decisions about our new semester design.

Sometimes, when we advocate for the value of double majors and similar, or even improved, access to double majors in the new semester system, it seems like the rationale for this argument is grounded in the belief that double-majoring is advantageous for Augustana graduates and, as a corollary, relatively easy access to a double-major is helpful in recruiting strong prospective students. In other instances, it sounds as if we advocate for ease of access to double-majoring because we are afraid that programs with smaller numbers of majors will not survive if we build a system that produces fewer double majors.

Without question, both rationales come from the best of places. Doing all that we can for the sake of our students’ potential future success or the possibility of attracting a stronger and larger pool of future students seems utterly reasonable. Likewise, ensuring the health of all current academic departments, especially those that currently enjoy a smaller number of majors, and therefore ensuring the employment stability of all current faculty, is also utterly reasonable.

Yet I wonder if our endeavor to design the best possible semester system would benefit from parsing these concerns more clearly, examining them as distinct issues, and addressing them separately as we proceed. Because it seems to me that prioritizing double-majoring because it benefits post-graduate success, prioritizing double-majoring because it improves recruiting, and prioritizing double-majoring because it ensures employment stability for faculty are not the same as more directly identifying the factors that maximize our students’ post-graduate success, optimizing our offerings (and the way we communicate them) to maximize our recruiting efforts, and designing a system that maintains employment stability and quality for all of our current faculty members. The first approach asserts a causal relationship and seems to narrow our attention toward a single means to an end. The second approach focuses our attention on a goal while broadening the potential ways by which we might achieve it.

Certainly we can empirically test the degree to which double-majoring increases our students’ post-graduate success or whether a double-major-friendly system strengthens our efforts to recruit strong students. We could triangulate our findings with other research on the impact of double-majoring on either post-graduate success or prospective student recruiting and design a system that situates double-majoring to hit that sweet spot for graduates and prospective students.

Likewise, we could (and I would argue, should) design a new semester system that ensures gratifying future employment for all current faculty (as opposed to asking someone with one set of expertise and interests to spend all of their time doing something that has little to do with that expertise and interest). However, it seems to me that we might be missing something important if we assume, or assert, that we are not likely to achieve that goal of employment stability if we do not maintain historically similar proportions of double-majors distributed in historically similar ways.

Those of you who have explored the concept of design thinking know that one of its key elements is an openness to genuinely consider the widest possible range of options before beginning the process of narrowing toward a final product or concept. At Augustana we are trying to build something new, and we are trying to do it in ways that very few institutions have done before. Moreover, we aren’t building it from an infinite array of puzzle pieces; we are building it with the puzzle pieces that we already have. So it seems that we ought not box ourselves in prematurely. Instead, we might genuinely help ourselves by opening our collective scope to every possibility that 1) gives our students the best chance for success, 2) gives us the best chance to recruit future students, AND 3) uses every current faculty member’s strengths to accomplish our mission in a new semester system.

Please don’t misunderstand me – I am NOT arguing against double-majors (on the contrary, I am intrigued by the idea). I’m only suggesting that, especially as we start to tackle complicated issues that tie into very real and human worries about the future, we are probably best positioned to succeed, both in process and in final product, to the degree that we directly address the genuine and legitimate concerns that keep us up at night. We are only as good as our people and our relationships with each other. I know we are capable of taking all of this into account as we proceed into the spring. I hope every one of you takes some time to relax and enjoy the break between terms so that you can start the spring refreshed and fully able to tackle the complex decisions that we have before us.

Make it a good day,

Mark


Time to break out the nerves of steel

When I used to coach soccer, other coaches and I would sarcastically say that if you want to improve team chemistry, start winning. Of course we knew that petty disagreements and personal annoyances didn’t vanish just because your team got on a winning streak. But it was amazing to see how quickly those issues faded into the shadows when a team found themselves basking in a winner’s glow. Conversely, when that glow faded it was equally amazing to see how normally small things could almost instantaneously mushroom into team-wide drama that would suck the life out of the locker room.

Even though one might think that in order to win again we just needed to practice harder or find a little bit of luck, almost always the best way to get back to winning was to get the team chemistry right first. That meant deliberately refocusing everyone on being the best of teammates, despite the steamy magma of hot emotion that might be bubbling up on the inside. In the end, it always became about the choice to be the best of who we aspired to be while staring into the pale, heartless eyes of the persons we could so easily become.

You might think that I’m going to launch into a speech about American values, immigration, and refugees. But actually I’m thinking about the choices that face all of us at Augustana College as we start to sort through the more complicated parts of the design process in our conversion to semesters. Like a lot of complex organisms, a functioning educational environment (especially one that includes a residential component) is much more than a list of elements prioritized from most to least important. Instead, a functioning educational environment – especially one that maximizes its impact – is an ecosystem that thrives because of the relationships between elements rather than the elements themselves. It is the combination of relationships that maintains balance throughout the organism and gives it the ability to adapt, survive, adjust, recover, and thrive. If one element dominates the organism, the rest of the elements will eventually die off, ultimately taking that dominant element down with them. But if all the elements foster a robust set of relationships that hold the whole thing together, the organism really does become greater than the sum of its parts.

Likewise, we are designing a new organism that is devoted to exceptional student learning and growth. Moreover, we have to design this organism so that each of the elements can thrive while gaining strength from (and giving strength to) each other. We give ourselves the best chance of getting it right if we keep the image of an ecosystem fresh in our minds and strive to design an ecosystem in which all of the relationships between elements perpetuate resilience and energy.

But in order to collaboratively build something so complex, we have to be transparent and choose to trust. And this is where we need to break out the nerves of steel. Because we all feel the pressure, the anxiety, the unknown, and the fear of that unknown. The danger, of course, is that in the midst of that pressure it would be easy, even human, to grab on to one element that represents certainty in the near-term and lose sight of 1) the relationships that sustain any given element (including the one you might currently be squeezing the air out of), and 2) the critical role of all of those relationships in sustaining the entire organism.

As we embark toward the most challenging parts of this semester conversion design, I hope we can find a way, especially when we feel the enormity of it all bearing down on us, to embody transparency and choose to trust. That will mean willingly deconstructing our deepest concerns, facing them openly, and straightforwardly solving them together.

Think about where we were a year ago and where we are now. We’ve done a lot of impressive work, and that can’t be overstated. (Based on the phone calls I’ve received from other institutions asking us how we are navigating the conversion to semesters, we might just be the golden child of organizational functionality!) Now, as the more complex challenges emerge and the pressure mounts, let’s remember what got us here, what will get us through this stretch of challenging decisions, and what will get us safely to the other side.

Make it a good day,

Mark

“We all want to belong, yeah …”

I just watched a wonderful TEDx talk by Terrell Strayhorn, Professor of Higher Education at (the) Ohio State University, called “Inalienable Rights: Life, Liberty, and the Pursuit of Belonging.” With enviable ease, Dr. Strayhorn walks his audience through the various factors that impede college persistence and demonstrates why a sense of belonging is so important for student success. He concludes his talk with his remarkably smooth singing voice, crooning, “We all want to belong, yeah . . .”

If you’ve been following my blog over the last year you’ve seen me return to our student data that reveals troubling differences in sense of belonging on campus across various racial and ethnic groups. The growing body of research on belongingness and social identity theory continues to demonstrate that the factors that shape a sense of belonging are extensive. While these complicated findings might gratify the social scientist in me, the optimistic activist part of me has continued to beg for more concrete solutions; things that individuals within a community can do right away to strengthen a sense of membership for anyone in the group who might not be so sure that they belong.

So here are a couple of ideas that poured some of the best kind of fuel onto my fire over the weekend: Micro-Kindness and Micro-Affirmations. Both terms refer to a wonderfully simple yet powerful idea. In essence, both concepts recognize that we live in an imperfect world rife with imperfect interactions and, if we want the community in which we exist to be better than it is (no matter how good or bad it is at present), then individual members of that community have to take action to change it. Applied to the ongoing discussion of microaggressions and their potential impact on individuals within a community (particularly those from traditionally marginalized groups), both ideas assert that there are things that we can do to emphasize to others that we welcome them into our community and reduce the existence of microaggressions. These actions can be as simple as opening a door for someone and smiling at them, making eye contact and saying hello, or engaging in brief but inclusive conversation. Instructors can have a powerful micro-affirmative impact by taking the time to tell a student who might be hesitant or struggling that they know that student can succeed in the class.

Researchers at the Higher Education Research Institute at UCLA have found that validating experiences, much like the micro-kindnesses and micro-affirmations described above, appear to have a significant impact in reducing perceptions of discrimination and bias. In fact, after accounting for the negative impact of discrimination and bias on a sense of belonging, interpersonal validations generated by far the largest positive effect on a sense of belonging.

Research on the biggest mistakes that people can make in trying to change behavior has found that trying to eliminate bad behaviors is much less effective than instituting new behaviors. Since individuals often perceive microaggressions to come in situations where a slight was not intended, eradicating everything that might be perceived as a slight or snub seems almost impossible. But if each of us were to make the effort to enact a micro-kindness or a micro-affirmation several times each day, we might set in motion a change in which we

  1. substantially improve upon the community norms within which microaggressions might occur, and
  2. significantly increase a sense of belonging among those most likely to feel like outsiders.

Make it a good day,

Mark
Applying a Story Spine to Guide Assessment

As much as I love my assessment compadres, sometimes I worry that the language we use to describe the process of continual improvement sounds pretty stiff. “Closing the loop” sounds too much like teaching a four-year-old to tie his shoe. Over the years I’ve learned enough about my own social science academic nerdiness to envy those who see the world through an entirely different lens. So when I stumbled upon a simple framework for telling a story called a “Story Spine,” it struck me that this framework might spell out the fundamental pieces of assessment in a way that makes much more sense.

The Story Spine idea can be found in a lot of places on the internet (e.g., Pixar and storytelling), but I found out about it through the world of improv. At its core, the idea is to help improvisers go into a scene with a shared understanding of how a story works so that, no matter what sort of craziness they discover in the course of their improvising, they know that they are all playing out the same meta-narrative.

Simply put, the Story Spine divides a story into a series of sections that each start with the following phrases. As you can tell, almost every story you might think of would fit into this framework.

Once upon a time . . .

And every day . . .

Until one day . . .

Because of that . . .

Because of that . . .

Until finally . . .

And ever since then . . .

These section prompts can also fit into four parts of a cycle that represent the transition from an existing state of balance (“once upon a time” and “every day”), encountering a disruption of the existing balance (“until one day”), through a quest for resolution (“because of that,” “because of that,” and “until finally”), and into a new state of balance (“and ever since then”).

To me, this framework sounds a lot like the assessment loop that is so often trotted out to convey how an individual or an organization engages in assessment practices to improve quality. In the assessment loop, we are directed to “ask questions,” “gather evidence,” “analyze evidence,” and “use results.” But to be honest, I like the Story Spine a lot better. Aside from being pretty geeky, the assessment loop starts with a vague implication that trouble exists below the surface and without our knowledge. This might be true, but it isn’t particularly comforting. Furthermore, the assessment loop doesn’t seem to leave enough room for all of the forces that can swoop in and affect our work despite our best intentions. There is a subtle implication that educating is like some sort of assembly line that should work with scientific precision. Finally, the assessment loop usually ends with “using the results” or, at its most complex, some version of “testing the impact of something we’ve added to the mix as a result of our analysis of the evidence.” But in the real world, we are often faced with finding a way to adjust to a new normal – another way of saying that entering a new state of balance is as much a function of our own adjustment as it is the impact of our interventions.

So if you’ve ever wondered whether there was a better way to convey how we live out an ideal of continual improvement, maybe the Story Spine works better. And if we were to orient ourselves toward the future by thinking of the Story Spine as a map for what we will encounter and how we ought to be ready to respond, maybe – just maybe – we will be better able to manage our way through our own stories.

Make it a good day,

Mark

Some comfort thoughts about mapping

I hope you are enjoying the bright sunshine today.  Seeing that we might crack the 70 degree mark by the end of the week makes the sun that much more invigorating!

As you almost certainly know by now, we have been focusing on responding to the suggestions raised in the Higher Learning Commission accreditation report regarding programmatic assessment. The first step in that response has been to gather curricular and learning outcome maps for every major.

So far, we have 32 out of 45 major-to-college outcomes maps and 14 out of 45 courses-to-major outcomes maps.  Look at it as good or look at it as bad – at least we are making progress, and we’ve still got a couple weeks to go before I need to have collected them all. More importantly, I’ve been encouraged by the genuine effort that everyone has made to tackle this task. So thank you to everyone.

Yet as I’ve spoken with many of you, two themes have arisen repeatedly that might be worth sharing across the college and reframing just a bit.

First, many of you have expressed concern that these maps are going to be turned into sticks that are used to poke you or your department later. Second, almost everyone has worried about the inevitable gap between the ideal student’s progress through a major and the often less-ideal realities of the way that different students enter and progress through the major.

To both of those concerns, I’d like to suggest that you think of these maps as perpetual works in progress instead of some sort of contract that cannot be changed. The purpose of drawing out these maps is to make the implicit explicit – only as a starting point from which your program will constantly evolve. You’ll change things as your students change, as your instructional expertise changes, and as the future for which your program prepares students changes. In fact, probably the worst thing that could happen is a major that never changes anything no matter what changes around it.

The goal at this point isn’t to produce an unimprovable map. Instead, the goal is to put a map together that is your best estimate of what you and your colleagues are trying to do right now. From there, you’ll have a shared starting point that will make it a lot easier to identify and implement adjustments that will in turn produce tangible improvement.

So don’t spend too much time on your first draft. Just get something on paper (or pixels) that honestly represents what you are trying to do and send it to me using the templates I’ve already shared with everyone. Then expect that down the road you’ll decide to make a change and produce a second draft. And so on, and so on. It really is that simple.

Make it a good day,

Mark