Thoughts on Teaching at the End of the World

This past semester, I worked with 23 Honors students at UMass Amherst on a course modestly entitled “The Politics of the End of the World.” In that course, students explored different ways in which the physical sciences, the social sciences, and the humanities had approached ends of the world caused or threatened by disease, nuclear warfare, environmental change, and cultural processes. The students concluded the term by working in groups to prepare podcasts on how various societies in Massachusetts had approached the same ends.

This class was an evolution of an earlier version of the course, a one-credit-hour special topics class offered two years ago. Reworking that readings-and-seminar course into the new, four-credit-hour version formed my project during a year as a Lilly Teaching Fellow. That UMass program gave me a year’s worth of seminars, and a course buyout, during which I could refine my understanding of the end of the world and of how to teach at a college level. I want to reflect on why I structured the course as I did and how I would revise that structure in the future.

My vision for the course was a simple one: having students learn by creating. Given that the study of the end of the world as a theoretical concept remains in its infancy, I could not teach the students by having them become acolytes to some dead scribbler or even by introducing them to the work of a community of scholars. Rather, I knew that we would have to knit together disparate communities whose work touched on a common theme, but who did not yet know themselves to be participating in a common endeavor. Moreover, even had such a project existed, I would have still wanted to have the students engage with the subject by creating their own knowledge, rather than copying and repeating someone else’s theories or descriptions.

The choice of a podcast as a final project followed naturally from that vision. Requiring students to create a podcast combined all the advantages of an open canvas with the additional benefit that numerous examples exist showing how to structure such a work. Like a traditional research paper, it would allow students to pose and explore a new argument; unlike those dreary assignments, a podcast—simply by being novel—would spur students to work with more creative (but no less rigorous) ways of performing analyses.

The technical challenge of producing podcasts rather than entering text into a word processor also put the project in a sweet spot. Because podcasts can incorporate interviews, sound effects, music, and other editing, they reflect, better than purely textual assignments do, the mediated information environment in which people, including students, actually live. Furthermore, the very complexity of creating such a work provided a structure that would encourage (and all but require) students to master project planning, a division of labor, and creative ways to deliver their message. Yet podcasts remain far less complex than visual media, like an online video or a television program. Every technical challenge inherent in a podcast remains in a visual project, compounded by the further challenges of shot framing, lighting, and acting. Those challenges can overshadow the goal of creating an argument rather than a spectacle. A podcast, then, would offer the right mix of complication and simplicity to help students break out of the 8.5” x 11” box within which so much undergraduate education is crafted to fit.

The path from assignment to course was, unsurprisingly, less direct than it appears in retrospect. I am not sure the podcast assignment began as a deliberate way to spark and assess learning in this course; it may instead have been a natural advance on the complex group assignments—debates, briefing books, presentations—I had used in earlier courses on foreign policy and international relations. I do know that all of these assignments proceed from a corollary of my belief that students should learn by doing: students are capable of doing far more than most courses assume or allow.

The paradigmatic college course, after all, proceeds from examinations. Tests exist to measure students’ performance relative to some fixed baseline. The questions implicitly posed by this paradigm equate learning with that performance: How much of the material can students recall? How adept are students at applying these theories to new problems? How much of this apparent mastery is due to chance, and how many testing instruments must we have to isolate any measurement or idiosyncratic error from the underlying concept of performance that we wish to identify?
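That last question has a concrete shape in classical test theory, which treats each observed score as the student’s true score plus random error; averaging across more instruments shrinks the noise around the true score by the square root of their number. A minimal simulation, with hypothetical numbers, makes the point:

```python
import random

def observed_scores(true_score, error_sd, n_instruments):
    """Classical test theory: each observed score = true score + random error."""
    return [random.gauss(true_score, error_sd) for _ in range(n_instruments)]

random.seed(1)
TRUE_SCORE = 80.0   # the underlying performance we wish to identify
ERROR_SD = 8.0      # idiosyncratic error on any single test

for k in (1, 4, 16):
    estimate = sum(observed_scores(TRUE_SCORE, ERROR_SD, k)) / k
    print(f"{k:2d} instrument(s): estimated score = {estimate:5.1f}")

# The standard error of the mean shrinks as 1/sqrt(k): quadrupling the
# number of tests only halves the noise around the true score.
```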

The alternative to this closed paradigm recognizes that students may acquire skills that cannot be so easily measured by multiple-choice or essay questions; that the value of their performance may be greater than the sum of those easily measured parts; that the instructor may not know ex ante which skills constitute the totality of the relevant set; and that students may exceed their instructors’ ability in some of those skills even within the duration of a course. Such an open-ended view proves harder to express to assessment boards, since an instructor can only describe the processes by which assessment will occur rather than specifying the standard against which students will be measured.

This open-ended course design more closely approximates the way in which college faculty themselves are assessed in their primary function of research. As annoying as peer review may be, the research enterprise remains fundamentally driven by goals of originality, significance, and relevance that cannot be specified—against what rubric would Einstein’s paper on Brownian motion be scored?—but which can nevertheless be employed in practice. Something similar holds in more important arenas, from commerce to the theater. Novel products and texts cannot be scored in advance of their creation, even if, once they exist, they may be compared to others to arrive at informed judgments of their worth, difficulty, and utility. Even there, however, no standard binds any audience to returning a final assessment that simply sums across various categories; were this possible, of course, judging the Academy Awards or ranking consumer electronics would be as simple as entering some values into a spreadsheet.

In this context, the ideas of learning by doing and that students may do more than we typically assume reach their fullest efflorescence. The purpose of a class in this paradigm is to lay a foundation, provide resources, and then allow students to reach a goal through any feasible strategy they choose. The idea of the instructor as a guardian of eternal verities or gatekeeper of knowledge vanishes as a consequence of this structural rearrangement. The instructor becomes instead a coach and a resource, an assistant rather than an autocrat, benevolent or otherwise. And knowledge becomes not some treasure to be hoarded but an activity to be practiced or a performance to be assessed and refined.

How well did the course realize the vision? In all, I was pleased. The students did their part, despite some hesitation at the unfamiliar parts of the course (notably, the podcast designed to be released to the world) and even more trepidation caused by the familiar parts (in particular, expectations about grading and poor experiences with earlier group projects). Still, the course could have gone better.

One improvement would have been greater transparency about the underlying objectives of the course and how they related to the readings, the assignments, and the overarching structure. I’m not a transparency zealot; in fact, I think not everything should be spelled out. To spotlight some concepts is to eliminate things that can only be seen in the dark. One cannot be frightened of a haunted house if a tape recording constantly reminds visitors that they are in no danger. But many other steps could be taken: talking about why the course readings drew on different disciplines, for instance, or about how fictional portrayals of ends of the world should be assessed when juxtaposed with their nonfictional counterparts.

Another improvement would have been to incorporate much more specific skill-building exercises—a bit of scaffolding for the bigger edifices. It’s easiest to visualize this for the editors, who worked directly with the audio editing software: I could have designed and assigned exercises on importing, trimming, adjusting levels, and so on, and those would have been relatively easy. Similarly, talent and others could have been assigned to work with editors on recording over telephones and Skype, recording interviews out in the world, using a mixer, and so on. Talent could have worked with writers to figure out how to make scripts more conversational. Some of these I skipped because I thought folks could work them out (and they did!), some because I thought the AV lab could provide instruction, and some because I didn’t know they would be necessary. Added to all of that, of course, is the fact that designing new exercises on top of a new class takes time. But it would have been worthwhile, and it’s a much easier fix to make in a future revision of the class.
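To make that concrete, an editors’ warm-up exercise might look something like the sketch below. It uses the pydub Python library purely for illustration (the course used a GUI editor, and the file names and timestamps here are hypothetical):

```python
# A hypothetical warm-up drill for editors: import a raw recording,
# trim it to the usable segment, even out the levels, and export.
# Requires pydub and ffmpeg; "raw_interview.wav" is a placeholder.
from pydub import AudioSegment
from pydub.effects import normalize

raw = AudioSegment.from_file("raw_interview.wav")  # importing
clip = raw[2_000:45_000]        # trimming: keep 0:02 through 0:45
clip = normalize(clip)          # adjusting levels to a consistent peak
clip = clip.fade_in(200).fade_out(500)  # smooth the cut points
clip.export("segment01.mp3", format="mp3")
```

Drills at this scale take minutes to review, yet they surface the mechanical stumbling blocks before they can derail a full episode.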

Two big risks I took paid off: collaborative rubric design and grade weighting and, separately, the strategic use of unstructured time. The students worked to develop both the rubric for the podcast and the points weighting for the course, including how much of the final project grade should come from team performance and how much from individual contribution. (I provided guidelines and made the ultimate decisions.) This both made the standards clear and made the final assessment exercise more collaborative than it would otherwise have been.
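In practice, the weighting question reduces to simple arithmetic. A minimal sketch, with a hypothetical split and hypothetical scores rather than the ones the class actually chose:

```python
def project_grade(team_score, individual_score, team_weight):
    """Blend team and individual performance into one project grade."""
    return team_weight * team_score + (1 - team_weight) * individual_score

# Hypothetical 60/40 split between the team product and each
# member's individual contribution:
print(project_grade(team_score=92, individual_score=80, team_weight=0.6))
# -> 87.2: a strong team product buoys, but does not erase,
#    a weaker individual contribution.
```

Trivial as the formula is, putting the weight itself on the table gave the students a concrete lever to debate, which is what made the exercise collaborative.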

Separately, my syllabus for the course included far more “unstructured” time than any I’ve ever written. There were many dates marked “TBD” on the syllabus when we began—perhaps a quarter of the course—and although a few of those were filled in as the term progressed, many remained nominally blank. This had several advantages. First, for a first-time course it’s hard to know how long some topics will take and where the instructor will need to reinforce some lessons; the slack allowed me to make course corrections without too much pressure. Second, it naturally put the focus on the group activity rather than on the instructor. Third, coordinating meetings outside of class proved difficult (a subject for another post), and having ample time within class let collaborative work function at a higher level.

A few points were ambivalent—good foundations, weak execution. The CATME tool for peer evaluation was a logistical lifesaver, but I could have done more to challenge students to use their rankings as an opportunity for reflection. I also found that students reacted very strongly to negative feedback: they were apt to read data showing a weakness as an attack or a bad grade rather than as a spur for the whole team to reflect on why one member or another wasn’t doing as well as they could have been. Often, those instances reflected less apathy or incompetence than poor communication practices on both sides.

In the future, I suspect that I will require all students to use the same audio editing program; a common tool makes troubleshooting and exchanging technical information much easier, even if some students arrive with experience in different programs. That program will likely be Audacity, which is cross-platform, and that makes a huge difference in usability.

Finally, I think that making more use of critiques of real-world podcasts, with those critiques broken down by role so that researchers listened for research, talent for production tips, and editors for editing technique, would have done much to focus everyone on putting together a finished product. In particular, some questions—like whether one is “allowed” to say that one has doubts or was surprised by a particular fact—would have been answered already by listening to professionals and scholars say just that!

Overall, I’m happy with how the course progressed. It was a big risk, but the results, which I look forward to sharing, were heartening. And I think that at least a few students learned at least a few things.