Seeking to have a rich discussion period—which we did indeed have—we limited our talks to about 12 minutes each. My presentation was therefore more evocative than comprehensive, more open-ended than conclusive. There are primary sources I’m still searching for and technical details I’m still sorting out. I welcome feedback, criticism, and leads.
An Account of Randomness in Literary Computing
MLA 2013, Boston
There’s a very simple question I want to ask this evening:
Where does randomness come from?
Randomness has a rich history in arts and literature, which I don’t need to go into today. Suffice it to say that long before Tristan Tzara suggested writing a poem by pulling words out of a hat, artists, composers, and writers were using so-called “chance operations” to create unpredictable, provocative, and occasionally nonsensical work. John Cage famously used chance operations in his experimental compositions, relying on lists of random numbers from Bell Labs to determine elements like pitch, amplitude, and duration (Holmes 107–108). Jackson Mac Low similarly used random numbers to generate his poetry, relying in particular on a book called A Million Random Digits with 100,000 Normal Deviates to supply those numbers (Zweig 85).
Published by the RAND Corporation in 1955 to supply Cold War scientists with random numbers to use in statistical modeling (Bennett 135), the book is still in print—and you should check out the parody reviews on Amazon.com. “With so many terrific random digits,” one reviewer jokes, “it’s a shame they didn’t sort them, to make it easier to find the one you’re looking for.”
This joke actually speaks to a key aspect of randomness: the need to reuse random numbers, so that if, say, you’re running a simulation of nuclear fission, you can repeat the simulation with the same random numbers—that is, the same probabilities—while testing some other variable. In fact, most of the early work on random number generation in the United States was funded by either the U.S. Atomic Energy Commission or the U.S. military (Montfort et al. 128). The RAND Corporation itself began as a research and development arm of the U.S. Air Force.
Now the thing with going down a list of random numbers in a book, or pulling words out of a hat—a composition method, by the way, that Thom Yorke used for Kid A after a frustrating bout of writer’s block—is that the process is visible. Randomness in these cases produces surprises, but the source of the randomness itself is not a surprise. You can see how it’s done.
What I want to ask here today is, where does randomness come from when it’s invisible? What’s the digital equivalent of pulling words out of a hat? And what are the implications of chance operations performed by a machine?
To begin to answer these questions I am going to look at two early works of electronic literature that rely on chance operations. And when I say early works of electronic literature, I mean early, from fifty and sixty years ago. One of these works has been well studied and the other has been all but forgotten.
My first case study is the Strachey Love Letter Generator. Programmed by Christopher Strachey, a close friend of Alan Turing, the Love Letter Generator is likely—as Noah Wardrip-Fruin argues—the first work of electronic literature, which is to say a digital work that somehow makes us more aware of language and meaning-making. Strachey’s program “wrote” a series of purplish prose love letters on the Ferranti Mark I Computer—the first commercially available computer—at Manchester University in 1952 (Wardrip-Fruin “Digital Media” 302):
YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. YOU ARE MY WISTFUL SYMPATHY: MY TENDER LIKING.
M. U. C.
Affectionately known as M.U.C., the Manchester University Computer could produce these love letters at a pace of one per minute, for hours on end, without producing a duplicate.
The “trick,” as Strachey put it in a 1954 essay about the program (29–30), is its two template sentences (“My [adjective] [noun] [adverb] [verb] your [adjective] [noun]” and “You are my [adjective] [noun]”) in which the nouns, adjectives, and adverbs are randomly selected from a list of words Strachey had culled from a Roget’s thesaurus. Adverbs and adjectives randomly drop out of the sentence as well, and the computer randomly alternates the two sentences.
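Strachey’s actual word lists and selection logic survive in his notes at the Bodleian; purely as an illustration of the template idea (with a small stand-in vocabulary, not Strachey’s lists, and invented probabilities for dropping words), a sketch in Python might look like this:

import random

# Stand-in vocabulary only; Strachey culled his word lists from a Roget's thesaurus.
ADJECTIVES = ["AVID", "WISTFUL", "TENDER", "PASSIONATE", "FERVENT"]
NOUNS = ["FELLOW FEELING", "AFFECTION", "LIKING", "SYMPATHY", "DEVOTION"]
ADVERBS = ["CURIOUSLY", "ARDENTLY", "SEDUCTIVELY"]
VERBS = ["CLINGS TO", "YEARNS FOR", "WOOS", "TREASURES"]

def maybe(word):
    # Adjectives and adverbs randomly drop out of the sentence.
    return word + " " if random.random() < 0.5 else ""

def long_sentence():
    # Template one: My (adjective) noun (adverb) verb your (adjective) noun.
    return ("MY " + maybe(random.choice(ADJECTIVES)) + random.choice(NOUNS) + " "
            + maybe(random.choice(ADVERBS)) + random.choice(VERBS) + " YOUR "
            + maybe(random.choice(ADJECTIVES)) + random.choice(NOUNS) + ".")

def short_sentence():
    # Template two: You are my adjective noun.
    return "YOU ARE MY " + random.choice(ADJECTIVES) + " " + random.choice(NOUNS) + "."

def love_letter(sentences=5):
    # The program randomly alternates between the two templates.
    return " ".join(random.choice([long_sentence, short_sentence])() for _ in range(sentences))

print(love_letter())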
The Love Letter Generator has attracted—for a work of electronic literature—a great deal of scholarly attention. Using Strachey’s original notes and source code (see figure to the left), which are archived at the Bodleian Library at the University of Oxford, David Link has built an emulator that runs Strachey’s program, and Noah Wardrip-Fruin has written a masterful study of both the generator and its historical context.
As Wardrip-Fruin calculates, given that there are 31 possible adjectives after the first sentence’s opening possessive pronoun “My” and then 20 possible nouns that could occupy the following slot, the first three words of this sentence alone have 899 possibilities. And the entire sentence has over 424 million combinations (424,305,525 to be precise) (“Digital Media” 311).
On the whole, Strachey was publicly dismissive of his foray into the literary use of computers. In his 1954 essay, which appeared in the prestigious trans-Atlantic arts and culture journal Encounter (a journal, it would be revealed in the late 1960s, that was primarily funded by the CIA—see Berry, 1993), Strachey used the example of the love letters to illustrate his point that simple rules can generate diverse and unexpected results (Strachey 29-30). And indeed, the Love Letter Generator qualifies as an early example of what Wardrip-Fruin calls, referring to a different work entirely, the Tale-Spin effect: a surface illusion of simplicity which hides a much more complicated—and often more interesting—series of internal processes (Expressive Processing 122).
Wardrip-Fruin coined this term—the Tale-Spin effect—from Tale-Spin, an early story generation system designed by James Meehan at Yale University in 1976. Tale-Spin tended to produce flat, plodding narratives, though there was the occasional existential story:
Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. He was unable to call for help. He drowned.
But even in these suggestive cases, the narratives give no sense of the process-intensive—to borrow from Chris Crawford—calculations and assumptions occurring behind the interface of Tale-Spin.
In a similar fashion, no single love letter reveals the combinatory procedures at work by the Mark I computer.
MY AFFECTION LUSTS FOR YOUR TENDERNESS. YOU ARE MY PASSIONATE DEVOTION: MY WISTFUL TENDERNESS. MY LIKING WOOS YOUR DEVOTION. MY APPETITE ARDENTLY TREASURES YOUR FERVENT HUNGER.
M. U. C.
This Tale-Spin effect—the underlying processes obscured by the seemingly simplistic, even comical surface text—is what draws Wardrip-Fruin to the work. But I want to go deeper than the algorithmic process that can produce hundreds of millions of possible love letters. I want to know: what is the source of randomness in the algorithm? We know Strachey’s program employs randomness, but where does that randomness come from? This is something the program—the source code itself—cannot tell us, because randomness operates at a different level, not at the level of code or software, but in the machine itself, at the level of hardware.
In the case of Strachey’s Love Letter Generator, we must consider the computer it was designed for, the Mark I. One of the remarkable features of this computer was that it had a hardware-based random number generator. The random number generator pulled a string of random numbers from what Turing called “resistance noise”—that is, electrical signals produced by the physical functioning of the machine itself—and put the twenty least significant digits of this number into the Mark I’s accumulator—its primary mathematical engine (Turing). Alan Turing himself specifically requested this feature, having theorized with his earlier Turing Machine that a purely logical machine could not produce randomness (Shiner). And Turing knew—like his Cold War counterparts in the United States—that random numbers were crucial for any kind of statistical modeling of nuclear fission.
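The circuit details are Turing’s, documented in his programmers’ handbook; what follows is only a toy illustration of the idea of keeping the twenty least significant binary digits of a noisy sample (and, ironically, it has to fake the noise with a pseudo-random call):

import random

def sample_resistance_noise():
    # Stand-in for an analog noise measurement; the real Mark I read
    # electrical noise generated by the machine itself.
    return random.getrandbits(40)

def mark_i_style_random():
    # Keep only the twenty least significant bits of the sample, roughly as
    # the Mark I's random number instruction placed a 20-digit value into
    # the accumulator.
    return sample_resistance_noise() & ((1 << 20) - 1)

print(mark_i_style_random())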
I have more to say about randomness in the Strachey Love Letter Generator, but before I do, I want to move to my second case study. This is an early, largely unheralded work called SAGA. SAGA was a script-writing program on the TX-0 computer. The TX-0 was the first computer to replace vacuum tubes with transistors and also the first to use interactive graphics—it even had a light pen.
The TX-0 was built at Lincoln Laboratory in 1956—a classified MIT facility in Bedford, Massachusetts, chartered with the mission of designing the nation’s first air defense detection system. After the TX-0 proved that transistors could out-perform and outlast vacuum tubes, the computer was transferred to MIT’s Research Laboratory of Electronics in 1958 (McKenzie), where it became a kind of playground for the first generation of hackers (Levy 29–30).
In 1960, CBS broadcast an hour-long special about computers called “The Thinking Machine.” For the show, MIT engineers Douglas Ross and Harrison Morse wrote a 13,000-line program in six weeks that generated a climactic shoot-out scene from a Western.
Several computer-generated variations of the script were performed on the CBS program. As Ross told the story years later, “The CBS director said, ‘Gee, Westerns are so cut and dried couldn’t you write a program for one?’ And I was talked into it.”
The TX-0’s large—for the time period—magnetic core memory was used “to keep track of everything down to the actors’ hands.” As Ross explained it, “The logic choreographed the movement of each object, hands, guns, glasses, doors, etc.” (“Highlights from the Computer Museum Report”).
And here is the actual output from the TX-0, printed on the lab’s Flexowriter printer, where you can get a sense of the way SAGA generated the play.
In the CBS broadcast, Ross explained the narrative sequence as a series of forking paths.
Each “run” of SAGA was defined by sixteen initial state variables, with each state having several weighted branches (Ross 2). For example, one of the initial settings is who sees whom first. Does the sheriff see the robber first or is it the other way around? This variable will influence who shoots first as well.
There’s also a variable the programmers called the “inebriation factor,” which increases a bit with every shot of whiskey, and doubles for every swig straight from the bottle. The more the robber drinks, the less logical he will be. In short, every possibility has its own likely consequence, measured in terms of probability.
The MIT engineers had a mathematical formula for this probability (Ross 2).
But more revealing to us is the procedure itself of writing one of these Western playlets.
First, a random number is set; this number determines the probability of the various weighted branches. The programmers did this simply by typing a number after the RUN command when they launched SAGA; you can see this in the second slide above, where the random number is 51455. Next, a timing number establishes how long the robber is alone before the sheriff arrives (the longer the robber is alone, the more likely he is to drink). Finally, each state variable is read, and the outcome—or branch—of each step is determined.
What I want to call your attention to is how the random number is not generated by the machine. It is entered in “by hand” when one “runs” the program. In fact, launching SAGA with the same random number and the same switch settings will reproduce a play exactly (Ross 2).
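Ross’s actual state tables and weights aren’t reproduced here, but the general mechanism (weighted branches driven by a pseudo-random sequence that can be replayed from the same starting number) can be sketched with hypothetical states and made-up weights:

import random

# Hypothetical states and weights, standing in for SAGA's sixteen state
# variables and their weighted branches (Ross 2).
BRANCHES = {
    "WHO SEES WHOM FIRST": [("SHERIFF SEES ROBBER", 0.5), ("ROBBER SEES SHERIFF", 0.5)],
    "ROBBER AT THE BAR": [("TAKES A SHOT OF WHISKEY", 0.6),
                          ("SWIGS FROM THE BOTTLE", 0.3),
                          ("LEAVES THE BOTTLE ALONE", 0.1)],
    "THE DRAW": [("SHERIFF SHOOTS FIRST", 0.55), ("ROBBER SHOOTS FIRST", 0.45)],
}

def run_saga(random_number):
    # Seeding with the number typed after RUN makes the "chance" outcomes
    # deterministic: the same number reproduces the same playlet.
    rng = random.Random(random_number)
    for state, options in BRANCHES.items():
        outcomes, weights = zip(*options)
        print(state + ": " + rng.choices(outcomes, weights=weights, k=1)[0])

run_saga(51455)  # running this again with 51455 yields the identical playlet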
In a foundational 1996 work, The Virtual Muse, Charles Hartman observed that randomness “has always been the main contribution that computers have made to the writing of poetry”—and one might be tempted to add, to electronic literature in general (Hartman 30). Yet the two case studies I have presented today complicate this notion. The Strachey Love Letter Generator would appear to exemplify the use of randomness in electronic literature. But—and I didn’t say this earlier—the random numbers produced by the Mark I’s method could not be reliably reproduced; remember, random numbers often need to be reused, so that the programs that rely on them can be repeated. Repeatable sequences like these are called pseudo-random, and this is why a book like the RAND Corporation’s A Million Random Digits is so valuable.
But the Mark I’s random numbers were so unreliable that they made debugging programs difficult, because errors never occurred the same way twice. The random number instruction eventually fell out of use on the machine (Campbell-Kelly 136). Skip ahead 8 years to the TX-0 and we find a computer that doesn’t even have a random number generator. The random numbers must be entered manually.
The examples of the Love Letters and SAGA suggest at least two things about the source of randomness in literary computing. One, there is a social-historical source: wherever you look at randomness in early computing, the Cold War is there. The impact of the Cold War upon computing and videogames has been well documented (see, for example, Edwards 1996 and Crogan 2011), but few have studied how deeply embedded the Cold War is in the very software algorithms and hardware processes of modern computing.
Second, randomness does not have a progressive timeline. The story of randomness in computing—and especially in literary computing—is neither straightforward nor self-evident. Its history is uneven, contested, and mostly invisible. So even when we understand the concept of randomness in electronic literature—and new media in general—we often misapprehend its source.
Bennett, Deborah. Randomness. Cambridge, MA: Harvard University Press, 1998. Print.
Turing, A.M. “Programmers’ Handbook for the Manchester Electronic Computer Mark II.” Oct. 1952. Web. 23 Dec. 2012.
Wardrip-Fruin, Noah. “Digital Media Archaeology: Interpreting Computational Processes.” Media Archaeology: Approaches, Applications, and Implications. Ed. Erkki Huhtamo and Jussi Parikka. Berkeley: University of California Press, 2011. Print.
—. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, MA: MIT Press, 2009. Print.
Zweig, Ellen. “Jackson Mac Low: The Limits of Formalism.” Poetics Today 3.3 (1982): 79–86. Web. 1 Jan. 2013.
I’m delighted to announce the publication of 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (MIT Press, 2013). My co-authors are Nick Montfort (who conceived the project), Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark Marino, Michael Mateas, Casey Reas, and Noah Vawter. Published in MIT Press’s Software Studies series, 10 PRINT is about a single line of code that generates a continuously scrolling random maze on the Commodore 64. 10 PRINT is aimed at people who want to better understand the cultural resonance of code. But it’s also about aesthetics, hardware, typography, randomness, and the birth of home computing. 10 PRINT has already attracted attention from Bruce Sterling (who jokes that the title “really rolls off the tongue”), Slate, and Boing Boing. And we want humanists (digital and otherwise) to pay attention to the book as well (after all, five of the co-authors hold Ph.D.’s in literature, not computer science).
Aside from its nearly unpronounceable title, 10 PRINT is an unconventional academic book in a number of ways:
10 PRINT was written by ten authors in one voice. That is, it’s not a collection with each chapter written by a different individual. Every page of every chapter was collaboratively produced, a mind-boggling fact to humanists mired in the model of the single-authored manuscript. A few months before I knew I was going to work on 10 PRINT, I speculated that the future of scholarly publishing was going to be loud, crowded, and out of control. My experience with 10 PRINT bore out that theory—though the end product does not reflect the messiness of the writing process itself, which I’ll address in an upcoming post.
10 PRINT is nominally about a single line of code—the eponymous BASIC program for the Commodore 64 that goes 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. But we use that one line of code as both a lens and a mirror to explore so much more. In his generous blurb for 10 PRINT, Matt Kirschenbaum quotes William Blake’s line about seeing the world in a grain of sand. This short BASIC program is our grain of sand, and in it we see vast cultural, technological, social, and economic forces at work.
10 PRINT emerges at the same time that the digital humanities appear to be sweeping across colleges and universities, yet it stands in direct opposition to the primacy of “big data” and “distant reading”—two of the dominant features of the digital humanities. 10 PRINT is nothing if not a return to close reading, to small data. Instead of speaking in terms of terabytes and petabytes, we dwell in the realm of single bits. Instead of studying datasets of unimaginable size we circle iteratively around a single line of code, reading it again and again from different perspectives. Even single characters in that line of code—say, the semicolon—become subject to intense scrutiny and yield surprising finds.
10 PRINT practices making in order to theorize being. My co-author Ian Bogost calls it carpentry. I’ve called it deformative humanities. It’s the idea that we make new things in order to understand old things. In the case of 10 PRINT, my co-authors and I have written a number of ports of the original program that run on contemporaries of the C64, like the Atari VCS, the Apple IIe, and the TRS-80 Color Computer. One of the methodological premises of 10 PRINT is that porting—like the act of translation—reveals new facets of the original source. Porting—again, like translation—also makes visible the broader social context of the original.
In the upcoming days I’ll be posting more about 10 PRINT, discussing the writing process, the challenges of collaborative authorship, our methodological approaches, and some of the rich history we uncovered by looking at a single line of code.
In the meantime, a gorgeous hardcover edition is available (beautifully designed by my co-author, Casey Reas). Or download a free PDF released under a Creative Commons BY-NC-SA license.
What follows is a comprehensive list of digital humanities sessions at the 2013 Modern Language Association Conference in Boston.
These are sessions that in some way address the influence and impact of digital materials and tools upon language, literary, textual, and media studies, as well as upon online pedagogy and scholarly communication. The 2013 list stands at 66 sessions, a slight increase from 58 sessions in 2012 (and 44 in 2011, and only 27 the year before). Perhaps the incremental increase this year means that the digital humanities presence at the convention is topping out, leveling off at around 8 percent of the 795 total sessions. Or maybe it’s an indicator of growing resistance to what some see as the hegemony of digital humanities. Or it could be that I simply missed some sessions—if so, please correct me in the comments and I’ll add them to the list.
Presiding: Brian Croxall, Emory Univ.; Adeline Koh, Richard Stockton Coll. of New Jersey
This workshop is an "unconference" on digital pedagogy. Unconferences are participant-driven gatherings where attendees spontaneously generate the itinerary. Participants will propose discussion topics in advance on our Web site, voting on final sessions at the workshop’s start. Attendees will consider what they would like to learn and instruct others about teaching with technology. Preregistration required.
Thursday, 3 January, 8:30–11:30 a.m., Republic A, Sheraton
Presiding: Alison Byerly, Middlebury Coll.; Kathleen Fitzpatrick, MLA; Katherine A. Rowe, Bryn Mawr Coll.
Facilitated discussion about evaluating work in digital media (e.g., scholarly editions, databases, digital mapping projects, born-digital creative or scholarly work). Designed for both creators of digital materials and administrators or colleagues who evaluate those materials, the workshop will propose strategies for documenting, presenting, and evaluating such work. Preregistration required.
Presiding: Trent M. Kays, Univ. of Minnesota, Twin Cities; Lee Skallerup Bessette, Morehead State Univ.
Speakers: Marc Fortin, Queen’s Univ.; Alexander Gil, Univ. of Virginia; Brian Larson, Univ. of Minnesota, Twin Cities; Sophie Marcotte, Concordia Univ.; Ernesto Priego, London, England
Digital humanities are often seen to be a monolith, as shown in recent publications that focus almost exclusively on the United States and English-language projects. This roundtable will bring together digital humanities scholars from seemingly disparate disciplines to show how bridges can be built among languages, cultures, and geographic regions in and through digital humanities.
Presiding: Robert R. Bleil, Coll. of Coastal Georgia; Jennifer Gray, Coll. of Coastal Georgia
Speakers: Susan Cook, Southern New Hampshire Univ.; Christopher Dickman, Saint Louis Univ.; T. Geiger, Syracuse Univ.; Jennifer Gray; Matthew Parfitt, Boston Univ.; James Sanchez, Texas Christian Univ.
Responding: Robert R. Bleil
Nicholas Carr’s 2008 article "Is Google Making Us Stupid?" and his 2010 book The Shallows: What the Internet Is Doing to Our Brains argue that the paradigms of our digital lives have shifted significantly in two decades of living life online. This roundtable unites teachers of composition and literature to explore cultural, psychological, and developmental changes for students and teachers.
Speakers: Robin Bernstein, Harvard Univ.; Lindsay DiCuirci, Univ. of Maryland Baltimore County; Laura Fisher, New York Univ.; Laurie Lambert, New York Univ.; Janice A. Radway, Northwestern Univ.; Joseph Rezek, Boston Univ.
Archivally driven research is changing the methodologies with which we approach the past, the types of questions that we can ask and answer, and the historical voices that are heard and suppressed. The session will address the role of archives, both digital and material, in literary and cultural studies. What risks and rewards do we need to be aware of when we use them?
Thursday, 3 January, 5:15–6:30 p.m., Liberty C, Sheraton
Presiding: Andrew Piper, McGill Univ.
Speakers: Mark Algee-Hewitt, Stanford Univ.; Lindsey Eckert, Univ. of Toronto; Neil Fraistat, Univ. of Maryland, College Park; Matthew Jockers, Univ. of Nebraska, Lincoln; Laura C. Mandell, Texas A&M Univ., College Station; Jeffrey Thompson Schnapp, Harvard Univ.
As part of the ongoing debate about the impact and efficacy of the digital humanities, this roundtable will explore the theoretical, practical, and political implications of the rise of the literary lab. How will changes in the materiality and spatiality of our research and writing change the nature of that research? How will the literary lab impact the way we work?
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Jamie Skye Bianco, Univ. of Pittsburgh; Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Jennifer Laherty, Indiana Univ., Bloomington; Monica McCormick, New York Univ.; Katie Rawson, Emory Univ.
As open-access scholarly publishing matures and movements such as the Elsevier boycott continue to grow, open-access publications have begun to move beyond the simple (but crucial) principle of openness toward an ideal of interactivity. This session will explore innovative examples of open-access scholarly publishing that showcase new types of social, interactive, mixed-media texts.
Presiding: Alex Mueller, Univ. of Massachusetts, Boston
Speakers: Kathleen Fitzpatrick, MLA; Martin Foys, Drew Univ.; Matthew Kirschenbaum, Univ. of Maryland, College Park; Stephen G. Nichols, Johns Hopkins Univ., MD; Kathleen A. Tonry, Univ. of Connecticut, Storrs; Sarah Werner, Folger Shakespeare Library
In this roundtable, scholars of manuscripts, print, and digital media will discuss how contemporary forms of textuality intersect with, duplicate, extend, or draw on manuscript technologies. Panelists seek to push the discussion beyond traditional notions of supersession or remediation to consider the relevance of past textual practices in our analyses of emergent ones.
Presiding: Adeline Koh, Richard Stockton Coll. of New Jersey
Speakers: Moya Bailey, Emory Univ.; Anne Cong-Huyen, Univ. of California, Santa Barbara; Hussein Keshani, Univ. of British Columbia; Maria Velazquez, Univ. of Maryland, College Park
Responding: Alondra Nelson, Columbia Univ.
This panel examines the politics of race, ethnicity, and silence in the digital humanities. How has the digital humanities remained silent on issues of race and ethnicity? How does this silence reinforce unspoken assumptions and doxa? What is the function of racialized silences in digital archival projects?
Speakers: Travis Brown, Univ. of Maryland, College Park; Johanna Drucker, Univ. of California, Los Angeles; Eric Rochester, Univ. of Virginia; Geoffrey Rockwell, Univ. of Alberta; Jentery Sayers, Univ. of Victoria; Susan Schreibman, Trinity Coll. Dublin
Working only with set texts limits the use of many digital tools. What most advances literary research: aiming applications at scholarly primitives or at more culturally embedded activities that may resist generalization? Panelists’ reflections on the challenges of interoperability in a methodologically diverse field will include project snapshots evaluating the potential or perils of such aims.
Presiding: Jason C. Rhody, National Endowment for the Humanities
This workshop will highlight recent awards and outline current funding opportunities. In addition to emphasizing grant programs that support individual and collaborative research and education, the workshop will include information on the NEH’s Office of Digital Humanities. A question-and-answer period will follow.
Friday, 4 January, 1:45–3:00 p.m., Back Bay D, Sheraton
Presiding: Richard A. Grusin, Univ. of Wisconsin, Milwaukee
Speakers: Wendy H. Chun, Brown Univ.; Richard A. Grusin; Patrick Jagoda, Univ. of Chicago; Tara McPherson, Univ. of Southern California; Rita Raley, Univ. of California, Santa Barbara
This roundtable explores the impact of digital humanities on research and teaching in higher education and the question of how digital humanities will affect the future of the humanities in general. Speakers will offer models of digital humanities that are not rooted in technocratic rationality or neoliberal economic calculus but that emerge from and inform traditional practices of humanist inquiry.
Friday, 4 January, 1:45–3:00 p.m., Fairfax A, Sheraton
Presiding: Stephen G. Nichols, Johns Hopkins Univ., MD
Speakers: Karen L. Fresco, Univ. of Illinois, Urbana; Albert Lloret, Univ. of Massachusetts, Amherst; Jacques Neefs, Johns Hopkins Univ., MD
Responding: Timothy L. Stinson, North Carolina State Univ.
This panel explores the resistance of editors to exploring digital editions. Questions posed: Do scholarly protocols deliberately resist computational methodologies? Or are we still in a liminal period where print predominates for lack of training in the new technology? Does the problem lie with a failure to encourage digital research by younger scholars?
Presiding: Michael Bérubé, Penn State Univ., University Park
"The Mirror and the LAMP," Matthew Kirschenbaum, Univ. of Maryland, College Park
"Access Demands a Paradigm Shift," Cathy N. Davidson, Duke Univ.
"Resistance in the Materials," Bethany Nowviskie, Univ. of Virginia
The news that digital humanities are the next big thing must come as a pleasant surprise to people who have been working in the field for decades. Yet only recently has the scholarly community at large realized that developments in new media have implications not only for the form but also for the content of scholarly communication. This session will explore some of those implications—for scholars, for libraries, for journals, and for the idea of intellectual property.
Friday, 4 January, 5:15–6:30 p.m., Back Bay D, Sheraton
Presiding: Russell A. Berman, Stanford Univ.
Speakers: Carlos J. Alonso, Columbia Univ.; Lanisa Kitchiner, Howard Univ.; David Laurence, MLA; Bethany Nowviskie, Univ. of Virginia; Elizabeth M. Schwartz, San Joaquin Delta Coll., CA; Sidonie Ann Smith, Univ. of Michigan, Ann Arbor; Kathleen Woodward, Univ. of Washington, Seattle
Doctoral study faces multiple pressures, including profound transformations in higher education and the academic job market, changing conditions for new faculty members, the new media of scholarly communication, and placements in nonfaculty positions. These and other factors question the viability of conventional assumptions regarding doctoral education.
Friday, 4 January, 7:00–8:15 p.m., Back Bay D, Sheraton
Presiding: Peter S. Donaldson, Massachusetts Inst. of Tech.
Global Shakespeares (globalshakespeares.org/) is a participatory multicentric project providing free online access to performances of Shakespeare from many parts of the world. The session features presentations and free lab tours of the MIT HyperStudio.
Presiding: Ryan Cordell, Northeastern Univ.; Katherine Singer, Mount Holyoke Coll.
Speakers: Gert Buelens, Ghent Univ.; Sheila T. Cavanagh, Emory Univ.; Malcolm Alan Compitello, Univ. of Arizona; Gabriel Hankins, Univ. of Virginia; Alexander C. Y. Huang, George Washington Univ.; Kevin Quarmby, Emory Univ.; Lynn Ramey, Vanderbilt Univ.; Matthew Schultz, Vassar Coll.
This digital roundtable aims to give insight into challenges and opportunities for new digital humanists. Instead of presenting polished projects, panelists will share their experiences as developing DH practitioners working through research and pedagogical obstacles. Each participant will present lightning talks and then discuss the projects in more detail at individual tables.
Saturday, 5 January, 8:30–9:45 a.m., Public Garden, Sheraton
Presiding: Ana-Maria Medina, Metropolitan State Coll. of Denver
Speakers: Lois Bacon, EBSCO; Marshall J. Brown, Univ. of Washington, Seattle; Stuart Alexander Day, Univ. of Kansas; Judy Luther, Informed Strategies; Dana D. Nelson, Vanderbilt Univ.; Joseph Paul Tabbi, Univ. of Illinois, Chicago; Bonnie Wheeler, Southern Methodist Univ.
Changes are happening to the scholarly journal, a fundamental institution of our professional life. New modes of communication open promising possibilities, even as financial challenges to print media and education make this time difficult. A panel of editors, publishers, and librarians will address these topics, carrying forward a discussion begun at the 2012 Delegate Assembly meeting.
Speakers: Evelyn Baldwin, Univ. of Arkansas, Fayetteville; Mikhail Gershovich, Baruch Coll., City Univ. of New York; Janice McCoy, Univ. of Virginia; Ilknur Oded, Defense Lang. Inst.; Amanda Phillips, Univ. of California, Santa Barbara; Anastasia Salter, Univ. of Baltimore; Elizabeth Swanstrom, Florida Atlantic Univ.
This electronic roundtable presents games not only as objects of study but also as methods for innovative pedagogy. Scholars will present on their use of board games, video games, authoring tools, and more for language acquisition, peer-to-peer relationship building, and exploring social justice. This hands-on, show-and-tell session highlights assignments attendees can implement.
Presiding: Claudia Cabello-Hutt, Univ. of North Carolina, Greensboro; Marcy Ellen Schwartz, Rutgers Univ., New Brunswick
Speakers: Daniel Balderston, Univ. of Pittsburgh; Maria Laura Bocaz, Univ. of Mary Washington; Claudia Cabello-Hutt; Alejandro Herrero-Olaizola, Univ. of Michigan, Ann Arbor; Veronica A. Salles-Reese, Georgetown Univ.; Marcy Ellen Schwartz; Vicky Unruh, Univ. of Kansas
This roundtable will explore renewed interest in Latin American archives—both traditional and digital—and the intellectual, political, and social implications for our research and teaching. Presenters will address how new technologies (digitalized collections, hypertext manuscripts, etc.) facilitate access to research and offer strategies for introducing students to a variety of materials.
Speakers: Sarah J. Arroyo, California State Univ., Long Beach; R. Scot Barnett, Clemson Univ.; Ron C. Brooks, Oklahoma State Univ., Stillwater; Geoffrey V. Carter, Saginaw Valley State Univ.; Anthony Collamati, Clemson Univ.; Jason Helms, Univ. of Kentucky; Alexandra Hidalgo, Purdue Univ., West Lafayette; Robert Leston, New York City Coll. of Tech., City Univ. of New York
This roundtable will present separate, yet unified, digital writings on laptops. Instead of making a diachronic set of presentations, we will make available a synchronic set, in an art e-gallery format, arranged separately on tables as conceptual art installations. The purpose is to demonstrate how digital technologies can reshape our views of presentations and of what is now called writings.
Saturday, 5 January, 1:45–3:00 p.m., Back Bay D, Sheraton
Presiding: Paul Fyfe, Florida State Univ.; Robert H. Kieft, Occidental Coll.
Speakers: Tanya E. Clement, Univ. of Texas, Austin; Rachel Donahue, Univ. of Maryland, College Park; Kari M. Kraus, Univ. of Maryland, College Park; John Merritt Unsworth, Brandeis Univ.; John A. Walsh, Indiana Univ., Bloomington
This roundtable extends current conversations about reforming graduate training to a burgeoning field of disciplinary crossover and professionalization. Participants will introduce innovative training programs and collaborative projects at the intersections of modern language departments, digital humanities, and library schools or iSchools.
Saturday, 5 January, 1:45–3:00 p.m., Liberty A, Sheraton
Presiding: Elizabeth M. Schwartz, San Joaquin Delta Coll., CA
"Peer Review 2.0: Using Digital Technologies to Transform Student Critiques," Elizabeth Harris McCormick, LaGuardia Community Coll., City Univ. of New York; Lykourgos Vasileiou, LaGuardia Community Coll., City Univ. of New York
"How I Met Your Argument: Teaching through Television," Lanta Davis, Baylor Univ.
"Writing Wikipedia as Postmodern Research Assignment," Matthew Parfitt, Boston Univ.
"Weaning Isn’t Everything: Beyond Postformalism in Composition," Miles McCrimmon, J. Sargeant Reynolds Community Coll., VA
Speakers: David Kim, Univ. of California, Los Angeles; Jennifer Sano-Franchini, Michigan State Univ.; Lee Skallerup Bessette, Morehead State Univ.
Responding: Tara McPherson, Univ. of Southern California
This roundtable addresses how applications and interfaces encode specific cultural assumptions about race and preclude certain groups of people from participating in the digital humanities. Participants present specific digital humanities projects that illustrate the impact of race on access to the programming, cultural, and funding structures in the digital humanities.
Saturday, 5 January, 3:30–4:45 p.m., The Fens, Sheraton
Presiding: Korey Jackson, Univ. of Michigan, Ann Arbor
Speakers: Matt Burton, Univ. of Michigan, Ann Arbor; Korey Jackson; Spencer Keralis, Univ. of North Texas; Jason C. Rhody, National Endowment for the Humanities; Lisa Marie Rhody, Univ. of Maryland, College Park; Michael Ullyot, Univ. of Calgary
This roundtable seeks to query precisely what data can be and do in a humanities context. Charting the migration from individual project to scalable data set, we explore “big data” not simply as a matter of size or number but as a process of granting researchers and educators access to shared information resources.
Presiding: Catherine Elizabeth Ingrassia, Virginia Commonwealth Univ.
Speakers: Joshua Eckhardt, Virginia Commonwealth Univ.; Molly Hardy, Saint Bonaventure Univ.; Laura C. Mandell, Texas A&M Univ., College Station; James Raven, Univ. of Essex
Consistent with the theme of open access, this roundtable explores limitations of proprietary digital archives and emergent alternatives. It will provide an interactive, engaged demonstration of 18thConnect; a historian’s perspective; discussion of British Virginia; and scholarly digital editions of seventeenth-century documents.
Speakers: Amanda L. French, George Mason Univ.; George Williams, Univ. of South Carolina, Spartanburg
This "master class" will focus on integrating two digital tools into the classroom to facilitate student-generated projects: Omeka, for the creation of archives and exhibits, and WordPress, for the creation of blogs and Web sites. We will discuss what kinds of assignments work with each tool, how to get started, and how to evaluate assignments. Bring a laptop (not a tablet) for hands-on work.
Sunday, 6 January, 8:30–9:45 a.m., Beacon A, Sheraton
Presiding: Alexander Reid, Univ. at Buffalo, State Univ. of New York
Speakers: Heather Duncan, Univ. at Buffalo, State Univ. of New York; Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Eileen Joy, Southern Illinois Univ., Edwardsville; Richard E. Miller, Rutgers Univ., New Brunswick; Daniel Schweitzer, Univ. at Buffalo, State Univ. of New York
Responding: Alexander Reid
As our profession seeks to understand electronic publishing, the emergence of middle-state publishing (e.g., blogs, Twitter) adds another layer of complexity to the issue. The roundtable participants will discuss their use of social media for scholarship and how middle-state publishing alters scholarly work and the ethical and professional concerns that arise.
Presiding: Yohei Igarashi, Colgate Univ.; Lauren A. Neefe, Stony Brook Univ., State Univ. of New York
Speakers: Miranda Jane Burgess, Univ. of British Columbia; Mary Helen Dupree, Georgetown Univ.; Kevis Goodman, Univ. of California, Berkeley; Yohei Igarashi; Celeste G. Langan, Univ. of California, Berkeley; Maureen Noelle McLane, New York Univ.; Tom Mole, McGill Univ.
A roundtable of scholars discusses and defines “Romantic media studies,” one of the most vibrant approaches to Romantic literature today. Spanning British, German, and transatlantic Romanticisms, the exchange considers Romantic-era media while reflecting on methods of reading for media, mediations, and networks as well as on the relation between Romantic criticism and the digital humanities.
Speakers: Katherine E. Gossett, Iowa State Univ.; Erik Hanson, Loyola Univ., Chicago; Matthew Jockers, Univ. of Nebraska, Lincoln; Steven E. Jones, Loyola Univ., Chicago; Bethany Nowviskie, Univ. of Virginia; Sarah Storti, Univ. of Virginia
This roundtable explores the urgent necessity of reforming graduate training in the humanities, particularly in the light of the opportunities afforded by digital platforms, collaborative work, and an expanded mission for graduates. Presenters include graduate students and faculty mentors who are creating the institutional and disciplinary conditions for renovated graduate curricula to succeed.
Sunday, 6 January, 1:45–3:00 p.m., Liberty A, Sheraton
Presiding: Mark Sample, George Mason Univ.
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Kathleen Fitzpatrick, MLA; Frank Kelleter, Univ. of Göttingen; Kirstyn Leuner, Univ. of Colorado, Boulder; Jason Mittell, Middlebury Coll.; Ted Underwood, Univ. of Illinois, Urbana
This roundtable considers the value and challenges of serial scholarship, that is, research published in serialized form online through a blog, forum, or other public venue. Each of the participants will give a lightning talk about his or her stance toward serial scholarship, while the bulk of the session time will be reserved for open discussion.
One cannot help but observe the predominance of cupcakes in modern America. Why the cupcake, and why now, at this particular historical moment?
What the fuck is up with all the cupcakes?
Within five minutes of my home there are two bakeries specializing in cupcakes. Two bakeries two hundred yards from each other. They sell cupcakes, and that’s about it. Cupcakes.
Go to a kid’s birthday party and if you survive the bowling or the bouncy castle or the laser tag with the mewling mess of Other People’s Children shouting and screaming, you and your kid will be rewarded with a cupcake. No cake, maybe not even any candles. Cupcakes, that’s it.
Theoretically they come frosted or plain, but plain is such an outright disappointment to everyone, it’s almost embarrassing, so frosted it is. Topped with swirling piles of sugar and fat, the cupcakes come bearing equally saccharine names like Red Velvet Elvis and Cloud 9 and, no shitting you, Blueberry Bikini Buster.
My friends, my very smart friends in academia who study the latest trends in culture and technology, I have a question. You can talk about the spatial turn and the computational turn all you want, but can someone fucking explain the cupcake turn to me?
I have my own theory, and it goes like this: cupcakes match—and attempt to assuage—our cultural anxieties of the moment.
Cupcakes are models of…
It’s not a whole cake. It’s a miniature cake. A cake in a fucking cup. A cupcake is a model of modesty. And it’s the best kind of modesty, because it paradoxically suggests extravagance. Cupcakes are rich. And expensive. You could buy two dozen Twinkies for the price of a single caramel apple spice gourmet cupcake.
By the very nature of their production, cupcakes are made in multiples. A 3×3 tray of 9 cupcakes or 4×4 tray of 16 cupcakes, it doesn’t matter. Cupcakes are serial cakes. Mass produced but conveying a sense of homestyle goodness. Cupcakes are the perfect homeopathic antidote for the industrially-produced food we mostly consume. Fordism never tasted so sickly sweet.
On the surface, gourmet cupcakes are artisanal desserts. For all their seriality, cupcakes still contain minute variations in flavor and toppings. Yet underneath, the base model remains the same. Cupcakes embody the postmodern ideal of the manufactured good that has been injected with artificial difference, in order to conjure a sense of individuality. Cupcakes are indie desserts. And like hipsters, cupcakes are pretty much all the same. Cupcake sprinkles and hipster scarves serve the same purpose, turning the plainly ordinary into the veiled ordinary.
Ontologically speaking, just what the hell are cupcakes anyway? A cupcake’s not really a cake. A distant cousin to the muffin, maybe. Is it a pastry for the 21st century United States, a kind of American croissant, full of gooey American exceptionalism? The cupcake itself doesn’t even know what it is. It’s a hybrid form, a Frankencaken. But in a culture frightened by change, blurred borders, and boundary crossings, the cupcake makes all those scary things palatable. As long as it comes in a little accordion-pleated paper cup.
Austerity, seriality, artistry, hybridity, that’s what cupcakes are all about. The perfect food for our post-industrial, indie vibe Great Recession. Enjoy them while they last.
Crimson Velveteen photograph courtesy of Flickr user Gina Guillotine / Creative Commons Licensed
(This is the text of my five minute position statement on the role of computational literacy in computers and writing. I delivered this statement during a “town hall” meeting at the annual Computers and Writing Conference, hosted at North Carolina State University on May 19, 2012.)
I want to briefly run through five basic statements about computational literacy. These are literally 5 statements in BASIC, a programming language developed at Dartmouth in the 1960s. As some of you might know, BASIC is an acronym for Beginner’s All-Purpose Symbolic Instruction Code, and the language was designed in order to help all undergraduate students at Dartmouth—not just science and engineering students—use the college’s time-sharing computer system.
Each BASIC statement I present here is a fully functioning 1-line program. I want to use each as a kind of thesis—or a provocation of a thesis—about the role of computational literacy in computers and writing, and in the humanities more generally.
10 PRINT 2+3
I’m beginning with this statement because it’s a highly legible program that nonetheless highlights the mathematical, procedural nature of code. But this program is also a piece of history: it’s the first line of code in the user manual of the first commercially available version of BASIC, developed for the first commercially available home computer, the Altair 8800. The year was 1975 and this BASIC was developed by a young Bill Gates and Paul Allen. And of course, their BASIC would go on to be the foundation of Microsoft. It’s worth noting that although Microsoft BASIC was the official BASIC of the Altair 8800 (and many home computers to follow), an alternative version, called Tiny BASIC, was developed by a group of programmers in San Francisco. The 1976 release of Tiny BASIC included a “copyleft” software license, a kind of predecessor to contemporary open source software licenses. Copyleft emphasized sharing, an idea at the heart of the original Dartmouth BASIC.
10 PRINT “HELLO WORLD”
If BASIC itself was a program that invited collaboration, then this—customarily one of the first programs a beginner learns to write—highlights the way software looks outward. Hello, world. Computer code is writing in public, a social text. Or, what Jerry McGann calls a “social private text.” As McGann explains, “Texts are produced and reproduced under specific social and institutional conditions, and hence…every text, including those that may appear to be purely private, is a social text.”[1. McGann, Jerome. The Textual Condition. Princeton, NJ: Princeton University Press, 1991, p. 21.]
10 PRINT “GO TO STATEMENT CONSIDERED HARMFUL”: GOTO 10
My next program is a bit of an insider’s joke. It’s a reference to a famous 1968 diatribe by Edsger Dijkstra called “Go To Statement Considered Harmful.” Dijkstra argues against using the goto command, which leads to what critics call spaghetti code. I’m not interested in that specific debate, so much as I like how this famous injunction implies an evaluative audience, a set of norms, and even an aesthetic priority. Programming is a set of practices, with its own history and tensions. Any serious consideration of code—any serious consideration of computers—in the humanities must reckon with these social elements of code.
10 REM PRINT “GOODBYE CRUEL WORLD”
The late German media theorist Friedrich Kittler has argued that, as Alexander Galloway put it, “code is the only language that does what it says.”[2. Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2006, p. 6.] Yes, code does what it says. But it also says things it does not do. Like this one-line program, which begins with REM, short for remark, meaning it is a comment left by a programmer, which the computer will not execute. Comments in code exemplify what Mark Marino has called the “extra-functional significance” of code, meaning-making that goes beyond the purely utilitarian commands in the code.[3. Marino, Mark C. “Critical Code Studies.” Electronic Book Review (2006). <http://www.electronicbookreview.com/thread/electropoetics/codology>.]
Without a doubt, there is much even non-programmers can learn not by studying what code does, but by studying what it says, and what it evokes.
10 PRINT CHR$(205.5+RND(1));:GOTO 10
Finally, here’s a program that highlights exactly how illegible code can be. Very few people could look at this program for the Commodore 64 and figure out what it does. This example suggests there’s a limit to the usefulness of the concept of literacy when talking about code. And yet, when we run the program, it’s revealed to be quite simple, though endlessly changing, as it creates a random maze across the screen.
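For readers without a Commodore 64 at hand, a rough Python port of the effect (substituting Unicode diagonals for the two PETSCII diagonal characters, and printing a finite block rather than looping forever) looks something like this:

import random

def ten_print(columns=40, rows=20):
    # Each cell is one of two diagonal strokes, chosen at random,
    # which is all it takes to produce the maze pattern.
    for _ in range(rows):
        print("".join(random.choice("╱╲") for _ in range(columns)))

ten_print()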
So I’ll end with a caution about relying on the word literacy. It’s a word I’m deeply troubled by: loaded with historical and social baggage, it’s often misused as a gatekeeping concept, an either/or state in which one is either literate or illiterate.
In my own teaching and research I’ve replaced my use of literacy with the idea of competency. I’m influenced here by the way teachers of a foreign language want their students to use language when they study abroad. They don’t use terms like literacy or fluency, they talk about competency. Because the thing with competency is, it’s highly contextualized, situated, and fluid. Competency means knowing the things that are required in order to do the other things you need to do. It’s not the same for everyone, and it varies by place, time, and circumstance.
Translating this experience to computers and writing, competency means reckoning with computation at the level appropriate for what you want to get out of it—or put into it.
The hornbook was not a book but a small wooden board with a handle. A sheet of vellum inscribed with a lesson—typically the alphabet and the Lord’s Prayer—was attached to one side and covered by a thin, transparent layer of horn or mica. Historians don’t know much about hornbooks, other than that they were important tools for primary education in the 16th and 17th centuries in England, Germany, Holland, and, by way of the Puritans, the American colonies.
Shakespeare mentions a “Hornebook” in Love’s Labor’s Lost, and it’s not unlikely that Shakespeare himself first learned his letters on a hornbook. In 1916 the book antiquarian George Arthur Plimpton, whose knowledge of the hornbook has never been surpassed, pointed to a woodcut in Gregor Reisch’s magisterial Margarita Philosophica (1503) to illustrate the fundamental role of the hornbook in the early modern curriculum:
A boy stands outside the Tower of Knowledge (each level representing progressively heightened domains of learning, from the grammar of Donatus and Priscian on the lower levels, to the science and philosophy of Cicero, Aristotle, Seneca and Pliny on the upper levels). To enter the Tower of Knowledge the boy need only accept the hornbook from his teacher and master it. As Plimpton puts it, the hornbook was “the key to unlock the treasures of learning” (4).
But the hornbook wasn’t simply a metaphorical key. It answered a very real concern of material culture at the time. Parchment, and later, paper, was simply too costly to be put in the hands of young learners. With its vellum primer protected like a laminated lesson, the sturdy hornbook was a hardware solution to a social problem. On some hornbooks the vellum could slide out from underneath the translucent horn and be replaced by other lessons. The hornbook in this way was a kind of 17th century iPad. Much more durable than paper, hornbooks were apparently passed down between students, sibling to sibling, generation to generation.
What’s surprising about hornbooks, given their durability and symbolic as well as literal value in early modern education, is how few have survived into the 20th and 21st centuries. Plimpton lamented in 1916 that “the British Museum has only three, and the Bodleian Library at Oxford one” (5). In fact, the most exhaustive collection of hornbooks is probably Plimpton’s own collection, amassed over years and now housed at Columbia University.
What applications and documents might be included on a “Digital Humanities Creator Stick,” a collection of tools that could fit on a USB flash drive, allowing students, teachers, researchers, and anyone else to work on digital humanities projects? An individual would plug the stick into any computer and instantly have access to what she needs to get work done. Unplug the stick and she takes those tools with her.
In my mind I’ve been comparing George’s digital humanities creator stick (or DH jump drive, as Roger Whitson described it) to a hornbook. Like the hornbook in Reisch’s Tower of Knowledge, it provides entry into a world that might otherwise be closed to the newcomer. Even the paddle shape of the hornbook resembles a USB flash drive.
Likening a digital humanities jump drive to a 17th-century hornbook requires a certain amount of historical and technological blindness, but I’d like to entertain the comparison briefly, in order to find out if the fate of the hornbook gives us any insight into a similar kind of tool for the digital humanities. The hornbook arose during a particular historical moment to address a particular social problem. I called it a piece of hardware earlier, but really, it was a platform, in much the same way the Nintendo Wii is a platform. The wooden board itself and the translucent horn overlay were the hardware, while the vellum or paper lesson was the software. As platform studies has shown, however, hardware and software alone do not comprise the sum total of any technological platform. Use and social context are as much a part of the platform as the physical object itself.
And yet we don’t actually know what students did with their hornbooks or how they used them. We have glimpses—a few illustrations, some mentions in literature, a handful of advertisements. But the full scope of how teachers and students had meaningful (or not so meaningful) interactions with the tool and in what environments is lost to us. We simply don’t know.
This missing social element of the hornbook makes me think of the DH creator stick. The THATCamp Piedmont session prompted a lively discussion. But I know George was initially frustrated with the direction of this conversation, which trended toward the abstract, ranging from questions about the digital divide to issues surrounding digital fluency. George had wanted—and the collaborative notes generated during the session reflect this—to focus more concretely on assembling a definite list of tools and documentation that could be put on a USB flash drive. At one point in the session (probably after I had introduced a Lego versus Ikea approach to getting started in the digital humanities), George compared assembling a digital humanities toolkit to a homeowner putting together his or her first toolbox. You know you need a hammer, a screwdriver, and a few other common tools. Before you head to the hardware store you don’t need to philosophize about the nature of home itself. A homeowner doesn’t ask, What is a home? A homeowner goes out and buys a hammer.
I appreciated the analogy and George’s efforts to ground the discussion. I’m not so sure, though, that we in the digital humanities have figured out what our “home” is, much less what our essential tools are. I am not ready to foreclose the discussion about what we ought to be doing with our tools, which surely would influence what those tools are. I am not ready, to use the hornbook as a metaphor, to affix a standard lesson underneath a protective laminate of horn.
And to be sure, I know very few digital humanists who would argue differently. I doubt that anyone who was in the THATCamp session would want to declare this or that set of tools to be the canonical tools of DH, or this or that set of practices to be the only valid approach to digital humanities work. This openness is one way a digital hornbook differs from the historical hornbook, which was clearly meant to be the first step in a rigidly prescribed way of thinking. I can think of some humanists a hornbook might appeal to in this regard, but no digital humanists.
A digital hornbook would avoid the monologic authority of a historical hornbook by not only including a variety of tools but also including a range of documentation and pedagogical material. George mentioned several times in the session that while we had compiled a great list of tools, we hadn’t thought about the kind of guides or tutorials that should be included on the DH stick. He’s right. It wasn’t until toward the end of the session that we added some guides—mostly standard, official documentation of the various tools and services on the list. And then, at last, a few more substantive, scholarly perspectives on the digital humanities found their way onto the ideal DH Creator Stick: A Companion to Digital Humanities, Hacking the Academy, and Debates in the Digital Humanities.
It’s the presence of these last two texts in particular that finally make the DH Creator Stick more than an inert catalog of portable apps. Hacking the Academy and Debates in the Digital Humanities fill a role that none of the other tools or guides on the proposed list do. They fill an absence that mirrors the unknown social life of the hornbook, for they tell us about the social life of the digital humanities. They model the digital humanities in action. And they do so by presenting a multiplicity of voices, a range of concerns, and most important to broadening the digital humanities audience, an ongoing and reiterative invitation to students, teachers, and young scholars to consider the impact of the digital on the questions that humanists ask of the world.
It’s crucial to have documents like these available for students and novice practitioners, but I would go farther. What we most need to include on a digital hornbook for newcomers to the digital humanities is precisely that which can never be distilled digitally. It’s an attitude, an ethos. Perhaps the closest we might come is dumping the entire contents of Day of DH onto the flash drive, all four years of blog posts about what people actually do during their daily work. This material might give aspiring digital humanists (not to mention humanists) a better entry point into the discipline than any set of tools or tutorials. Even then, though, the diverse principles that inspire us may not shine through. I’ve labeled my own approach deformative humanities, but there are many other ways to conceive of the spirit that motivates digital humanists. In any case, the real challenge we face is capturing and relaying this attitude. A 4GB (or 8GB or 16GB or 32GB) flash drive can hold a fantastic number of applications and documents, but it’s a capacity that may mislead us into thinking that the DH jump stick would be a powerful tool in and of itself. The prescribed lesson on a hornbook teaches us very little, apart from telling us what society deemed necessary to enter the Tower of Knowledge. The contents of a flash drive similarly teach us very little, apart from demonstrating what a group of people—we digital humanists—valued at a certain moment in the early 21st century. Centuries from now I would not want a media archaeologist bemoaning the utter lack of context surrounding a flash drive housed in Special Collections that was obliquely labeled “DH Creator Stick.” The digital humanities is people and practices, not tools and documentation. The real question is, can our tools and documentation convey this?
I’ve gone on record as saying that the digital humanities is not about building. It’s about sharing. I stand by that declaration. But I’ve also been thinking about a complementary mode of learning and research that is precisely the opposite of building things. It is destroying things.
I want to propose a theory and practice of a Deformed Humanities. A humanities born of broken, twisted things. And what is broken and twisted is also beautiful, and a bearer of knowledge. The Deformed Humanities is an origami crane—a piece of paper contorted into an object of startling insight and beauty.
I come to the Deformed Humanities (DH) by way of a most traditional route—textual scholarship. In 1999 Lisa Samuels and Jerry McGann published an essay about the power of what they call “deformance.” This is a portmanteau that combines the words performance and deform into an interpretative concept premised upon deliberately misreading a text, for example, reading a poem backwards line-by-line.
As Samuels and McGann put it, reading backwards “short circuits” our usual way of reading a text and “reinstalls the text—any text, prose or verse—as a performative event, a made thing” (Samuels & McGann 30). Reading backwards revitalizes a text, revealing its constructedness, its seams, edges, and working parts.
In many ways this idea of textual transformation as an interpretative maneuver is nothing new. Years before Samuels and McGann suggested reading backward as the paradigmatic deformance, the influential composition professor Peter Elbow suggested reading a poem backwards as a way to “breathe life into a text” (Elbow 201).
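Mechanically, this paradigmatic deformance could hardly be simpler. Here is a minimal Python sketch of it—the sonnet excerpt is just a stand-in text, not anything Samuels, McGann, or Elbow worked with:

```python
# Read a poem backwards, line-by-line: the paradigmatic deformance.
poem = """Shall I compare thee to a summer's day?
Thou art more lovely and more temperate:
Rough winds do shake the darling buds of May,
And summer's lease hath all too short a date."""

deformed = "\n".join(reversed(poem.splitlines()))
print(deformed)
```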
Still, Samuels and McGann point out that “deformative scholarship is all but forbidden, the thought of it either irresponsible or damaging to critical seriousness” (Samuels & McGann 34–35). Yet deformance has become a key methodology of the branch of digital humanities that focuses on text analysis and data-mining.
This is an argument that Stephen Ramsay makes in Reading Machines. Computers let us practice deformance quite easily, taking apart a text—say, by focusing on only the nouns in an epic poem or calculating the frequency of collocations between character names in a novel.
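In practice the code involved can be slight. Here is a rough Python sketch of a noun-only deformance—NLTK is simply my stand-in here, not a toolkit Ramsay prescribes—reducing a line of epic to its nouns:

```python
# A sketch of computational deformance: reduce a text to its nouns alone.
# Assumes NLTK with the 'punkt' and 'averaged_perceptron_tagger' data downloaded.
import nltk

def nouns_only(text):
    """Return the text stripped down to its nouns, in their original order."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(word for word, tag in tagged if tag.startswith("NN"))

print(nouns_only("Sing, O goddess, the anger of Achilles son of Peleus."))
# Output depends on the tagger's calls, but lands on something like:
# "goddess anger Achilles son Peleus"
```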
Deformance is a Hedge
But however much deformance sounds like a progressive interpretative strategy, it actually reinscribes more conventional acts of interpretation. Samuels and McGann suggest—and many digital humanists would agree—that “we are brought to a critical position in which we can imagine things about the text that we did not and perhaps could not otherwise know” (36). And this is precisely what is wrong with the idea of deformance: it always circles back to the text.
Even the word itself—deformance—seems to be a hedge. The word is much more indebted to the socially acceptable activity of performance than the stigmatized word deformity. It reminds me of a scene in Alison Bechdel’s graphic memoir Fun Home, where the adult narrator Alison comments upon her teenage self’s use of the word “horrid” in her diary. “Horrid,” Bechdel muses, “has a slightly facetious tone that strikes me as Wildean. It appears to embrace the actual horror…then at the last second nimbly sidesteps it” (Bechdel 174). In a similar fashion, deformance appears to embrace the actual deformity of a text and then at the last possible moment sidesteps it. The end result of deformance, as most critics would have it, is a sense of renewal, a sense of de-forming only to re-form.
To evoke a key figure motivating the playfulness Samuels and McGann want to bring to language, deformance takes Humpty Dumpty apart only to put Humpty Dumpty back together again.
And this is where I differ.
I don’t want to put Humpty Dumpty back together.
Let him lie there, a cracked shell oozing yolk. He is broken. And he is beautiful. The smell, the colors, the flow, the texture, the mess. All of it, it is unavailable until we break things. And let’s not soften our critical blow by calling it deformance. Name it what it is, a deformation.
In my vision of the Deformed Humanities, there is little need to go back to the original. We work—in the Stallybrass sense of the word—not to go back to the original text with a revitalized perspective, but to make an entirely new text or artifact.
The deformed work is the end, not the means to the end.
The Deformed Humanities is all around us. I’m only giving it a name. Mashups, remixes, fan fiction, they are all made by breaking things, with little regard for preserving the original whole. With its emphasis on exploring the insides of things, the Deformed Humanities shares affinities with Ian Bogost’s notion of carpentry, the practice of making philosophical and scholarly inquiries by constructing artifacts rather than writing words. In Alien Phenomenology, Or, What It’s Like to Be a Thing, Bogost describes carpentry as “making things that explain how things make their world” (93). Bogost goes on to highlight several computer programs he’s built in order to think like things—such as I am TIA, which renders the Atari VCS’s “view” of its own screen, an utterly alien landscape compared to what players of the Atari see on the screen. Where carpentry and the Deformed Humanities diverge is in the materials being used. Carpentry aspires to build from scratch, whereas the Deformed Humanities tears apart existing structures and uses the scraps.
For a long while I’ve told colleagues who puzzle over my own seemingly disparate objects of scholarly inquiry that “I study systems that break other systems.” Systems that break other systems is the thread that connects my work with electronic literature, graphic novels, videogames, code studies, and so on. Yet I had never thought about my own work as deformative until earlier this year. And it took someone else to point it out. This was my colleague Tom Scheinfeldt, the managing director of the Roy Rosenzweig Center for History and New Media. In February, Scheinfeldt gave a talk at Brown University in which he argued that the game-changing element of the digital humanities was its performative aspect.
Scheinfeldt uses Babe Ruth as an analogy. Ruth wasn’t merely the homerun king. He essentially invented homeruns as a strategy, transforming the game. As Scheinfeldt puts it, “the change Ruth made wasn’t engendered by him being able to bunt or steal more effectively than, say, Ty Cobb…it was engendered by making bunting and stealing irrelevant, by doing something completely new.”
Scheinfeldt then picks up on Ramsay’s use of “deformance” to suggest that what’s game-changing about digital technology is the way it allows us “to make and remake” texts in order “to produce meaning after meaning.”
Hacking the Accident
As an example, Scheinfeldt mentions a project of mine, which I had never thought about in terms of deformance. This was a digital project and e-book I made last fall called Hacking the Accident.
Hacking the Accident is a deformed version of Hacking the Academy, an edited collection forthcoming from the digitalculturebooks imprint of the University of Michigan Press. Hacking the Academy is a scholarly book about the disruptive potential of the digital humanities, crowdsourced in one week and edited by Dan Cohen and Tom Scheinfeldt.
Taking advantage of the generous BY-NC Creative Commons license of the book, I took the entire contents of Hacking the Academy, some thirty-something essays by leading thinkers in the digital humanities, and subjected them to the N+7 algorithm used by the Oulipo writers. This algorithm replaces every noun—every person, place, or thing—in Hacking the Academy with the person, place, or thing—mostly things—that comes seven nouns later in the dictionary.
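For the curious, the procedure is easy to sketch, though what follows is not the script I actually ran: it assumes NLTK for spotting nouns and uses a tiny alphabetized word list as a toy stand-in for the dictionary (with a real dictionary, academy lands on accident):

```python
# A rough sketch of N+7: swap each noun for the noun seven entries later
# in an alphabetized word list. Not the script I actually used; the lexicon
# below is a toy placeholder for a real dictionary's nouns.
import bisect
import nltk

lexicon = sorted(["academy", "accident", "fact", "fad", "print", "prison",
                  "question", "quicksand", "university", "uprising"])

def n_plus_7(text, n=7):
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
        if tag.startswith("NN") and word.lower() in lexicon:
            i = bisect.bisect_left(lexicon, word.lower())
            out.append(lexicon[(i + n) % len(lexicon)])  # wrap around the list
        else:
            out.append(word)
    return " ".join(out)

print(n_plus_7("Hacking the academy raises a question about print."))
```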
The results of N+7 would seem absolutely nonsensical, if not for the disruptive juxtapositions, startling evocations, and unexpected revelations that ruthless application of the algorithm draws out from the original work. Consider the opening substitution of Hacking the Academy, sustained throughout the entire book: every instance of the word academy is literally an accident.
Other strange transpositions occur. Every fact is a fad and print is a prison. Instructors are insurgents and introductions are invasions. Questions become quicksand. Universities, uprisings. Scholarly associations wither away to scholarly asthmatics. Disciplines are fractured into discontinuities. Writing, the thing that absorbs our lives in the humanities, writing, the thing that we produce and consume endlessly and desperately, writing, the thing upon which our lives of letters are founded—writing, it is mere “yacking” in Hacking the Accident.
These are merely the single word exchanges, but there are longer phrases that are just as striking. Print-based journals turn out as prison-based joyrides, for example. I love that The Chronicle of Higher Education always appears as The Church of Higher Efficiency; it’s as if the newspaper was calling out academia for what it has become—an all-consuming, totalizing quest for efficiency and productivity, instead of a space of learning and creativity.
Consider the deformed opening lines of Cohen’s and Scheinfeldt’s introduction, which quotes from their original call for papers:
Can an allegiance edit a joyride? Can a lick exist without bookmarks? Can stunts build and manage their own lecture mandrake playgrounds? Can a configuration be held without a prohibition? Can Twitter replace a scholarly sofa?
At the most obvious level, the work is a parody of academic discourse, amplifying the already jargon-heavy language of academia with even more incomprehensible language. But one level down there is a kind of Bakhtinian double-voiced discourse at work, in which the original intent is still there, but infused with meanings hostile to that intent—the print/prison transposition is a good example of this.
I’m convinced that Hacking the Accident is not merely a novelty. It’d be all too easy to dismiss the work as a gag, good for a few amusing quotes and nothing more. But that would overlook the several levels on which Hacking the Accident acts as a kind of intervention into academia. A deformation of the humanities. A deformation that doesn’t strive to put the humanities back together and reestablish the integrity of a text, but rather, a deformation that is a departure, leading us somewhere new entirely.
The Deformed Humanities—though most may not call it that—will prove to be the most vibrant and generative of all the many strands of the humanities. It is a legitimate mode of scholarship, a legitimate mode of doing and knowing. Precisely because it relies on undoing and unknowing.
A column in the Chronicle of Higher Education by former Idaho State University provost and official Stanley Fish biographer Gary Olson has been making waves this weekend. Entitled “How Not to Reform Humanities Scholarship,” Olson’s column is really about scholarly publishing, not scholarship itself.
Or maybe not. I don’t know. Olson conflates so many issues and misrepresents so many points of view that it’s difficult to tease out a single coherent argument, other than a misplaced resistance to technological and institutional change. Nonetheless, I want to call attention to a troubling generalization that Olson is certainly not the first to make. Criticizing the call (by the MLA among others) to move away from single-authored print monographs, Olson writes that a group of anonymous deans and department chairs have expressed concern to him that “graduate students and young faculty members—all members of the fast-paced digital world—are losing their capacity to produce long, in-depth, sustained projects (such as monographs).”
Here is the greatest conflation in Olson’s piece: mistaking form for content. As if “long, in-depth” projects are only possible in monograph form. And the corollary assumption: that “long, in-depth” peer-reviewed monographs are automatically worthwhile.
Olson goes on to summarize the least interesting and most subjective aspect of Maryanne Wolf’s otherwise fascinating study of the science of reading, Proust and the Squid:
…one disadvantage of the digital age is that humans are rapidly losing their capacity for deep concentration—the type of cognitive absorption essential to close, meditative reading and to sustained, richly complex writing. That loss is especially deleterious to humanities scholars, whose entire occupation depends on that very level of cognitive concentration that now is so endangered.
Here again is that conflation of form and content. According to Olson, books encourage deep concentration for both their writers and readers, while digital media foster the opposite of deep concentration, what Nicholas Carr would call shallow concentration. I don’t need to spend time refuting this argument. See Matthew Battles’ excellent Reading Isn’t Just a Monkish Pursuit. Or read my GMU colleague Dan Cohen’s recent post on Reading and Believing and Alan Jacobs’s post on Making Reading Hard. Cohen and Jacobs both use Daniel Kahneman’s Thinking, Fast and Slow, which offers a considerably more nuanced take on reading, distraction, and understanding than Olson does.
But Olson is mostly talking about writing, not reading. Writing a book, in Olson’s view, is all about “deep concentration” and “richly complex writing.” But why should length have anything to do with concentration and complexity? There’s many a book-length monograph (i.e. a book) that is too long, too repetitive, and frankly, too complex—which is a euphemism for obscure and convoluted.
And why, too, should “cognitive concentration” correspond to duration? Recall the now-ancient Steven Wright joke: “There’s a fine line between fishing and just standing on the shore like an idiot.” The act of writing is mostly standing on the shore like an idiot. And Olson is asking us to stand there even longer?
I am not saying that I don’t value concentration. In fact, I value concentration and difficult thinking above almost all else. But I want to suggest here—as I have elsewhere—that we stop idealizing the act of concentration. And to go further, I want to uncouple concentration from time. Whether we’re writing or reading, substantive concentration can come in small or large doses.
There’s a cultural prejudice against tweeting and blogging in the humanities, something Dan Cohen is writing about in his next book (posted in draft form, serially, on his blog). The bias against blogs is often attributed to issues of peer review and legitimacy, but as Kathleen Fitzpatrick observed in an address at the MLA (and posted on her blog), much of the bias is due to the length of a typical blog post—which is much shorter than a conventional journal article. Simply stated, time is used as a measure of worth. When you’re writing a blog post, there’s less time standing on the shore like an idiot. And for people like Olson, that’s a bad thing.
I want to build on something Fitzpatrick said in her address. She argues that a blog “provides an arena in which scholars can work through ideas in an ongoing process of engagement with their peers.” It’s that concept of ongoing process that is particularly important to me. Olson thinks that nothing fosters deep concentration like writing a book. But writing a scholarly blog is an ongoing process, a series of posts, each one able to build on the previous post’s ideas and comments. Even if the posts are punctuated by months of silence, they can still be cumulative. Writing on a blog—or building other digital projects for that matter—can easily accommodate and even facilitate deep concentration. Let’s call it serial concentration: intense moments of speculation, inquiry, and explanation distributed over a period of time. This kind of serial concentration is particularly powerful because it happens in public. We are not huddled over a manuscript in private, waiting until the gatekeepers have approved our ideas before we share them, in a limited, almost circumspect way. We share our ideas before they’re ready. Because hand-in-hand with serial concentration comes serial revision. We write in public because we are willing to rewrite in public.
I can’t imagine a more rigorous way of working.
(Digital Typography Woodcut courtesy of Donald Knuth, provenance unknown)
These are my notes for “Building and Sharing (When You’re Supposed to be Teaching),” a lightning talk I gave on Tuesday as part of CUNY’s Digital Humanities Initiative. Shannon Mattern (The New School) and I were on a panel called “DH in the Classroom.” Shannon’s enormously inspirational lightning talk was titled Beyond the Seminar Paper, and mine too focused on alternative assignments for students. Our two talks were followed by a long Q&A session, in which I probably learned more from the audience than they did from me. I’ll intersperse my notes with my slides, though you might also want to view the full Prezi (embedded at the end of this post).
I’d like to thank Matt for inviting me to talk tonight, and to all of you too, for coming out this gorgeous evening. I’m extremely flattered to be here—especially since I don’t think I have any earth-shattering thoughts about the digital humanities in the classroom. There are dozens and dozens of people who could be up here speaking, and I know some of them are here in this room right now.
A lot of what I do in my classroom doesn’t necessarily count as “digital humanities”—I certainly don’t frame it that way to my students. If anything, I simply say that we’ll be doing things in our classes they’ve never done before in college, let alone a literature class. And literature is mostly what I teach. Granted I teach literature classes that lend themselves to digital work—electronic literature classes, postmodern fiction, and media studies classes that likewise focus on close readings of texts, such as my videogame studies classes. But even in these classes, I think my students are surprised by how much our work focuses on building and sharing.
If I change the point of view of the title of my talk to my students’ perspective, it might look something like this:
Building and sharing when we’re supposed to be writing. And at the end of this sentence comes one of the greatest unspoken assumptions both students and faculty make regarding this writing:
It’s writing for an audience of one—usually me, the instructor, us, the instructors. This is what counts as an audience to my students. They rarely think of themselves as writing for an audience beyond me. They rarely think of their own classmates as an audience. They often don’t even think of themselves as their own audience. They write for us, their professors and instructors.
So the “sharing” part of my title comes from my ongoing effort—not always successful—to extend my students’ sense of audience. I’ll give some examples of this sharing in a few minutes, but before that I want to address the first part of my title: the idea of building.
Those of you who know me are probably surprised that I’m emphasizing “building” as a way to integrate the digital humanities in the classroom. One of the most popular things I’ve written in the past year is a blog post decrying the hack versus yack split that routinely crops up in debates about the definition of digital humanities.
In this post, I argued that the various divides in the digital humanities, which often arise from institutional contexts and professional demands generally beyond our control—these divides are a distracting sideshow to the true power of the digital humanities, which has nothing to do with production of either tools or research. The heart of the digital humanities is not the production of knowledge; it’s the reproduction of knowledge.
The promise of the digital is not in the way it allows us to ask new questions because of digital tools or because of new methodologies made possible by those tools. The promise is in the way the digital reshapes the representation, sharing, and discussion of knowledge.
And I truly believe that this transformative power of the digital humanities belongs in the classroom. Classrooms were made for sharing. So, where does the “building” part of my pedagogy come up? How can I suddenly turn around and claim that building is important when I just said otherwise, in a blog post that has shown up on the syllabus of at least three different undergraduate introduction to the digital humanities courses?
Well, let me explain what I mean by building. Building, for me, means to work. Let me explain that.
In an issue of the PMLA from 2007 there’s a fantastic series of short essays by Ed Folsom, Jerry McGann, Peter Stallybrass, Kate Hayles, and others about the role of databases in literary studies. Folsom’s essay leads, and in it he describes what he calls the “epic transformation” of the online Walt Whitman Archive, which Folsom co-edits, along with Ken Price, into a database (1571). All of the other essays in some way respond to either the particulars of the digital Walt Whitman Archive, or more generally, to the impact of archival databases on research and knowledge production. It’s a great batch of essays, pre-dating by several years the prevalence of the term “digital humanities”—but that’s not why I’m mentioning these essays right now.
I’m mentioning them because Peter Stallybrass’s essay has the provocative title “Against Thinking,” which helps to explain what I mean by working, which Stallybrass explicitly argues stands opposed to thinking.
Thinking, according to Stallybrass, is hard and painful. It’s boring, repetitious, and—I love this—it’s indolent (1583).
On the other hand, working is easy, exciting, a process of discovery. It’s challenging.
This distinction between thinking and working informs Stallybrass’s undergraduate pedagogy, the way he trains his students to work with archival materials and the STC. In Stallybrass’s mind, students—and, in fact, all of us—need to do less thinking and more working. “When you’re thinking,” Stallybrass writes, “you’re usually staring at a blank sheet of paper or a blank screen, hoping that something will emerge from your head and magically fill that space. Even if something ‘comes to you,’ there’s no reason to believe that it is of interest, however painful the process has been” (1584).
Stallybrass goes on to say that “the cure for the disease called thinking is work” (1584). In Stallybrass’s field of Renaissance and Early Modern literature, much of that work has to do with textual studies, discovering variants, paying attention to the material form of the book, and so on. In my own teaching, I’ve attempted to replace thinking with building—sometimes with words, sometimes without. And I want to run through a few examples right now.
In general, these examples fall into two categories:
[And here my planned comments dissolved into a brief tour of some of the ways I incorporate building and sharing into my classes. The collaborative construction category is more self-evident: group projects aimed at building exhibits or formulating knowledge, such as my Omeka-based Portal Exhibit and the current cross-campus Renetworking of House of Leaves. I described my creative analysis category as an antidote to critical thinking—a hazardous term with an all but meaningless definition. In this category I included mapping projects and game design projects that were alternatives to traditional papers. I concluded my lightning talk by noting that students who pursued these creative analysis projects spent far more time on their work than those who wrote papers, and while their end results were often modest, these students were far more engaged in their work than students who wrote papers.]
Folsom, Ed. “Database as Genre: The Epic Transformation of Archives.” PMLA 122.5 (2007): 1571–1579. Print.
This is a comprehensive list of digital humanities sessions scheduled for the 2012 Modern Language Association Conference in Seattle, Washington. The 2012 list stands at 58 sessions, up from 44 last year (and 27 the year before). If the trend continues, within the decade it will no longer make sense to compile this list; it’ll be easier to list the sessions that don’t in some way relate to the influence and impact of digital materials and tools upon language, literary, textual, and media studies.
It’s possible I may have missed a session or two; if so, let me know in the comments and I’ll add the panel to the list. Note that there’s also a pre-convention Getting Started in the Digital Humanities with DHCommons workshop; but because this workshop is application-only, it does not appear in the official MLA program.
You may also want to follow the MLA Tweetup Twitter account for updates on various spontaneous and planned meet-ups in Seattle.
[UPDATE 13 January 2012: I’ve begun adding links to presentations and papers if they’ve been posted online.]
Thursday, January 5
Pre-Convention Digital Humanities Project Mixer
1-4 pm in Convention Center, rooms 3A & 3B
Projects looking for collaborators and collaborators looking for projects, come mix and mingle in this informal project poster session that offers a face-to-face DHCommons experience. Representatives from projects looking for collaborators or just wanting to get the word out will share information and materials about their projects. This forum will also offer great opportunities for one-on-one conversations about pursuing projects in the digital humanities. If you would like to share your project, please sign up here, but otherwise there is no need to register.
This event is open to all MLA participants.
1. Evaluating Digital Work for Tenure and Promotion: A Workshop for Evaluators and Candidates
8:30–11:30 a.m., Willow A, Sheraton
Presiding: Alison Byerly, Middlebury Coll.; Katherine A. Rowe, Bryn Mawr Coll.; Susan Schreibman, Trinity Coll. Dublin
The workshop will provide materials and facilitated discussion about evaluating work in digital media (e.g., scholarly editions, databases, digital mapping projects, born-digital creative or scholarly work). Designed for both creators of digital materials (candidates for tenure and promotion) and administrators or colleagues who evaluate those materials, the workshop will propose strategies for documenting, presenting, and evaluating such work. Preregistration required.
9. Large Digital Libraries: Beyond Google Books
12:00 noon–1:15 p.m., 611, WSCC
Presiding: Michael Hancher Univ. of Minnesota, Twin Cities
Speakers: Tanya E. Clement, Univ. of Maryland, College Park; Amanda L. French, George Mason Univ.; George Oates, Open Library; Glenn Roe, Univ. of Chicago; Andrew M. Stauffer, Univ. of Virginia; Jeremy York, HathiTrust Digital Library
Aside from Google Books, the two principal repositories for digitized books are Open Library and HathiTrust Digital Library; Digital Public Library of America is now in its planning stage. What are the merits and prospects of these three projects? How can they be improved? What role should scholars play in their improvement? These questions will be addressed by participants in each project and by others experienced in the digital humanities.
12. Transmedia Stories and Literary Games
12:00 noon–1:15 p.m., 615, WSCC
“Hundred Thousand Billion Fingers: Oulipian Games and Serial Players,” Patrick LeMieux Duke Univ.
“Make Love, Not Warcraft: Virtual Worlds and Utopia,” Stephanie Boluk Vassar Coll.
“Oscillation: Transmedia Storytelling and Narrative Theory by Design,” Patrick Jagoda Univ. of Chicago
Presiding: Sean Scanlan, New York City Coll. of Tech., City Univ. of New York
“Making Online Peer Review Interactive: Sticky Notes and Highlighters,” Cheryl E. Ball, Illinois State Univ.
“The Bearable Light of Openness: Renovating Obsolete Peer-Review Bottlenecks,” Aaron J. Barlow, New York City Coll. of Tech., City Univ. of New York
“The Law Review Approach: What the Humanities Can Learn,” Allen Mendenhall, Auburn Univ., Auburn
41. Social Networks, Jewish Identity, and New Media
1:45–3:00 p.m. University, Sheraton
Presiding: Jonathan S. Skolnik Univ. of Massachusetts, Amherst
“Social Networking, Jewish Identity, and New Jewish Ritual: Tattooed Jews on Facebook,” Erika Meitner Virginia Polytechnic Inst. and State Univ.
“Electronic Apikoros: Searching for the Nineteenth-Century Origins of Contemporary Satire in the Jewish Blogosphere,” Ashley Aronsen Passmore Texas A&M Univ., College Station
“From MySpace to MyJewishSpace: The Role of the Internet in the Self-Definition of New Jews in Austria and Germany,” Andrea Reiter Univ. of Southampton
47. Old Books and New Tools
1:45–3:00 p.m. 606, WSCC
Presiding: Sarah Werner Folger Shakespeare Library
Speakers: Katherine D. Harris, San José State Univ.; Jeffrey Knight, Univ. of Washington, Seattle; Matt Thomas, Univ. of Iowa; Whitney Trettien, Duke Univ.; Meg Worley, Palo Alto, CA
This roundtable will consider how the categories of old books and new tools might illuminate each other. Speakers will provide individual reflections on their experiences with old books and new tools before opening up the conversation to the theoretical and practical concerns driving the use and interactions of the two.
Presiding: Kathleen Woodward Univ. of Washington, Seattle
“Emergent Projects, Processes, and Stories,” Sidonie Ann Smith Univ. of Michigan, Ann Arbor
“Learning Collaboratories, Now and in the Future,” Curtis Wong Microsoft Research
“It’s the Data, Stupid!,” Ed Lazowska Univ. of Washington, Seattle
“How to Crowdsource Thinking,” Cathy N. Davidson Duke Univ.
Scholars from the human, natural, and computational sciences will address the future of higher education in a digital age. They will identify problems in higher education today and provide recommendations for what is needed as we go forward. What pressure does this information age exert on the current ways we think about higher education? How does a conversation across the computational sciences and the humanities address, ease, or exacerbate that pressure?
87. Digital Literary Studies: When Will It End?
3:30–4:45 p.m. 304, WSCC
Presiding: David A. Golumbia Virginia Commonwealth Univ.
“Digital Birth, Digital Adoption, Digital Disownment: Reconceiving Computational Textuality,” John David Zuern Univ. of Hawai’i, Manoa
“Digital Literary Studies circa 1954: Lacan’s Machines and Shannon’s Minds,” Bernard Dionysius Geoghegan Northwestern Univ.
“Digital Anamnesis,” Benjamin J. Robertson Univ. of Colorado, Boulder
121. Writing the Jasmine Revolution and Tahrir Square: Graffiti, Film, Collage, Poetry
5:15–6:30 p.m., Cedar, Sheraton
Presiding: Kathryn Lachman Univ. of Massachusetts, Amherst
“Tagging the Jasmine Revolution: Social Media and Graffiti in the Tunisian Uprising,” David Fieni Cornell Univ.
“Quand la révolution filmique anticipe la révolution populaire,” Mirvet Médini Kammoun Institut Supérieur des Beaux-Arts de Tunis
“The Women’s Manifesto: Thinking Egypt 2011 Transnationally,” Basuli Deb Univ. of Nebraska, Lincoln
“Poetic Responses to the North African Revolutions,” Mahdia Benguesmia Univ. of Batna
125. What’s Still Missing? What Now? What Next? Digital Archives in American Literature
5:15–6:30 p.m., 608, WSCC
Presiding: Brad Evans, Rutgers Univ., New Brunswick
Speakers: Donna M. Campbell, Washington State Univ., Pullman; Julia H. Flanders, Brown Univ.; Kenneth M. Price, Univ. of Nebraska, Lincoln; Oya Rieger, Cornell Univ.; Robert Scholes, Brown Univ.; Jeremy York, HathiTrust Digital Library
This roundtable has two goals: (1) to provide a forum for reflection on the first twenty years of the digital archive, especially as it relates to American materials, which might include consideration of what is still missing and of methodologies for making use of what is there now, and (2) to offer an opportunity for researchers who have become dependent on the archive to talk with major players in its production, in the hope of fostering new avenues for cooperation.
150. Digital Humanities and Internet Research
7:00–8:15 p.m., 613, WSCC
Presiding: John Jones Univ. of Texas, Dallas
“Creating a Conceptual Search Engine and Multimodal Corpus for Humanities Research,” Robin A. Reid Texas A&M Univ., Commerce
“What the Digital Can’t Remember,” John Jones
“Toward a Rhetoric of Collaboration: An Online Resource for Teaching and Learning Research,” Jennifer Sano-Franchini Michigan State Univ.
161. The Webs We Weave: Online Pedagogy in Community Colleges
7:00–8:15 p.m., 615, WSCC
Presiding: Linda Weinhouse, Community Coll. of Baltimore County, MD
“Blended Learning: The Best of Both Worlds?,” Pamela Sue Hardman, Cuyahoga Community Coll., Western Campus, OH
“Magic in the Web,” Michael R. Best, Univ. of Victoria; Jeremy Ehrlich, Univ. of Victoria
“The Digital-Dialogue Journal: Tool for Enhanced Classic Communication,” Bette G. Hirsch, Cabrillo Coll., CA
“Delivering Literary Studies in the Twenty-First Century: The Relevance of Online Pedagogies,” Kristine Blair, Bowling Green State Univ.
Friday, January 6
187. Digital Humanities and Hispanism
8:30–9:45 a.m. Grand A, Sheraton
Presiding: Kyra A. Kietrys Davidson Coll.
Speakers: Mike Blum, Coll. of William and Mary; Francie Cate-Arries, Coll. of William and Mary; Kyra A. Kietrys; Kathy Korcheck, Central Coll.; William Anthony Nericcio, San Diego State Univ.; Rocío Quispe-Agnoli, Michigan State Univ.; Amaranta Saguar García, Univ. of Oxford, Lady Margaret Hall; David A. Wacks, Univ. of Oregon
Demonstrations by Hispanists who use technology in their scholarship and teaching. The presenters include a graduate student; junior and senior Latin American, Peninsular, and comparativist colleagues whose work spans medieval to contemporary times; and an academic technologist. After brief presentations of the different digital tools, the audience will circulate among the stations to participate in interactive demonstrations.
202. The Presidential Forum: Language, Literature, Learning
“Of Degraded Tongues and Digital Talk: Race and the Politics of Language,” Imani Perry, Princeton Univ.
“Learning to Unlearn,” Judith Halberstam, Univ. of Southern California
“Borrowing Privileges: Dreaming in Foreign Tongues,” Bala Venkat Mani, Univ. of Wisconsin, Madison
“Teaching Literature and the Bitter Truth about Starbucks,” Christopher Freeburg, Univ. of Illinois, Urbana
The forum addresses three fundamental points of orientation for our profession: language, in its various materialities; literature, broadly understood; and learning, especially student learning and our educational missions. The language and literature classroom has to serve the needs of today’s students. How do changing understandings of identity, performance, and media translate into transformations in teaching and learning?
215. Digital South, Digital Futures
10:15–11:30 a.m. 606, WSCC
Presiding: Vincent J. Brewton Univ. of North Alabama
“Documenting the American South,” Natalia Smith Univ. of North Carolina, Chapel Hill
“Space, Place, and Image: Mapping Farm Securities Administration (FSA) Photographs and the Photogrammar Project,” Lauren Tilton Yale Univ.
“Southern Spaces: The Development of a Digital Southern Studies Journal,” Frances Abbott Emory Univ.
“Mapping a New Deal for New Orleans Artists,” Michael Mizell-Nelson Univ. of New Orleans
217. Reconfiguring the Scholarly Editor: Textual Studies at the University of Washington, Seattle
10:15–11:30 a.m., 613, WSCC
Presiding: Míceál Vaughan, Univ. of Washington, Seattle
“Neither Editor nor Librarian: The Interventions Required in the New Context of Texts in the Digital World,” Joseph Tennis, Univ. of Washington, Seattle
“Revealing a Coronation Tribute: Decoding the Hidden Aural and Visual Symbols,” JoAnn Taricani, Univ. of Washington, Seattle
“Mapping Editors,” Meg Roland, Marylhurst Univ.
“The Editor as Curator: Early Histories of Collected Works Editions in English,” Jeffrey Knight, Univ. of Washington, Seattle
249. Building Digital Humanities in the Undergraduate Classroom
12:00 noon–1:15 p.m. Grand A, Sheraton
Presiding: Kathi Inman Berens Univ. of Southern California
Speakers: Kathryn E. Crowther, Georgia Inst. of Tech.; Brian Croxall, Emory Univ.; Maureen Engel, Univ. of Alberta; Paul Fyfe, Florida State Univ.; Kathi Inman Berens; Janelle A. Jenstad, Univ. of Victoria; Charlotte Nunes, Univ. of Texas, Austin; Heather Zwicker, Univ. of Alberta
This electronic roundtable assumes that “building stuff” is foundational to the digital humanities and that the technical barriers to participation can be low. When teaching undergraduates digital humanities, simple tools allow students to focus on the simultaneous practices of building and interpreting. This show-and-tell presents projects of variable technical complexity that foster robust interpretation.
259. Representation in the Shadow of New Media Technologies
12:00 noon–1:15 p.m., 304, WSCC
Presiding: Lan Dong Univ. of Illinois, Springfield
“Web Video and Ethnic Media: Linking Representation and Distribution,” Aymar Jean Christian Univ. of Pennsylvania
“Among Friends: Comparing Social Networking Functions in the Baltimore Sun and Baltimore Afro-American in 1904 and 1933,” Daniel Greene Univ. of Maryland, College Park
“Digital Trash Talk: The Rhetoric of Instrumental Racism as Procedural Strategy,” Lisa Nakamura Univ. of Illinois, Urbana
276. Getting Funded in the Humanities: An NEH Workshop
1:30–3:30 p.m. 3B, WSCC
Presiding: Jason C. Rhody National Endowment for the Humanities
This workshop will highlight recent awards and outline current funding opportunities. In addition to emphasizing grant programs that support individual and collaborative research and education, the workshop will include information on the NEH’s Office of Digital Humanities. A question-and-answer period will follow.
301. Reconfiguring Publishing
1:45–3:00 p.m., Grand A, Sheraton
Presiding: Carolyn Guertin Univ. of Texas, Arlington; William Thompson Western Illinois Univ.
Speakers: James Copeland, Ugly Duckling Presse; Gail E. Hawisher, Univ. of Illinois, Urbana; James MacGregor, Public Knowledge Project; Rita Raley, Univ. of California, Santa Barbara; Avi Santo, Old Dominion Univ.; Cynthia L. Selfe, Ohio State Univ., Columbus; Raymond G. Siemens, Univ. of Victoria
This session intends not to bury publishing but to raise awareness of its transformations and continuities as it reconfigures itself. New platforms are causing publishers to return to their roots as booksellers while booksellers are once again becoming publishers. Open-access models of publishing are creating new models for content creation and distribution as small print-focused presses are experiencing a renaissance. Come see!
315. The New Dissertation: Thinking outside the (Proto-)Book
3:30–4:45 p.m., 606, WSCC
Presiding: Kathleen Woodward, Univ. of Washington, Seattle
Speakers: David Damrosch, Harvard Univ.; Kathleen Fitzpatrick, MLA; Richard E. Miller, Rutgers Univ., New Brunswick; Sidonie Ann Smith, Univ. of Michigan, Ann Arbor; Kathleen Woodward
In 2010 the Executive Council appointed a working group to explore the state of the doctoral dissertation: How can it adapt to digital innovation, open access, new concepts of “authorship”? What counts as scholarship in the world today? How do we address the national problems of cost and time to degree? This roundtable will offer members of the working group an opportunity to make the case that as we shift the terminology from scholarly publication to scholarly communication we need to expand the forms of the dissertation and to reconceptualize what the dissertation is and how it can prepare graduates for academic careers in the coming decades.
332. Digital Narratives and Gaming for Teaching Language and Literature
3:30–4:45 p.m. Aspen, Sheraton
Presiding: Barbara Lafford Arizona State Univ.
“Narrative Expression and Scientific Method in Online Gaming Worlds,” Steven Thorne Portland State Univ.
“Designing Narratives: A Framework for Digital Game-Mediated L2 Literacies Development,” Jonathon Reinhardt Univ. of Arizona; Julie Sykes Univ. of New Mexico, Albuquerque
“Close Playing, Paired Playing: A Practicum,” Edmond Chang Univ. of Washington, Seattle; Timothy Welsh Loyola Univ., New Orleans
Responding: Dave McAlpine Univ. of Arkansas, Little Rock
343. The Cultural Place of Nineteenth-Century Poetry
3:30–4:45 p.m., 611, WSCC
Presiding: Charles P. LaPorte, Univ. of Washington, Seattle
“Lyric and Music at the Fin de Siècle: The Cultural Place of Song,” Emily M. Harrington, Penn State Univ., University Park
“Olympics 2012 and Victorian Poetry for All Time,” Margaret Linley, Simon Fraser Univ.
349. Digital Pedagogy
5:15–6:30 p.m., Grand A, Sheraton
Presiding: Katherine D. Harris San José State Univ.
Speakers: Sheila T. Cavanagh, Emory Univ.; Elizabeth Chang, Univ. of Missouri, Columbia; Lori A. Emerson, Univ. of Colorado, Boulder; Adeline Koh, Richard Stockton Coll. of New Jersey; John Lennon, Univ. of South Florida Polytechnic; Kevin Quarmby, Shakespeare’s Globe Trust; Katherine Singer, Mount Holyoke Coll.; Roger Whitson, Georgia Inst. of Tech.
Discussions about digital projects and digital tools often focus on research goals. For this electronic roundtable, we will instead demonstrate how these digital resources, tools, and projects have been integrated into undergraduate and graduate curricula.
378. Old Labor and New Media
5:15–6:30 p.m., 608, WSCC
Presiding: Alison Shonkwiler Rhode Island Coll.
“America Needs Indians: Representations of Native Americans in Counterculture Narrative and the Roots of Digital Utopianism,” Lisa Nakamura Univ of Illinois, Urbana
“The Eyes of Real Labor and the Illusions of Virtual Reality,” Matt Goodwin Univ. of Massachusetts, Amherst
“Digital Voices: Representations of Migrant Workers in Dubai and Los Angeles,” Anne Cong-Huyen Univ. of California, Santa Barbara
Responding: Seth Perlow Cornell Univ.
Saturday, January 7
410. Reconfiguring the Literary: Narratives, Methods, Theories
8:30–9:45 a.m. 608, WSCC
Presiding: Susan Schreibman Trinity Coll., Dublin
Speakers: Alison Booth, Univ. of Virginia; Mark Stephen, Byron Univ. of Sydney; Øyvind Eide, Univ. of Oslo; Alexander Gil, Univ. of Virginia; Rita Raley, Univ. of California, Santa Barbara
425. Composing New Partnerships in the Digital Humanities
8:30–9:45 a.m., 606, WSCC
Presiding: Catherine Jean Prendergast Univ. of Illinois, Urbana
Speakers: Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Catherine Jean Prendergast; Alexander Reid, Univ. at Buffalo, State Univ. of New York; Spencer Schaffner, Univ. of Illinois, Urbana; Annette Vee, Univ. of Pittsburgh
The objective of this roundtable is to facilitate interactions between digital humanists and writing studies scholars who, despite shared interests in digital authorship, intellectual property, peer review, classroom communication, and textual revision, have often failed to collaborate. An extended period for audience involvement has been designed to seed partnerships beyond the conference.
428. Technology and Chinese Literature and Language
10:15–11:30 a.m., Boren, Sheraton
Presiding: Xiaoping Song Norwich Univ.
“Adaptation: Rewriting Modern Chinese Literary Masterpieces,” Paul Manfredi Pacific Lutheran Univ.
“Technology in Chinese Instruction: A Web-Based Extensive Reading Program,” Helen Heling Shen Univ. of Iowa
“Technology and Teaching Chinese Literature in Translation,” Keith Dede Lewis and Clark Coll.
“Text-Image-Imagined Words: An Approach to Teaching Chinese Literature,” Xiaoping Song
The speakers will discuss the preservation of texts as a core purpose of libraries, engaging questions regarding the tasks of deciding what materials to preserve and when and which to let go: best practices; institutional and collective roles for the preservation of materials in various formats; economics and governance structures of preserving materials; issues of tools, standards, and platforms for digital materials.
450. Digital Faulkner: William Faulkner and Digital Humanities
10:15–11:30 a.m. 615, WSCC
Presiding: Steven Knepper Univ. of Virginia
Speakers: Keith Goldsmith, Vintage Books; John B. Padgett Brevard, Coll.; Noel Earl Polk, Mississippi State Univ.; Stephen Railton, Univ. of Virginia; Peter Stoicheff, Univ. of Saskatchewan
A roundtable on digital humanities and its implications for teaching and scholarship on the work of William Faulkner.
467. The Future of Teaching
12:00 noon–1:15 p.m., Grand C, Sheraton
Presiding: Priscilla B. Wald, Duke Univ.
“Gaming the Humanities Classroom,” Patrick Jagoda, Univ. of Chicago
“Intimacy in Three Acts,” Margaret Rhee, Univ. of California, Berkeley
“One Course, One Project,” Jentery Sayers, Univ. of Victoria
“The Meta Teacher,” Bulbul Tiwari, Stanford Univ.
This session features innovative advanced doctoral students and junior scholars who are making their mark as scholars and as teachers using new interactive, multimedia technologies of writing and publishing in their research and classrooms. The panelists cross the boundaries of the humanities, arts, sciences, and technology and are committed to new forms of scholarship and pedagogy. They practice the virtues of open, public, digitally accessible thinking and represent the vibrancy of our profession. Fiona Barnett, Duke Univ., will coordinate live Twitter feeds and other input during the session.
468. Networks, Maps, and Words: Digital-Humanities Approaches to the Archive of American Slavery
482. Of Kings’ Treasuries and the E-Protean Invasion: The Evolving Nature of Scholarly Research
12:00 noon–1:15 p.m., 613, WSCC
Presiding: Jude V. Nixon, Salem State Univ.
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Harriett Green, Univ. of Illinois, Urbana; Dean J. Smith, Project MUSE; Pierre A. Walker, Salem State Univ.
This roundtable addresses the veritable explosion of emerging technologies (Google Books, Wikipedia, and e-readers) currently available to faculty members to enhance their scholarly research and how these resources are altering fundamentally the method of scholarly research. The session also wishes to examine access to these technologies and how they interact with the traditional research library and the still meaningful role, if any, it plays in scholarly research.
487. Context versus Convenience: Teaching Contemporary Business Communication through Digital Media
12:00 noon–1:15 p.m., 306, WSCC
Presiding: Mahli Xuan Mechenbier, Kent State Univ.
“Reenvisioning and Renovating the Twenty-First-Century Business Communication Classroom,” Lara Smith-Sitton, Georgia State Univ.
“Contextualizing Conventions: Technology in Business Writing Classrooms,” Suanna H. Davis, Houston Community Coll., Central Coll., TX
“Teaching Business Communication through Simulation Games,” Katherine V. Wills, Indiana Univ.–Purdue Univ., Columbus
490. Reconfiguring the Scholarly Edition
12:00 noon–1:15 p.m., 611, WSCC
Presiding: Susan Schreibman, Trinity Coll. Dublin
Speakers: Michael R. Best, Univ. of Victoria; John Bryant, Hofstra Univ.; Alexander Gil, Univ. of Virginia; Elizabeth Grove-White, Univ. of Victoria; Grant Simpson, Indiana Univ., Bloomington; John A. Walsh, Indiana Univ., Bloomington
New theories of editing have broadened the approaches available to editors of scholarly editions. Noteworthy amongst these are the changes brought about by editing for digital publication. New methods for digital scholarship, forms of editions, theories informing digital publication, and tools offer exciting alternatives to traditional notions of the scholarly edition.
513. Principles of Exclusion: The Future of the Nineteenth-Century Archive
1:45–3:00 p.m., 611, WSCC
Presiding: Lloyd P. Pratt, Univ. of Oxford, Linacre Coll.
“Missing Links; or, Girls of Today, Archives of Tomorrow,” William A. Gleason, Princeton Univ.
“Anonymity, Authorship, and Digital Archives in American Literature,” Elizabeth Lorang, Univ. of Nebraska, Lincoln
“Dashed Hopes: Small-Scale Digital Archives of the 1990s,” Amy Earhart, Texas A&M Univ., College Station
532. Reading Writing Interfaces: Electronic Literature’s Past and Present
1:45–3:00 p.m. 613, WSCC
Presiding: Marjorie Luesebrink Irvine Valley Coll., CA
“Early Authors of E-Literature, Platforms of the Past,” Dene M. Grigar Washington State Univ., Vancouver
“Seven Types of Interface in the Electronic Literature Collection Volume Two,” Marjorie Luesebrink; Stephanie Strickland, New York, NY
539. #alt-ac: Alternative Paths, Pitfalls, and Jobs in the Digital Humanities
3:30–4:45 p.m. 3B, WSCC
Presiding: Sara Steger Univ. of Georgia
Speakers: Brian Croxall, Emory Univ.; Julia H. Flanders, Brown Univ.; Jennifer Howard, Chronicle of Higher Education; Matthew Jockers, Stanford Univ.; Shana Kimball, Univ. of Michigan, Ann Arbor; Bethany Nowviskie, Univ. of Virginia; Lisa Spiro, National Inst. for Tech. in Liberal Education
This roundtable brings together various perspectives on alternative academic careers from professionals in digital humanities centers, libraries, publishing, and humanities labs. Speakers will discuss how and whether digital humanities is especially suited to fostering non-tenure-track positions and how that translates to the role of alt-ac in digital humanities and the academy. Related session: “#alt-ac: The Future of ‘Alternative Academic’ Careers” (595).
566. Ending the Edition
3:30–4:45 p.m., 303, WSCC
Presiding: Carol DeBoer-Langworthy, Brown Univ.
“Mary Moody Emerson’s Almanacks: Digital Editions and Imagined Endings,” Noelle A. Baker, Neenah, WI
“Closing the Book on a Multigenerational Edition: Harvard’s The Collected Works of Ralph Waldo Emerson,” Ronald A. Bosco, Univ. at Albany, State Univ. of New York; Joel Myerson, Univ. of South Carolina, Columbia
“‘Letting Go’: The Final Volumes of the Cambridge Fitzgerald Edition,” James L. W. West, Penn State Univ., University Park
581. Digital Humanities versus New Media
5:15–6:30 p.m., 611, WSCC
“Everything Old Is New Again: The Digital Past and the Humanistic Future,” Alison Byerly Middlebury Coll.
“As Study or as Paradigm? Humanities and the Uptake of Emerging Technologies,” Andrew Pilsch Penn State Univ., University Park
“Digital Tunnel Vision: Defining a Rhetorical Situation,” David Robert Gruber North Carolina State Univ.
“Digital Humanities Authorship as the Object of New Media Studies,” Victoria E. Szabo Duke Univ.
595. #alt-ac: The Future of “Alternative Academic” Careers
5:15–6:30 p.m., 3B, WSCC
Presiding: Bethany Nowviskie, Univ. of Virginia
Speakers: Donald Brinkman, Microsoft Research; Neil Fraistat, Univ. of Maryland, College Park; Robert Gibbs, Univ. of Toronto; Charles Henry, Council on Library and Information Resources; Bethany Nowviskie; Jason C. Rhody, National Endowment for the Humanities; Elliott Shore, Bryn Mawr Coll.
In increasing numbers, scholars are pursuing careers as “alternative academics”—embracing hybrid and non-tenure-track positions in libraries, presses, humanities and cultural heritage organizations, and digital labs and centers. Speakers represent organizations helping to craft alternatives to the traditional academic career. Related session: “#alt-ac: Alternative Paths, Pitfalls, and Jobs in the Digital Humanities” (539).
603. Innovative Pedagogy and Research in Technical Communication
5:15–6:30 p.m., 615, WSCC
Presiding: William Klein Univ. of Missouri, St. Louis
“The New Normal of Public Health Research by Technical Communication Professionals,” Thomas Barker Texas Tech Univ.
“Teaching the New Paradigm: Social Media inside and outside the Classroom,” William Magrino Rutgers Univ., New Brunswick; Peter B. Sorrell Rutgers Univ., New Brunswick
“Technical and Rhetorical Communication through DIY (Do-It-Yourself) Digital Video,” Crystal VanKooten Univ. of Michigan, Ann Arbor
Is there gravity in digital worlds? Moving beyond both lamentations and celebrations of the putatively free-floating informatic empyrean, this roundtable will explore the ways in which representations in myriad digital platforms—verbal, visual, musical, cinematic—might bear the weight of materiality, presence, and history and the ways in which bodies—both human and hardware—might be recruited for or implicated in the effort.
730. New Media Narratives and Old Prose Fiction
1:45–3:00 p.m., 310, WSCC
Presiding: Amy J. Elias Univ. of Tennessee, Knoxville
“New Media: Its Use and Abuse for Literature and for Life,” Joseph Paul Tabbi Univ. of Illinois, Chicago
“Contrasts and Convergences of Electronic Literature,” Dene M. Grigar Washington State Univ., Vancouver
“Computing Language and Poetry,” Nick Montfort Massachusetts Inst. of Tech.
736. Close Playing: Literary Methods and Video Game Studies
This roundtable moves beyond the games-versus-stories dichotomy to explore the full range of possible literary approaches to video games. These approaches include the theoretical and methodological contributions of reception studies, reader-response theory, narrative theory, critical race and gender theory, disability studies, and textual scholarship.
Speakers: Mark Algee-Hewitt, McGill Univ.; Alison Booth, Univ. of Virginia; Amanda Gailey, Univ. of Nebraska, Lincoln; Laura C. Mandell, Texas A&M Univ., College Station
Roundtable on the theoretical, practical, and institutional issues surrounding the transformation of print-era texts into digital forms for scholarly use. What forms of editing need to be done, and by whom? What new research questions are becoming possible? How will the global digital library change professional communication? What is the future of the academic research library? How can we make sustainable digital textual resources for literary studies?
On September 8, the DigitalCultureBooks imprint of the University of Michigan Library and University of Michigan Press released the online edition of Hacking the Academy. Conceived of by Dan Cohen and Tom Scheinfeldt at GMU’s Roy Rosenzweig Center for History and New Media, Hacking the Academy is an experiment in publishing. It’s a crowdsourced book, in which contributors had merely one week (May 21-28, 2010) to come up with and submit material. It’s also heavily edited, with Dan and Tom shaping the mass—the mess?—of possible submissions into a cohesive, coherent work. Authors include professors, graduate students, journalists, archivists, and alternate academic career visionaries. (Full disclosure: I’m in there too.)
As Jason Jones notes on ProfHacker, the book has been coming out in stages. From the first, an unedited, raw collection of all the submissions was aggregated at Hacking the Academy. Now, the edited online volume has been released. In 2012, a print edition will be available as well.
Because the book is being published under a Creative Commons non-commercial license, anybody is free to share or remix the work, as long as the original authors and editors are properly attributed. This license is another way the book is an experiment—a major academic publisher is giving free license to anyone to shape, circulate, or reimagine the book.
Here, then, is my initial contribution to the Hacking the Academy ecology: I’ve compiled and formatted the online edited volume into ebook form, suitable for reading on Kindles, Nooks, iPads, and so on. (Hat tip to my colleague Mills Kelly for giving me the idea.)
Mark Z. Danielewski’s debut novel House of Leaves (2000) presents a paradox to the literary scholar working within the digital humanities. On one hand the massive, labyrinthine novel offers so many ambiguities and playful metaleptic moments that it would seem to be a literary critic’s dream text, endlessly interpretable, boundlessly intertextual. On the other hand, with its layers of footnotes, metacommentary, and self-conscious invocation of literary theory, the novel seems to preemptively foreclose any and all possible interpretative moves.
Indeed, as Danielewski himself has said, “I have yet to hear an interpretation of House of Leaves that I had not anticipated.”[1. McCaffery, Larry, and Sinda Gregory. “Haunted House—An Interview with Mark Z. Danielewski.” Critique: Studies in Contemporary Fiction 44.2 (2003): 106.] While we should take Danielewski’s proclamation as a kind of reverse echo of Warhol’s disingenuous denial of any intention in his artwork, the fact remains that the novel’s hyperconscious awareness of itself, combined with the important scholarly criticism from the likes of Katherine Hayles and Mark Hansen, as well as the continuing exegetical flood generated by legions of fans online—take all this together and it seems impossible that there is anything new to say about House of Leaves.
What do you say about a book that attempts to say everything about itself?
How do you say something new about a text that has been discussed, dissected, and picked over by thousands of persistent and incisive minds?
And how might a digital humanities sensibility reinvigorate a thoroughly worked-over literary text?
Jessica Pressman has convincingly argued that House of Leaves is a “networked” novel in at least two senses: the novel is acutely aware of digital networks and the circulation of knowledge online; and ideal readers will adopt a “networked reading strategy” as they make sense of the book, encountering companion works such as the album Haunted by Poe (aka Danielewski’s sister) and Danielewski’s own The Whalestoe Letters.[2. Pressman, Jessica. “House of Leaves: Reading the Networked Novel.” Studies in American Fiction 34.1 (2006): 107–128.] To these two forms of the networked novel, my ProfHacker colleague Brian Croxall recently suggested another: reading the novel as part of a network. That is, Brian wants his upcoming undergraduate class at Emory University to read House of Leaves alongside other classes at other institutions.
After some wrangling of syllabi and schedules, Brian Croxall has gathered a group of us who are teaching House of Leaves at the same time toward the end of October: Paul Benzon (Temple University), Erin Templeton (Converse College), Zach Whalen (University of Mary Washington), and myself at George Mason. The context for each of our classes is different: Paul’s class is a capstone course on literature, media, and the archive; Erin’s is an honors course on the contemporary novel; Zach’s is a senior seminar; Brian’s is a digital humanities class; and my own is a course on post-print fiction. Because of the varied courses involved (not to mention diverse institutional contexts), we see great value in reading House of Leaves as a network.
Right now we are in the process of determining what kind of assignments our five classes can share. They range from the simple to the complex, including the following:
Blog Commenting. If we’re all using course blogs, we can have each class read and comment on the posts of the other classes. This is the easiest assignment to implement and has the benefit of being asynchronous.
A Group Blog/Tumble Log. All of our students (likely 50–60 in total) join one massive group blog or tumble log for the three weeks that we’re reading House of Leaves. There’s some flexibility in how dialogic this group effort would be. Students could use Tumblr simply to post quotes and images related to the book, or we could have a full-fledged blog, with layers of comments.
Constructive Class Projects. I suggested converting my mapping House of Leaves assignment into a studio-type project, in which one class presents two or three group projects to the other classes, and those classes “critique” them. This assignment requires more overhead than the other options, and involves a greater commitment from the students (versus commenting on blogs, where the stakes are lower).
In addition to these fairly predictable shared assignments, I’ve been thinking about a more radical one, which tackles head-on the challenge any teacher of House of Leaves faces: the forum. The House of Leaves forum is a massive discussion board of tens of thousands of posts, ranging from the puerile to the brilliant. And, since this is House of Leaves, sometimes both at once. Nearly every puzzle, every ambiguity, every nuance of House of Leaves has had a discussion thread devoted to it. The forum is so overwhelming that it can easily suck the air out of a literary reading of House of Leaves. And, as Rita Raley noted in a message to Richard Grusin and me, the forum has an unassailable authority about it:[blackbirdpie url=”http://twitter.com/#!/ritaraley/status/37364736030941184″]
Just as House of Leaves presents a paradox to the literary scholar, so too does the forum present a dilemma to the digital humanist. On one hand, the forum is a vast crowd-sourced literary interpretation, complete with user-generated concordances, digitizations, and transcriptions—all of the familiar tools of digital humanists. On the other hand, this work is produced by non-scholars. By fans. Amateurs. It’s not vetted in a way recognizable to most academics (though it’s foolish to believe that the forum does not have its own system of peer review and prestige ranking).
Different professors have different ways of dealing with the forum. Some ban their students from reading it outright. Others acknowledge the forum in an offhand way, hoping that their students never investigate for themselves the breadth and depth of the discussions. In either of these cases, I think the tiptoeing around the forum is due to pedagogical expediencies rather than a high-brow dismissal of low-brow work.
There is a third option, of course, which is to face the forum head-on and incorporate the discussion threads into the class (which certainly fits the networked reading strategy Pressman describes).
And now, I want to propose a fourth option.
Renetworking the Novel
Let’s bring the forum to the fore by starting it anew.
That is, I propose starting the forum from scratch. In our classes we’ll explicitly (and temporarily) forbid students from reading the House of Leaves forum. Instead, we’ll create an alternate forum of our own, seeded with a few initial threads that appeared in the original forum. The idea is to recreate the forum, and see how its trajectory would play out ten years later, in the context of a literature class. The 50–60 students from the five classes seem a manageable number to launch a new iteration of the forum: enough to generate a sense of “there” there, but not such an overwhelming number that keeping up with the forum becomes unmanageable (though that would in fact replicate the feel of the original forum).
After three weeks of intensive cross-class use of the renetworked forum, the final step would be to lift the ban on reading the official forum, giving students the opportunity to compare the alternate forum with the original, and draw some conclusions from that comparison.
The five of us haven’t decided yet what kind of shared assignment we’ll use. And my post here is not meant to sway anybody; it’s more of a thought experiment. What would happen if we truly renetworked an already networked novel? And do so using the same modes that made the novel networked in the first place? What would we learn? About House of Leaves? About networks? About crowdsourcing? About the blurry lines between academics and fans?
Every scholarly community has its disagreements, its tensions, its divides. One tension in the digital humanities that has received considerable attention is between those who build digital tools and media and those who study traditional humanities questions using digital tools and media. Variously framed as do vs. think, practice vs. theory, or hack vs. yack, this divide has been most strongly (and provocatively) formulated by Stephen Ramsay. At the 2011 annual Modern Language Association convention in Los Angeles, Ramsay declared, “If you are not making anything, you are not…a digital humanist.”
I’m going to step around Ramsay’s argument here (though I recommend reading the thoughtful discussion that ensued on Ramsay’s blog). I mention Ramsay simply as an illustrative example of the various tensions within the digital humanities. There are others too: teaching vs. research, universities vs. liberal arts colleges, centers vs. networks, and so on. I see the presence of so many divides—which are better labeled as perspectives—as a sign that there are many stakeholders in the digital humanities, which is a good thing. We’re all in this together, even when we’re not.
I’ve always believed that these various divides, which often arise from institutional contexts and professional demands generally beyond our control, are a distracting sideshow to the true power of the digital humanities, which has nothing to do with production of either tools or research. The heart of the digital humanities is not the production of knowledge; it’s the reproduction of knowledge. I’ve stated this belief many ways, but perhaps most concisely on Twitter: [blackbirdpie url=”http://twitter.com/samplereality/statuses/26563304351″]
The promise of the digital is not in the way it allows us to ask new questions because of digital tools or because of new methodologies made possible by those tools. The promise is in the way the digital reshapes the representation, sharing, and discussion of knowledge. We are no longer bound by the physical demands of printed books and paper journals, no longer constrained by production costs and distribution friction, no longer hampered by a top-down and unsustainable business model. And we should no longer be content to make our work public achingly slowly along ingrained routes, authors and readers alike delayed by innumerable gateways limiting knowledge production and sharing.
I was riffing on these ideas yesterday on Twitter, asking, for example, what’s to stop a handful of scholars from starting their own academic press? It would publish epub books and, when backwards compatibility is required, print-on-demand books. Or what about, I wondered, using Amazon Kindle Singles as a model for academic publishing? Imagine stand-alone journal articles, without the clunky apparatus of the journal surrounding them. If you’re insistent that any new publishing venture be backed by an imprimatur more substantial than my “handful of scholars,” then how about a digital humanities center creating its own publishing unit?
It’s with all these possibilities swirling in my mind that I’ve been thinking about the MLA’s creation of an Office of Scholarly Communication, led by Kathleen Fitzpatrick. I want to suggest that this move may in the future stand out as a pivotal moment in the history of the digital humanities. It’s not simply that the MLA is embracing the digital humanities and seriously considering how to leverage technology to advance scholarship. It’s that Kathleen Fitzpatrick is heading this office. One of the founders of MediaCommons and a strong advocate for open review and experimental publishing, Fitzpatrick will bring vision, daring, and experience to the MLA’s Office of Scholarly Communication.
I have no idea what to expect from the MLA, but I don’t think high expectations are unwarranted. I can imagine greater support of peer-to-peer review as a replacement of blind review. I can imagine greater emphasis placed upon digital projects as tenurable scholarship. I can imagine the breadth of fields published by the MLA expanding. These are all fairly predictable outcomes, which might have eventually happened whether or not there was a new Office of Scholarly Communication at the MLA.
But I can also imagine less predictable outcomes. More experimental, more peculiar. Equally valuable, though—even more so—than typical monographs or essays. I can imagine scholarly wikis produced as companion pieces to printed books. I can imagine digital-only MLA books taking advantage of the native capabilities of e-readers, incorporating videos, songs, dynamic maps. I can imagine MLA Singles, one-off pieces of downloadable scholarship following the Kindle Singles model. I can imagine mobile publishing, using smartphones and GPS. I can imagine a 5,000-tweet conference backchannel edited into the official proceedings of the conference.
There are no limits. And to every person who objects, But, wait, what about legitimacy/tenure/cost/labor/etc., I say, you are missing the point. Now is not the time to hem in our own possibilities. Now is not the time to base the future on the past. Now is not the time to be complacent, hesitant, or entrenched in the present.
William Gibson has famously said that “the future is already here, it’s just not very evenly distributed.” With the digital humanities we have the opportunity to distribute that future more evenly. We have the opportunity to distribute knowledge more fairly, and in greater forms. The “builders” will build and the “thinkers” will think, but all of us, no matter where we fall on this false divide, we all need to share. Because we can.
(Radiohead Crowd photograph courtesy of Flickr user Samuel Stroube / Creative Commons Licensed)
[This is the text, more or less, of the talk I delivered at the 2011 biennial meeting of the Society for Textual Scholarship, which took place March 16-18 at Penn State University. I originally planned on talking about the role of metadata in two digital media projects—a topic that would have fit nicely with STS’s official mandate of investigating print and digital textual culture. But at the last minute (i.e. the night before), I changed the focus of my talk, turning it into a thinly-veiled call for digital textual scholarship (primarily the creation of digital editions of print works) to rethink everything it does. (Okay, that’s an exaggeration. But I do argue that there’s a lot the creators of digital editions of texts should learn from born-digital creative projects.)
Also, it was the day after St. Patrick’s Day. And the fire alarm went off several times during my talk.
None of these events are related.]
The Poetics of Metadata and the Potential of Paradata
in We Feel Fine and The Whale Hunt
I once made fun of the tendency of academics to begin their papers by apologizing in advance for the very same papers they were about to begin. I’m not exactly going to apologize for this paper. But I do want to begin by saying that this is not the paper I came to give. I had that paper, it was written, and it was a good paper. It was the kind of paper I wouldn’t have to apologize for.
But, last night, I trashed it.
I trashed that paper. Call it the Danny Boy effect, I don’t know. But it wasn’t the paper I felt I needed to deliver, here, today.
Throughout the past two days I’ve detected a low-level background hum in the conference rooms, a kind of anxiety about digital texts and how we interact with them. And I wanted to acknowledge that anxiety, and perhaps even gesture toward a way forward in my paper. So, I rewrote it. Last night, in my hotel room. And, well, it’s not exactly finished. So I want to apologize in advance, not for what I say in the paper, but for all the things I don’t say.
My original talk had positioned two online works by the new media artist Jonathan Harris as two complementary expressions of metadata. I had a nice title for that paper. I even coined a new word in my title.
But this title doesn’t work anymore.
I have a new title. It’s a bit more ambitious.
But at least I’ve still got that word I coined.
It’s a lovely word. And truth be told, just between you and me, I didn’t coin it. In the social sciences, paradata refers to data about the data collection process itself—say the date or time of a survey, or other information about how a survey was conducted. But there are other senses of the prefix “para” I’m trying to evoke. In textual studies, of course, para-, as in paratext, is what Genette calls the threshold of the text. I’m guessing I don’t have to say anything more about paratext to this audience.
But there’s a third notion of “para” that I want to play with. It comes from the idea of paracinema, which Jeffrey Sconce first described in 1996. Paracinema is a kind of “reading protocol” that valorizes what most audiences would otherwise consider to be cinematic trash. The paracinematic aesthetic redeems films that are so bad that they actually become worth watching—worth enjoying—and it does so in a confrontational way that seeks to establish a counter-cinema.
Following Sconce’s work, the videogame theorist Jesper Juul has wondered if there can be such a thing as paragames—illogical, improbable, and unreasonably bad games. Such games, Juul suggests, might teach us about our tastes and playing habits, and what the limits of those tastes are. And even more, such paragames might actually revel in their badness, becoming fun to play in the process.
Trying to tap into these three different senses of “para,” I’ve been thinking about paradata. And I’ve got to tell you, so far, it’s a mess. (And this part of my paper was actually a mess in the original version of my paper as well). My concept of paradata is a big mess and it may not mean anything at all.
This is what I have so far: paradata is metadata at a threshold, or paraphrasing Genette, data that exists in a zone between metadata and not metadata. At the same time, in many cases it’s data that’s so flawed, so imperfect that it actually tells us more than compliant, well-structured metadata does.
So let me turn now to We Feel Fine, a massive, ongoing digital databased storytelling project rich with metadata—and possibly, paradata.
We Feel Fine is an astonishing collection of tens of thousands of sentences extracted from tens of thousands of blog posts, all containing the phrase “I feel” or “I am feeling.” It was designed by new media artist Jonathan Harris and the computer scientist Sep Kamvar and launched in May 2006.
The project is essentially an automated script that visits thousands of blogs every minute, and whenever the script detects the words “I feel” or “I am feeling,” it captures that sentence and sends it to a database. As of early this year, the project has harvested 14 million expressions of emotions from 2.5 million people. And the site has done this at a rate of 10,000 to 15,000 “feelings” a day.
Let me repeat that: every day approximately 10,000 new entries are added to We Feel Fine.
The heart of the project appears to be the multifaceted interface that has six so-called “movements”—six ways of visualizing the data collected by We Feel Fine’s crawler.
The default movement is Madness, a swarm of fifteen-hundred colored circles and squares, each one representing a single sentence from a blog post, a single “feeling.” The circles contain text only, while the squares include images associated with the respective blog post.
The colors of the particles signify emotional valence, with shades of yellow representing more positive emotions, red signaling anger. Blue is associated with sad feelings, and so on. This graphic, by the way, comes from the book version of We Feel Fine.
The book came out in 2009. In it, Harris and Kamvar curate hundreds of the most compelling additions to We Feel Fine, as well as analyze the millions of blog posts they’ve collected with extensive data visualizations—graphs, and charts, and diagrams.
The book is an amazing project in and of itself and deserves its own separate talk. It raises important questions about archives, authorship, editorial practices, the material differences between a dynamic online project and a static printed work, and so on. I’ll leave these questions aside for now; instead, I want to turn to the site itself. Let’s look at the Madness movement in action.
(And here I went online and interacted with the site. Why don’t you do that too, and come back later?)
(Also, right about here a fire alarm went off. Which, semantically, makes no sense. The alarm turned on, but I said it went off.)
(I can’t reproduce the sound of that particular fire alarm going off. I bet you have some sort of alarm on your phone or something you could make go off, right?)
(No? You don’t? Or you’re just as confused about on and off as I am? Then enjoy this short video intermission, which interrupts my talk, which I’m writing and which you’re reading, about as intrusively as the alarms interrupted my panel.)
(Okay. Back to my talk, which I’m writing, and which you’re reading.)
In the Madness movement you can click on any single circle, and the “feeling” will appear at the top of the screen. Another click on that feeling will drill down to the original blog post in its original context. So what’s important here is that a single click transitions from the general to the particular, from the crowd to the individual. You can also click on the squares to show “feelings” that have an image associated with them. And you have the option to “save” these images, which sends them to a gallery, just about the only way you can be sure to ever find any given image in We Feel Fine again.
At the top of the screen are six filters you can use to narrow down what appears in the Madness movement. Working right to left, you can search by date, by location, the weather at that location at the time of the original blog post, the age of the blogger, the gender of the blogger, and finally, the feeling itself that is named in the blog post. While every item in the We Feel Fine database will have the feeling and date information attached to it, the age, gender, location, and weather fields are populated only for those items in which that information is publicly available—say a LiveJournal or Blogger profile that lists that information, or a Flickr photo that’s been geotagged.
What I want to call your attention to before I run through the other five movements of We Feel Fine is that these filters depend upon metadata. By metadata, I mean the descriptive information the database associates with the original blog post. This metadata not only makes We Feel Fine browsable, it makes it possible. The metadata is the data. The story—if there is one to be found in We Feel Fine—emerges only through the metadata.
You can manipulate the other five movements using these filters. At first, for example, the Murmurs movement displays a reverse chronological streaming, like movie credits, of the most recent emotions. The text appears letter-by-letter, as if it were being typed. This visual trick heightens the voyeuristic sensibility of We Feel Fine and makes it seem less like a database and more like a narrative, or even more to the point, like a confessional.
The Montage movement, meanwhile, organizes the emotions into browsable photo galleries:
By clicking on a photo and selecting save, you can add photos to a permanent “gallery.” Because the database grows so incredibly fast, this is the only way to ensure that you’ll be able to find any given photograph again in the future. There’s a strong ethos of ephemerality in We Feel Fine. To use one of Marie-Laure Ryan’s metaphors for a certain kind of new media, We Feel Fine is a kaleidoscope, an assemblage of fragments always in motion, never the same reading or viewing experience twice. We have little control over the experience. It’s only through manipulating the filters that we can hope to bring even a little coherency to what we read.
The next of the five movements is the Mobs movement. Mobs provides five separate data visualizations of the most recent fifteen-hundred feelings. One of the most interesting aspects of the Mobs movement is that it highlights those moments when the filters don’t work, or at least not very well, because of missing metadata.
For instance, clicking the Age visualization tells us that 1,223 (of the most recent 1,500) feelings have no age information attached to them. Similarly, the Location visualization draws attention to the large number of blog posts that lack any metadata regarding their location.
Unlike many other massive datamining projects, say, Google’s Ngram Viewer, We Feel Fine turns its missing metadata into a new source of information. In a kind of playful return of the repressed, the missing metadata is colorfully highlighted—it becomes paradata. The null set finds representation in We Feel Fine.
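To put that idea in slightly more concrete terms, here is a minimal sketch, in Python, of how missing metadata can be tallied and surfaced rather than silently discarded. Everything here is invented for illustration: the field names and sample records are hypothetical and do not come from We Feel Fine’s actual code.

from collections import Counter

def missing_metadata_counts(feelings, fields=("age", "gender", "location", "weather")):
    """Count how many records lack each optional metadata field."""
    counts = Counter()
    for record in feelings:
        for field in fields:
            if record.get(field) is None:
                counts[field] += 1
    return counts

# Hypothetical records: the absences themselves become something to visualize.
sample = [
    {"feeling": "crazy", "age": 24, "location": "Boston"},
    {"feeling": "alive", "age": None, "location": None},
    {"feeling": "lost", "age": None, "location": "Austin"},
]
print(missing_metadata_counts(sample))
# Counter({'gender': 3, 'weather': 3, 'age': 2, 'location': 1})

The point of the sketch is simply that the null values are counted and reported, which is exactly the move that turns missing metadata into paradata.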
The Metrics movement is the fourth movement. And it shows what Kamvar and Harris call the “most salient” feelings, by which they mean “the ways in which a given population differs from the global average.”
Right now, for example, we see that “Crazy” is trending 3.8 times more than normal, while people are feeling “alive” 3.1 times more than usual. (Good for them!) Here again we see an ability to map the local against the global. It addresses what I see as one of the problems of large-scale data visualization projects, like the ones that Lev Manovich calls “cultural analytics.”
Ngram and the like are not forms of distant reading. There’s distant reading, and then there’s simply distance, which is all they offer. We Feel Fine mediates that distance, both visually, and practically.
(And here I was going to also say the following, but I was already in hot water at the conference for my provocations, so I didn’t say it, but I’ll write it here: Cultural analytics echo a totalitarian impulse for precise vision and control over broad swaths of populations.)
And finally, the Mounds movement, which simply shows big piles of emotion, beginning with whatever feeling is the most common at the moment, and moving on down the line towards less common emotions. The Mounds movement is at once the least useful visualization and the most playful, with its globs that jiggle as you move your cursor over them.
(Obviously you can’t see it above, in the static image but…) The mounds convey what game designers call “juiciness.” As Jesper Juul characterizes juiciness, it’s “excessive positive feedback in response to the player’s actions.” Or, as one game designer puts it, a juicy game “will bounce and wiggle and squirt…it feels alive and responds to everything that you do.”
Harris’s work abounds with juicy, playful elements, and they’re not just eye candy. They are part of the interface, part of the design, and they make We Feel Fine welcoming, inviting. You want to spend time with it. Those aren’t characteristics you’d normally associate with a database. And make no mistake about it. We Feel Fine is a database. All of these movements are simply its frontend—a GUI Java applet written in Processing that obscures a very deliberate and structured data flow.
The true heart of We Feel Fine is not the responsive interface, but the 26,000 lines of code running on 5 different servers, and the MySQL database that stores the 10,000 new feelings collected each and every day. In their book, Kamvar and Harris provide an overview of the dozen or so main components that make up We Feel Fine’s backend.
It begins with a URL server that maintains the list of URLs to be crawled and the crawler itself, which runs on a single dedicated server.
Pages retrieved by the crawler are sent to the “Feeling Indexer,” which locates the words “feel” or “feeling” in the blog post. The adjective following “feel” or “feeling” is matched against the “emotional lexicon”—a list of 2,178 feelings that are indexed by We Feel Fine. If the emotion is not in the lexicon, it won’t be saved. That emotion is dead to We Feel Fine. But if the emotion does match the index, the script extracts the sentence with that feeling and any other information available (this is where the gender, location, and date data are parsed).
Next there’s the actual MySQL database, which stores the following fields for each data item: the extracted sentence, the feeling, the date and time, the post URL, the weather, and the gender, age, and location information.
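To make the indexing step a little more tangible, here is a rough sketch of the logic just described. The actual backend is written in Perl; this Python version is my own illustration, and the function name, the regular expression, and the miniature stand-in lexicon are all assumptions rather than anything taken from Kamvar and Harris’s code. It also simplifies the “adjective following the feeling” rule down to “the next word.”

import re

# Hypothetical stand-in for the real emotional lexicon of 2,178 indexed feelings.
EMOTIONAL_LEXICON = {"fine", "alive", "crazy", "lost", "better"}

FEEL_PATTERN = re.compile(r"\bI (?:feel|am feeling) (\w+)", re.IGNORECASE)

def index_feeling(sentence, metadata):
    """Return a database-ready record if the sentence names an indexed feeling, else None."""
    match = FEEL_PATTERN.search(sentence)
    if not match:
        return None
    feeling = match.group(1).lower()
    if feeling not in EMOTIONAL_LEXICON:
        return None  # a feeling outside the lexicon is simply dropped
    return {
        "sentence": sentence,
        "feeling": feeling,
        "date": metadata.get("date"),
        "time": metadata.get("time"),
        "post_url": metadata.get("post_url"),
        "weather": metadata.get("weather"),
        "gender": metadata.get("gender"),
        "age": metadata.get("age"),
        "location": metadata.get("location"),
    }

print(index_feeling("Today I feel alive again.", {"post_url": "http://example.com/post"}))

Notice how much is decided at this step: a sentence either produces a tidy record with these fields, or it vanishes entirely.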
Then there’s an open API server and several other client applications. And finally, we reach the front end.
Now, why have I just taken this detour into the backend of We Feel Fine?
Because, if we pay attention to the hardware and software of We Feel Fine, we’ll notice important details that might otherwise escape us. For example, I don’t know if you noticed from the examples I showed earlier, but all of the sentences in We Feel Fine are stripped of their formatting. This is because the Perl code in the backend converts all of the text to lowercase, removes any HTML tags, and eliminates any non-alphanumeric characters:
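I can’t reproduce the actual Perl here, but a rough Python approximation of those three cleanup steps might look like the following. This is a sketch of the described behavior, not the real code; in particular, whether whitespace survives the scrubbing is my own guess, made so that the words stay separated.

import re

def normalize(text):
    """Lowercase the text, strip HTML tags, and drop non-alphanumeric characters,
    keeping whitespace so that words remain separated."""
    text = text.lower()
    text = re.sub(r"<[^>]+>", " ", text)       # remove HTML tags
    text = re.sub(r"[^a-z0-9\s]", "", text)    # remove non-alphanumeric characters
    return re.sub(r"\s+", " ", text).strip()   # collapse leftover whitespace

print(normalize("I feel <em>SO</em> much better today!!!"))
# -> i feel so much better today

Even in this toy version you can see what is lost: emphasis, punctuation, capitalization, every typographic trace of the original post.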
The algorithm tampers with the data. The code mediates the raw information. In doing so, We Feel Fine makes both an editorial and aesthetic statement.
In fact, once we understand some of the procedural logic of We Feel Fine, we can discover all sorts of ways that the database proves itself to be unreliable.
I’ve already mentioned that if you express a feeling that is not among the 2,178 emotions tabulated, then your feeling doesn’t count. But there’s also the tricky language misdirection the algorithm pulls off, in which the same word is treated by the machine as the same “feeling,” no matter how it is used in the sentence. In this way, the machine exhibits the same kind of “naïve empiricism” (to use Johanna Drucker’s dismissive phrase) that some humanists do when interpreting quantitative data.
And finally, consider many of the images in the Montage movement. When there are multiple images on a blog page, the crawler only grabs the biggest one—and not biggest in dimensions, but biggest in file size, because that’s easier for the algorithm to detect—and this image often ends up being the header image for the blog, rather than connected to the actual feeling itself, as in this example.
The star pattern happens to be a sidebar image, rather than anything associated with the actual blog post that states the feeling:
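As a rough sketch of how such a “biggest file wins” heuristic plays out, consider the following. The URLs and byte counts are invented for illustration, and the real crawler’s image-handling code is not something I can show; the sketch only captures the selection rule described above.

def pick_largest_image(image_sizes):
    """Given a mapping of image URL to file size in bytes, return the URL of the
    largest file, or None if the page has no images."""
    if not image_sizes:
        return None
    return max(image_sizes, key=image_sizes.get)

# Invented example: the decorative sidebar graphic happens to be the largest file,
# so it gets attached to the feeling even though it has nothing to do with the post.
page_images = {
    "http://example.com/sidebar-stars.png": 184_000,
    "http://example.com/photo-in-post.jpg": 92_000,
}
print(pick_largest_image(page_images))  # -> http://example.com/sidebar-stars.png

File size is a proxy for relevance, and often a bad one, which is precisely the point.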
So We Feel Fine forces associations. In experimental poetry or electronic literature communities, these kinds of random associations are celebrated. The procedural creation of art, literature, or music has a long tradition.
But in a database that seeks to be a representative “almanac of human emotions”? We’re in new territory there.
But in fact, it is representative, in the sense that human emotions are fungible, ephemeral, disjunctive, and, let’s face it, sometimes random.
Let me bring this full circle, by returning to the revised title of my talk. I mentioned at the beginning that I felt this low-grade but pervasive concern about digital work these past few days at STS. I’ve heard questions like Are we doing everything we can to make digital editions accessible, legible, readable, and teachable? Where are we failing, some people have wondered. Why are we failing? Or at least, Why have we not yet reached the level of success that many of the very same people at this conference were predicting ten or fifteen or, dare I say it, twenty years ago?
Maybe because we’re doing it wrong.
I want to propose that we can learn a lot from We Feel Fine as we exit out the far end of what some media scholars have called the Gutenberg Parenthesis.
What can we learn from We Feel Fine?
Imagine if textual scholars built their digital editions and archives using these four principles.
Think about We Feel Fine and what makes it work. Most importantly, We Feel Fine is a compelling reading experience. It’s not daunting. There’s a playful balance between interactivity and narrative coherence.
Secondly, and this goes back to my idea of paradata: Harris and Kamvar are not afraid to corrupt the source data, or to create metadata that blurs the line between metadata and not-metadata. They are not afraid to play with their sources, and for the most part, they are up front about how they’re playing with them.
This relates to the third feature of We Feel Fine that we should learn from. It’s open. Some of the source code is available. The list of emotions is available. There’s an open API, which anyone can use to build their own application on top of We Feel Fine, or, more generally, to extract data from the project.
And finally, it’s juicy. I admit, this is probably not a term many textual scholars use in their research, but it’s essential for the success of We Feel Fine. The text responds to you. It’s alive in your hands, and I don’t think there’s much more we could ever ask from a text.
Drucker, Johanna. 2010. “Humanistic Approaches to the Graphical Expression of Interpretation” presented at the Hyperstudio: Digital Humanities at MIT, May 20, Cambridge, MA. http://mitworld.mit.edu/video/796.
Genette, Gérard. 1997. Paratexts: Thresholds of Interpretation. Cambridge: Cambridge University Press.
This special issue of DHQ invites essays that consider the study of literature and the category of the literary to be an essential part of the digital humanities. We welcome essays that consider how digital technologies affect our understanding of the literary—its aesthetics, its history, its production and dissemination processes, and also the traditional practices we use to critically analyze it. We also seek critical reflections on the relationships between traditional literary hermeneutics and larger-scale humanities computing projects. What is the relationship between literary study and the digital humanities, and what should it be? We welcome essays that approach this topic from a wide range of critical perspectives and that focus on diverse objects of study from antiquity to the present as well as born-digital forms.
Please submit an abstract of no more than 1,000 words and a short CV to Jessica Pressman and Lisa Swanstrom at <DHQliterary@gmail.com> by Feb. 15, 2011. We will reply by March 15, 2011 and request that full-length papers of no more than 9,000 words be submitted by *July 15, 2011*.