Here is a list of more or less digitally-oriented sessions at the upcoming Modern Language Association convention. These sessions address digital culture, digital tools, and digital methodology, played out across the domains of research, pedagogy, and scholarly communication. If I’ve overlooked a session, let me know in the comments. You might also be interested in my short reflection on how the 2015 program stacks up against previous MLA programs.
Since 2009 I’ve been compiling an annual list of more or less digitally-oriented sessions at the Modern Language Association convention. This is the list for 2015. These sessions address digital culture, digital tools, and digital methodology, played out across the domains of research, teaching, and scholarly communication. For the purposes of my annual lists I clump these varied approaches and objects of study into a single contested term, the digital humanities (DH).
DH sessions at the 2015 convention make up 7 percent of overall sessions, down from a 9 percent high last year. Here’s what the trend looks like over the past 6 MLA conventions (there was no convention in 2010, the year the conference switched from late December to early January):
This is a list of digitally-inflected sessions at the 2014 Modern Language Association Convention (Chicago, January 9-12). These sessions in some way address digital tools, objects, and practices in language, literary, textual, cultural, and media studies. The list also includes sessions about digital pedagogy and scholarly communication. The list stands at 78 entries, making up less than 10% of the total 810 convention slots. Please leave a comment if this list is missing any relevant sessions.
I recently proposed a sequence of lightning talks for the next Modern Language Association convention in Chicago (January 2014). The participants are tackling a literary issue that is not at all theoretical: the future of electronic literature. I’ve also built in a substantial amount of time for an open discussion between the audience and my participants—who are all key figures in the world of new media studies. And I’m thrilled that two of them—Dene Grigar and Stuart Moulthrop—just received an NEH grant dedicated to a similar question, which is documenting the experience of early electronic literature.
Electronic literature can be broadly conceived as literary works created for digital media that in some way take advantage of the unique affordances of those technological forms. Hallmarks of electronic literature (e-lit) include interactivity, immersiveness, fluidly kinetic text and images, and a reliance on the procedural and algorithmic capabilities of computers. Unlike the avant garde art and experimental poetry that is its direct forebear, e-lit has been dominated for much of its existence by a single, proprietary technology: Adobe’s Flash. For fifteen years, many e-lit authors have relied on Flash—and its earlier iteration, Macromedia Shockwave—to develop their multimedia works. And for fifteen years, readers of e-lit have relied on Flash running in their web browsers to engage with these works.
Flash is dying, though. Apple does not allow Flash on its wildly popular iPhones and iPads. Android no longer supports Flash on its smartphones and tablets. Even Adobe itself has stopped throwing its weight behind Flash. Flash is dying. And with it dies, potentially, an entire generation of e-lit work that cannot be accessed without Flash. The slow death of Flash also leaves a host of authors who can no longer create in their chosen medium. It’s as if a novelist were told that she could no longer use a word processor—indeed, no longer even use words.
Attention artists, creators, theorists, teachers, curators, and archivists of electronic literature!
I’m putting together an e-lit roundtable for the Modern Language Association Convention in Chicago next January. The panel will be “Electronic Literature after Flash” and I’m hoping to have a wide range of voices represented. See the full CFP for more details. Abstracts due March 15, 2013.
Seeking to have a rich discussion period—which we did indeed have—we limited our talks to about 12 minutes each. My presentation was therefore more evocative than comprehensive, more open-ended than conclusive. There are primary sources I’m still searching for and technical details I’m still sorting out. I welcome feedback, criticism, and leads.
An Account of Randomness in Literary Computing
MLA 2013, Boston
There’s a very simple question I want to ask this evening:
Where does randomness come from?
Randomness has a rich history in arts and literature, which I don’t need to go into today. Suffice it to say that long before Tristan Tzara suggested writing a poem by pulling words out of a hat, artists, composers, and writers have used so-called “chance operations” to create unpredictable, provocative, and occasionally nonsensical work. John Cage famously used chance operations in his experimental compositions, relying on lists of random numbers from Bell Labs to determine elements like pitch, amplitude, and duration (Holmes 107–108). Jackson Mac Low similarly used random numbers to generate his poetry, in particular relying on a book called A Million Random Digits with 100,000 Normal Deviates to supply him with the random numbers (Zweig 85).
Published by the RAND Corporation in 1955 to supply Cold War scientists with random numbers to use in statistical modeling (Bennett 135), the book is still in print—and you should check out the parody reviews on Amazon.com. “With so many terrific random digits,” one reviewer jokes, “it’s a shame they didn’t sort them, to make it easier to find the one you’re looking for.”
This joke actually speaks to a key aspect of randomness: the need to reuse random numbers, so that if, say, you’re running a simulation of nuclear fission, you can repeat the simulation with the same random numbers—that is, the same probabilities—while testing some other variable. In fact, most of the early work on random number generation in the United States was funded by either the U.S. Atomic Energy Commission or the U.S. military (Montfort et al. 128). The RAND Corporation itself began as a research and development arm of the U.S. Air Force.
Now the thing with going down a list of random numbers in a book, or pulling words out of a hat—a composition method, by the way, that Thom Yorke used for Kid A after a frustrating bout of writer’s block—is that the process is visible. Randomness in these cases produces surprises, but the source of randomness itself is not a surprise. You can see how it’s done.
What I want to ask here today is, where does randomness come from when it’s invisible? What’s the digital equivalent of pulling words out of a hat? And what are the implications of chance operations performed by a machine?
To begin to answer these questions I am going to look at two early works of electronic literature that rely on chance operations. And when I say early works of electronic literature, I mean early, from fifty and sixty years ago. One of these works has been well studied and the other has been all but forgotten.
My first case study is the Strachey Love Letter Generator. Programmed by Christopher Strachey, a close friend of Alan Turing, the Love Letter Generator is likely—as Noah Wardrip-Fruin argues—the first work of electronic literature, which is to say a digital work that somehow makes us more aware of language and meaning-making. Strachey’s program “wrote” a series of purplish prose love letters on the Ferranti Mark I Computer—the first commercially available computer—at Manchester University in 1952 (Wardrip-Fruin “Digital Media” 302):
YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. YOU ARE MY WISTFUL SYMPATHY: MY TENDER LIKING.
M. U. C.
Affectionately known as M.U.C., the Manchester University Computer could produce these love letters at a pace of one per minute, for hours on end, without producing a duplicate.
The “trick,” as Strachey put it in a 1954 essay about the program (29–30), is its two template sentences (My [adjective] [noun] [adverb] [verb] your [adjective] [noun] and You are my [adjective] [noun]), in which the nouns, adjectives, and adverbs are randomly selected from a list of words Strachey had culled from a Roget’s thesaurus. Adverbs and adjectives randomly drop out of the sentence as well, and the computer randomly alternates the two sentences.
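Strachey’s combinatory trick is simple enough to sketch in a few lines of modern code. This is not his Mark I program, of course, only a minimal Python analogue; the word lists below are invented stand-ins, not the vocabulary he actually culled from the thesaurus.

```python
import random

# Invented stand-in word lists; Strachey's were culled from a Roget's thesaurus.
ADJECTIVES = ["avid", "wistful", "passionate", "tender", "fervent"]
NOUNS = ["affection", "liking", "sympathy", "heart", "wish"]
ADVERBS = ["curiously", "ardently", "keenly"]
VERBS = ["clings to", "yearns for", "woos", "treasures"]

def maybe(words):
    """Randomly include or drop the optional adjective/adverb slot."""
    return random.choice(words) + " " if random.random() < 0.5 else ""

def sentence():
    """Randomly alternate between the two template sentences."""
    if random.random() < 0.5:
        return (f"My {maybe(ADJECTIVES)}{random.choice(NOUNS)} "
                f"{maybe(ADVERBS)}{random.choice(VERBS)} "
                f"your {maybe(ADJECTIVES)}{random.choice(NOUNS)}.")
    return f"You are my {maybe(ADJECTIVES)}{random.choice(NOUNS)}."

def love_letter(n=4):
    """Assemble a short letter and sign it, as the Mark I did."""
    return " ".join(sentence() for _ in range(n)).upper() + "\n\nM. U. C."

print(love_letter())
```

Even this toy version makes the Tale-Spin effect visible: a trivially short program, two templates, and a handful of word lists yield output whose variety belies the simplicity of the machinery.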
The Love Letter Generator has attracted—for a work of electronic literature—a great deal of scholarly attention. Using Strachey’s original notes and source code (see figure to the left), which are archived at the Bodleian Library at the University of Oxford, David Link has built an emulator that runs Strachey’s program, and Noah Wardrip-Fruin has written a masterful study of both the generator and its historical context.
As Wardrip-Fruin calculates, given that there are 31 possible adjectives after the first sentence’s opening possessive pronoun “My” and 20 possible nouns that could occupy the following slot, the first three words of this sentence alone have 899 possibilities. And the entire sentence has over 424 million combinations (424,305,525 to be precise) (“Digital Media” 311).
On the whole, Strachey was publicly dismissive of his foray into the literary use of computers. In his 1954 essay, which appeared in the prestigious trans-Atlantic arts and culture journal Encounter (a journal, it would be revealed in the late 1960s, that was primarily funded by the CIA—see Berry, 1993), Strachey used the example of the love letters to illustrate his point that simple rules can generate diverse and unexpected results (Strachey 29-30). And indeed, the Love Letter Generator qualifies as an early example of what Wardrip-Fruin calls, referring to a different work entirely, the Tale-Spin effect: a surface illusion of simplicity which hides a much more complicated—and often more interesting—series of internal processes (Expressive Processing 122).
Wardrip-Fruin coined this term—the Tale-Spin effect—from Tale-Spin, an early story generation system designed by James Meehan at Yale University in 1976. Tale-Spin tended to produce flat, plodding narratives, though there was the occasional existential story:
Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. He was unable to call for help. He drowned.
But even in these suggestive cases, the narratives give no sense of the process-intensive—to borrow from Chris Crawford—calculations and assumptions occurring behind the interface of Tale-Spin.
In a similar fashion, no single love letter reveals the combinatory procedures at work inside the Mark I computer.
MY AFFECTION LUSTS FOR YOUR TENDERNESS. YOU ARE MY PASSIONATE DEVOTION: MY WISTFUL TENDERNESS. MY LIKING WOOS YOUR DEVOTION. MY APPETITE ARDENTLY TREASURES YOUR FERVENT HUNGER.
M. U. C.
This Tale-Spin effect—the underlying processes obscured by the seemingly simplistic, even comical surface text—is what draws Wardrip-Fruin to the work. But I want to go deeper than the algorithmic process that can produce hundreds of millions of possible love letters. I want to know, what is the source of randomness in the algorithm? We know Strachey’s program employs randomness, but where does that randomness come from? This is something the program—the source code itself—cannot tell us, because randomness operates at a different level, not at the level of code or software, but in the machine itself, at the level of hardware.
In the case of Strachey’s Love Letter Generator, we must consider the computer it was designed for, the Mark I. One of the remarkable features of this computer was that it had a hardware-based random number generator. The random number generator pulled a string of random numbers from what Turing called “resistance noise”—that is, electrical signals produced by the physical functioning of the machine itself—and put the twenty least significant digits of this number into the Mark I’s accumulator—its primary mathematical engine (Turing). Alan Turing himself specifically requested this feature, having theorized with his earlier Turing Machine that a purely logical machine could not produce randomness (Shiner). And Turing knew—like his Cold War counterparts in the United States—that random numbers were crucial for any kind of statistical modeling of nuclear fission.
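Turing’s hardware instruction has a loose modern analogue worth sketching: draw a value from a physical entropy source and keep only its twenty least significant binary digits. The Python below is an illustration only; it substitutes the operating system’s entropy pool for the Mark I’s resistance noise, and the function name and masking details are my own.

```python
import os

def random_20_bits():
    """Draw a number from the operating system's entropy pool (a modern
    stand-in for the Mark I's 'resistance noise') and keep only its
    twenty least significant binary digits."""
    value = int.from_bytes(os.urandom(4), "big")  # 32 bits of physical entropy
    return value & ((1 << 20) - 1)  # mask to 20 bits: 0 through 1,048,575

# Each call is genuinely unpredictable--and, crucially, unrepeatable.
sample = [random_20_bits() for _ in range(5)]
```

The crucial property of such a source is the one that will matter below: because the numbers come from physical noise rather than an algorithm, there is no seed to record and no way to play the same sequence back.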
I have more to say about randomness in the Strachey Love Letter Generator, but before I do, I want to move to my second case study. This is an early, largely unheralded work called SAGA. SAGA was a script-writing program on the TX-0 computer. The TX-0 was the first computer to replace vacuum tubes with transistors and also the first to use interactive graphics—it even had a light pen.
The TX-0 was built at Lincoln Laboratory in 1956—a classified MIT facility in Bedford, Massachusetts chartered with the mission of designing the nation’s first air defense detection system. After TX-0 proved that transistors could out-perform and outlast vacuum tubes, the computer was transferred to MIT’s Research Laboratory of Electronics in 1958 (McKenzie), where it became a kind of playground for the first generation of hackers (Levy 29-30).
In 1960, CBS broadcast an hour-long special about computers called “The Thinking Machine.” For the show MIT engineers Douglas Ross and Harrison Morse wrote a 13,000-line program in six weeks that generated a climactic shoot-out scene from a Western.
Several computer-generated variations of the script were performed on the CBS program. As Ross told the story years later, “The CBS director said, ‘Gee, Westerns are so cut and dried couldn’t you write a program for one?’ And I was talked into it.”
The TX-0’s large—for the time period—magnetic core memory was used “to keep track of everything down to the actors’ hands.” As Ross explained it, “The logic choreographed the movement of each object, hands, guns, glasses, doors, etc.” (“Highlights from the Computer Museum Report”).
And here is the actual output from the TX-0, printed on the lab’s Flexowriter printer, where you can get a sense of the way SAGA generated the play:
In the CBS broadcast, Ross explained the narrative sequence as a series of forking paths.
Each “run” of SAGA was defined by sixteen initial state variables, with each state having several weighted branches (Ross 2). For example, one of the initial settings is who sees whom first. Does the sheriff see the robber first or is it the other way around? This variable will influence who shoots first as well.
There’s also a variable the programmers called the “inebriation factor,” which increases a bit with every shot of whiskey, and doubles for every swig straight from the bottle. The more the robber drinks, the less logical he will be. In short, every possibility has its own likely consequence, measured in terms of probability.
The MIT engineers had a mathematical formula for this probability (Ross 2).
But more revealing to us is the procedure itself of writing one of these Western playlets.
First, a random number was set; this number determined the probability of the various weighted branches. The programmers did this simply by typing a number after the RUN command when they launched SAGA; you can see this in the second slide above, where the random number is 51455. Next, a timing number established how long the robber is alone before the sheriff arrives (the longer the robber is alone, the more likely he’ll drink). Finally, each state variable was read, and the outcome—or branch—of each step was determined.
What I want to call your attention to is how the random number is not generated by the machine. It is entered “by hand” when one “runs” the program. In fact, launching SAGA with the same random number and the same switch settings will reproduce a play exactly (Ross 2).
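That reproducibility is easy to demonstrate with a toy sketch of SAGA-style weighted branching. Everything here is invented for illustration (the branch weights, the inebriation rule, the playlet text); only the principle is Ross’s: seeding the run with a hand-entered random number makes the same play come out every time.

```python
import random

def toy_saga(seed, shots_of_whiskey=0):
    """A toy SAGA run: a hand-entered random number seeds the branching,
    so the same seed and settings always reproduce the same playlet."""
    rng = random.Random(seed)  # the number typed after RUN, e.g. 51455
    # Invented weighting: each shot of whiskey makes a drink more likely.
    inebriation = 0.1 * shots_of_whiskey
    script = []
    # One of the initial state variables: who sees whom first.
    who_sees_first = rng.choices(["SHERIFF", "ROBBER"], weights=[1, 1])[0]
    script.append(f"{who_sees_first} SEES THE OTHER FIRST.")
    if rng.random() < 0.5 + inebriation:
        script.append("ROBBER GOES TO TABLE. TAKES A DRINK.")
    # Seeing the other first weights the branch for who shoots first.
    shooter = (who_sees_first if rng.random() < 0.7
               else ("ROBBER" if who_sees_first == "SHERIFF" else "SHERIFF"))
    script.append(f"{shooter} DRAWS. FIRES.")
    return " / ".join(script)

# Same random number, same switch settings: the play is reproduced exactly.
assert toy_saga(51455, 2) == toy_saga(51455, 2)
```

The design choice matters: because the seed comes from outside the machine, SAGA’s randomness is repeatable on demand, which is exactly what the Mark I’s noise-based generator could not offer.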
In a foundational 1996 work called The Virtual Muse, Charles Hartman observed that randomness “has always been the main contribution that computers have made to the writing of poetry”—and one might be tempted to add, to electronic literature in general (Hartman 30). Yet the two case studies I have presented today complicate this notion. The Strachey Love Letter Generator would appear to exemplify the use of randomness in electronic literature. But—and I didn’t say this earlier—the random numbers generated by the Mark I’s method could not be reliably reproduced; remember, random numbers often need to be reused, so that the programs that run them can be repeated. Reproducible sequences that merely behave randomly are called pseudo-random. This is why a book like the RAND Corporation’s A Million Random Digits is so valuable.
But the Mark I’s random numbers were so unreliable that they made debugging programs difficult, because errors never occurred the same way twice. The random number instruction eventually fell out of use on the machine (Campbell-Kelly 136). Skip ahead 8 years to the TX-0 and we find a computer that doesn’t even have a random number generator. The random numbers must be entered manually.
The examples of the Love Letters and SAGA suggest at least two things about the source of randomness in literary computing. First, there is a social-historical source; wherever you look at randomness in early computing, the Cold War is there. The impact of the Cold War upon computing and videogames has been well documented (see, for example, Edwards 1996 and Crogan 2011), but few have studied how deeply embedded the Cold War is in the very software algorithms and hardware processes of modern computing.
Second, randomness does not have a progressive timeline. The story of randomness in computing—and especially in literary computing—is neither straightforward nor self-evident. Its history is uneven, contested, and mostly invisible. So that even when we understand the concept of randomness in electronic literature—and new media in general—we often misapprehend its source.
Bennett, Deborah. Randomness. Cambridge, MA: Harvard University Press, 1998. Print.
Turing, A.M. “Programmers’ Handbook for the Manchester Electronic Computer Mark II.” Oct. 1952. Web. 23 Dec. 2012.
Wardrip-Fruin, Noah. “Digital Media Archaeology: Interpreting Computational Processes.” Media Archaeology: Approaches, Applications, and Implications. Ed. Erkki Huhtamo and Jussi Parikka. Berkeley: University of California Press, 2011. Print.
—. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, MA: MIT Press, 2009. Print.
Zweig, Ellen. “Jackson Mac Low: The Limits of Formalism.” Poetics Today 3.3 (1982): 79–86. Web. 1 Jan. 2013.
What follows is a comprehensive list of digital humanities sessions at the 2013 Modern Language Association Convention in Boston.
These are sessions that in some way address the influence and impact of digital materials and tools upon language, literary, textual, and media studies, as well as upon online pedagogy and scholarly communication. The 2013 list stands at 66 sessions, a slight increase from 58 sessions in 2012 (and 44 in 2011, and only 27 the year before). Perhaps the incremental increase this year means that the digital humanities presence at the convention is topping out, leveling off at 8% of the 795 total sessions. Or maybe it’s an indicator of growing resistance to what some see as the hegemony of digital humanities. Or it could be that I simply missed some sessions—if so, please correct me in the comments and I’ll add the session to the list.
Presiding: Brian Croxall, Emory Univ.; Adeline Koh, Richard Stockton Coll. of New Jersey
This workshop is an "unconference" on digital pedagogy. Unconferences are participant-driven gatherings where attendees spontaneously generate the itinerary. Participants will propose discussion topics in advance on our Web site, voting on final sessions at the workshop’s start. Attendees will consider what they would like to learn and instruct others about teaching with technology. Preregistration required.
Thursday, 3 January, 8:30–11:30 a.m., Republic A, Sheraton
Presiding: Alison Byerly, Middlebury Coll.; Kathleen Fitzpatrick, MLA; Katherine A. Rowe, Bryn Mawr Coll.
Facilitated discussion about evaluating work in digital media (e.g., scholarly editions, databases, digital mapping projects, born-digital creative or scholarly work). Designed for both creators of digital materials and administrators or colleagues who evaluate those materials, the workshop will propose strategies for documenting, presenting, and evaluating such work. Preregistration required.
Presiding: Trent M. Kays, Univ. of Minnesota, Twin Cities; Lee Skallerup Bessette, Morehead State Univ.
Speakers: Marc Fortin, Queen’s Univ.; Alexander Gil, Univ. of Virginia; Brian Larson, Univ. of Minnesota, Twin Cities; Sophie Marcotte, Concordia Univ.; Ernesto Priego, London, England
Digital humanities are often seen to be a monolith, as shown in recent publications that focus almost exclusively on the United States and English-language projects. This roundtable will bring together digital humanities scholars from seemingly disparate disciplines to show how bridges can be built among languages, cultures, and geographic regions in and through digital humanities.
Presiding: Robert R. Bleil, Coll. of Coastal Georgia; Jennifer Gray, Coll. of Coastal Georgia
Speakers: Susan Cook, Southern New Hampshire Univ.; Christopher Dickman, Saint Louis Univ.; T. Geiger, Syracuse Univ.; Jennifer Gray; Matthew Parfitt, Boston Univ.; James Sanchez, Texas Christian Univ.
Responding: Robert R. Bleil
Nicholas Carr’s 2008 article "Is Google Making Us Stupid?" and his 2010 book The Shallows: What the Internet Is Doing to Our Brains argue that the paradigms of our digital lives have shifted significantly in two decades of living life online. This roundtable unites teachers of composition and literature to explore cultural, psychological, and developmental changes for students and teachers.
Speakers: Robin Bernstein, Harvard Univ.; Lindsay DiCuirci, Univ. of Maryland Baltimore County; Laura Fisher, New York Univ.; Laurie Lambert, New York Univ.; Janice A. Radway, Northwestern Univ.; Joseph Rezek, Boston Univ.
Archivally driven research is changing the methodologies with which we approach the past, the types of questions that we can ask and answer, and the historical voices that are heard and suppressed. The session will address the role of archives, both digital and material, in literary and cultural studies. What risks and rewards do we need to be aware of when we use them?
Thursday, 3 January, 5:15–6:30 p.m., Liberty C, Sheraton
Presiding: Andrew Piper, McGill Univ.
Speakers: Mark Algee-Hewitt, Stanford Univ.; Lindsey Eckert, Univ. of Toronto; Neil Fraistat, Univ. of Maryland, College Park; Matthew Jockers, Univ. of Nebraska, Lincoln; Laura C. Mandell, Texas A&M Univ., College Station; Jeffrey Thompson Schnapp, Harvard Univ.
As part of the ongoing debate about the impact and efficacy of the digital humanities, this roundtable will explore the theoretical, practical, and political implications of the rise of the literary lab. How will changes in the materiality and spatiality of our research and writing change the nature of that research? How will the literary lab impact the way we work?
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Jamie Skye Bianco, Univ. of Pittsburgh; Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Jennifer Laherty, Indiana Univ., Bloomington; Monica McCormick, New York Univ.; Katie Rawson, Emory Univ.
As open-access scholarly publishing matures and movements such as the Elsevier boycott continue to grow, open-access publications have begun to move beyond the simple (but crucial) principle of openness toward an ideal of interactivity. This session will explore innovative examples of open-access scholarly publishing that showcase new types of social, interactive, mixed-media texts.
Presiding: Alex Mueller, Univ. of Massachusetts, Boston
Speakers: Kathleen Fitzpatrick, MLA; Martin Foys, Drew Univ.; Matthew Kirschenbaum, Univ. of Maryland, College Park; Stephen G. Nichols, Johns Hopkins Univ., MD; Kathleen A. Tonry, Univ. of Connecticut, Storrs; Sarah Werner, Folger Shakespeare Library
In this roundtable, scholars of manuscripts, print, and digital media will discuss how contemporary forms of textuality intersect with, duplicate, extend, or draw on manuscript technologies. Panelists seek to push the discussion beyond traditional notions of supersession or remediation to consider the relevance of past textual practices in our analyses of emergent ones.
Presiding: Adeline Koh, Richard Stockton Coll. of New Jersey
Speakers: Moya Bailey, Emory Univ.; Anne Cong-Huyen, Univ. of California, Santa Barbara; Hussein Keshani, Univ. of British Columbia; Maria Velazquez, Univ. of Maryland, College Park
Responding: Alondra Nelson, Columbia Univ.
This panel examines the politics of race, ethnicity, and silence in the digital humanities. How has the digital humanities remained silent on issues of race and ethnicity? How does this silence reinforce unspoken assumptions and doxa? What is the function of racialized silences in digital archival projects?
Speakers: Travis Brown, Univ. of Maryland, College Park; Johanna Drucker, Univ. of California, Los Angeles; Eric Rochester, Univ. of Virginia; Geoffrey Rockwell, Univ. of Alberta; Jentery Sayers, Univ. of Victoria; Susan Schreibman, Trinity Coll. Dublin
Working only with set texts limits the use of many digital tools. What most advances literary research: aiming applications at scholarly primitives or at more culturally embedded activities that may resist generalization? Panelists’ reflections on the challenges of interoperability in a methodologically diverse field will include project snapshots evaluating the potential or perils of such aims.
Presiding: Jason C. Rhody, National Endowment for the Humanities
This workshop will highlight recent awards and outline current funding opportunities. In addition to emphasizing grant programs that support individual and collaborative research and education, the workshop will include information on the NEH’s Office of Digital Humanities. A question-and-answer period will follow.
Friday, 4 January, 1:45–3:00 p.m., Back Bay D, Sheraton
Presiding: Richard A. Grusin, Univ. of Wisconsin, Milwaukee
Speakers: Wendy H. Chun, Brown Univ.; Richard A. Grusin; Patrick Jagoda, Univ. of Chicago; Tara McPherson, Univ. of Southern California; Rita Raley, Univ. of California, Santa Barbara
This roundtable explores the impact of digital humanities on research and teaching in higher education and the question of how digital humanities will affect the future of the humanities in general. Speakers will offer models of digital humanities that are not rooted in technocratic rationality or neoliberal economic calculus but that emerge from and inform traditional practices of humanist inquiry.
Friday, 4 January, 1:45–3:00 p.m., Fairfax A, Sheraton
Presiding: Stephen G. Nichols, Johns Hopkins Univ., MD
Speakers: Karen L. Fresco, Univ. of Illinois, Urbana; Albert Lloret, Univ. of Massachusetts, Amherst; Jacques Neefs, Johns Hopkins Univ., MD
Responding: Timothy L. Stinson, North Carolina State Univ.
This panel explores editors’ resistance to digital editions. Questions posed: Do scholarly protocols deliberately resist computational methodologies? Or are we still in a liminal period where print predominates for lack of training in the new technology? Does the problem lie with a failure to encourage digital research by younger scholars?
Presiding: Michael Bérubé, Penn State Univ., University Park
"The Mirror and the LAMP," Matthew Kirschenbaum, Univ. of Maryland, College Park
"Access Demands a Paradigm Shift," Cathy N. Davidson, Duke Univ.
"Resistance in the Materials," Bethany Nowviskie, Univ. of Virginia
The news that digital humanities are the next big thing must come as a pleasant surprise to people who have been working in the field for decades. Yet only recently has the scholarly community at large realized that developments in new media have implications not only for the form but also for the content of scholarly communication. This session will explore some of those implications—for scholars, for libraries, for journals, and for the idea of intellectual property.
Friday, 4 January, 5:15–6:30 p.m., Back Bay D, Sheraton
Presiding: Russell A. Berman, Stanford Univ.
Speakers: Carlos J. Alonso, Columbia Univ.; Lanisa Kitchiner, Howard Univ.; David Laurence, MLA; Bethany Nowviskie, Univ. of Virginia; Elizabeth M. Schwartz, San Joaquin Delta Coll., CA; Sidonie Ann Smith, Univ. of Michigan, Ann Arbor; Kathleen Woodward, Univ. of Washington, Seattle
Doctoral study faces multiple pressures, including profound transformations in higher education and the academic job market, changing conditions for new faculty members, the new media of scholarly communication, and placements in nonfaculty positions. These and other factors question the viability of conventional assumptions regarding doctoral education.
Friday, 4 January, 7:00–8:15 p.m., Back Bay D, Sheraton
Presiding: Peter S. Donaldson, Massachusetts Inst. of Tech.
Global Shakespeares (globalshakespeares.org/) is a participatory multicentric project providing free online access to performances of Shakespeare from many parts of the world. The session features presentations and free lab tours of the MIT HyperStudio.
Presiding: Ryan Cordell, Northeastern Univ.; Katherine Singer, Mount Holyoke Coll.
Speakers: Gert Buelens, Ghent Univ.; Sheila T. Cavanagh, Emory Univ.; Malcolm Alan Compitello, Univ. of Arizona; Gabriel Hankins, Univ. of Virginia; Alexander C. Y. Huang, George Washington Univ.; Kevin Quarmby, Emory Univ.; Lynn Ramey, Vanderbilt Univ.; Matthew Schultz, Vassar Coll.
This digital roundtable aims to give insight into challenges and opportunities for new digital humanists. Instead of presenting polished projects, panelists will share their experiences as developing DH practitioners working through research and pedagogical obstacles. Each participant will present lightning talks and then discuss the projects in more detail at individual tables.
Saturday, 5 January, 8:30–9:45 a.m., Public Garden, Sheraton
Presiding: Ana-Maria Medina, Metropolitan State Coll. of Denver
Speakers: Lois Bacon, EBSCO; Marshall J. Brown, Univ. of Washington, Seattle; Stuart Alexander Day, Univ. of Kansas; Judy Luther, Informed Strategies; Dana D. Nelson, Vanderbilt Univ.; Joseph Paul Tabbi, Univ. of Illinois, Chicago; Bonnie Wheeler, Southern Methodist Univ.
Changes are happening to the scholarly journal, a fundamental institution of our professional life. New modes of communication open promising possibilities, even as financial challenges to print media and education make this time difficult. A panel of editors, publishers, and librarians will address these topics, carrying forward a discussion begun at the 2012 Delegate Assembly meeting.
Speakers: Evelyn Baldwin, Univ. of Arkansas, Fayetteville; Mikhail Gershovich, Baruch Coll., City Univ. of New York; Janice McCoy, Univ. of Virginia; Ilknur Oded, Defense Lang. Inst.; Amanda Phillips, Univ. of California, Santa Barbara; Anastasia Salter, Univ. of Baltimore; Elizabeth Swanstrom, Florida Atlantic Univ.
This electronic roundtable presents games not only as objects of study but also as methods for innovative pedagogy. Scholars will present on their use of board games, video games, authoring tools, and more for language acquisition, peer-to-peer relationship building, and exploring social justice. This hands-on, show-and-tell session highlights assignments attendees can implement.
Presiding: Claudia Cabello-Hutt, Univ. of North Carolina, Greensboro; Marcy Ellen Schwartz, Rutgers Univ., New Brunswick
Speakers: Daniel Balderston, Univ. of Pittsburgh; Maria Laura Bocaz, Univ. of Mary Washington; Claudia Cabello-Hutt; Alejandro Herrero-Olaizola, Univ. of Michigan, Ann Arbor; Veronica A. Salles-Reese, Georgetown Univ.; Marcy Ellen Schwartz; Vicky Unruh, Univ. of Kansas
This roundtable will explore renewed interest in Latin American archives—both traditional and digital—and the intellectual, political, and social implications for our research and teaching. Presenters will address how new technologies (digitized collections, hypertext manuscripts, etc.) facilitate access to research and offer strategies for introducing students to a variety of materials.
Speakers: Sarah J. Arroyo, California State Univ., Long Beach; R. Scot Barnett, Clemson Univ.; Ron C. Brooks, Oklahoma State Univ., Stillwater; Geoffrey V. Carter, Saginaw Valley State Univ.; Anthony Collamati, Clemson Univ.; Jason Helms, Univ. of Kentucky; Alexandra Hidalgo, Purdue Univ., West Lafayette; Robert Leston, New York City Coll. of Tech., City Univ. of New York
This roundtable will present separate, yet unified, digital writings on laptops. Instead of making a diachronic set of presentations, we will make available a synchronic set, in an art e-gallery format, arranged separately on tables as conceptual art installations. The purpose is to demonstrate how digital technologies can reshape our views of presentations and of what is now called writing.
Saturday, 5 January, 1:45–3:00 p.m., Back Bay D, Sheraton
Presiding: Paul Fyfe, Florida State Univ.; Robert H. Kieft, Occidental Coll.
Speakers: Tanya E. Clement, Univ. of Texas, Austin; Rachel Donahue, Univ. of Maryland, College Park; Kari M. Kraus, Univ. of Maryland, College Park; John Merritt Unsworth, Brandeis Univ.; John A. Walsh, Indiana Univ., Bloomington
This roundtable extends current conversations about reforming graduate training to a burgeoning field of disciplinary crossover and professionalization. Participants will introduce innovative training programs and collaborative projects at the intersections of modern language departments, digital humanities, and library schools or iSchools.
Saturday, 5 January, 1:45–3:00 p.m., Liberty A, Sheraton
Presiding: Elizabeth M. Schwartz, San Joaquin Delta Coll., CA
"Peer Review 2.0: Using Digital Technologies to Transform Student Critiques," Elizabeth Harris McCormick, LaGuardia Community Coll., City Univ. of New York; Lykourgos Vasileiou, LaGuardia Community Coll., City Univ. of New York
"How I Met Your Argument: Teaching through Television," Lanta Davis, Baylor Univ.
"Writing Wikipedia as Postmodern Research Assignment," Matthew Parfitt, Boston Univ.
"Weaning Isn’t Everything: Beyond Postformalism in Composition," Miles McCrimmon, J. Sargeant Reynolds Community Coll., VA
Speakers: David Kim, Univ. of California, Los Angeles; Jennifer Sano-Franchini, Michigan State Univ.; Lee Skallerup Bessette, Morehead State Univ.
Responding: Tara McPherson, Univ. of Southern California
This roundtable addresses how applications and interfaces encode specific cultural assumptions about race and preclude certain groups of people from participating in the digital humanities. Participants present specific digital humanities projects that illustrate the impact of race on access to the programming, cultural, and funding structures in the digital humanities.
Saturday, 5 January, 3:30–4:45 p.m., The Fens, Sheraton
Presiding: Korey Jackson, Univ. of Michigan, Ann Arbor
Speakers: Matt Burton, Univ. of Michigan, Ann Arbor; Korey Jackson; Spencer Keralis, Univ. of North Texas; Jason C. Rhody, National Endowment for the Humanities; Lisa Marie Rhody, Univ. of Maryland, College Park; Michael Ullyot, Univ. of Calgary
This roundtable seeks to query precisely what data can be and do in a humanities context. Charting the migration from individual project to scalable data set, we explore “big data” not simply as a matter of size or number but as a process of granting researchers and educators access to shared information resources.
Presiding: Catherine Elizabeth Ingrassia, Virginia Commonwealth Univ.
Speakers: Joshua Eckhardt, Virginia Commonwealth Univ.; Molly Hardy, Saint Bonaventure Univ.; Laura C. Mandell, Texas A&M Univ., College Station; James Raven, Univ. of Essex
Consistent with the theme of open access, this roundtable explores limitations of proprietary digital archives and emergent alternatives. It will provide an interactive, engaged demonstration of 18thConnect; a historian’s perspective; discussion of British Virginia; and scholarly digital editions of seventeenth-century documents.
Speakers: Amanda L. French, George Mason Univ.; George Williams, Univ. of South Carolina, Spartanburg
This "master class" will focus on integrating two digital tools into the classroom to facilitate student-generated projects: Omeka, for the creation of archives and exhibits, and WordPress, for the creation of blogs and Web sites. We will discuss what kinds of assignments work with each tool, how to get started, and how to evaluate assignments. Bring a laptop (not a tablet) for hands-on work.
Sunday, 6 January, 8:30–9:45 a.m., Beacon A, Sheraton
Presiding: Alexander Reid, Univ. at Buffalo, State Univ. of New York
Speakers: Heather Duncan, Univ. at Buffalo, State Univ. of New York; Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Eileen Joy, Southern Illinois Univ., Edwardsville; Richard E. Miller, Rutgers Univ., New Brunswick; Daniel Schweitzer, Univ. at Buffalo, State Univ. of New York
Responding: Alexander Reid
As our profession seeks to understand electronic publishing, the emergence of middle-state publishing (e.g., blogs, Twitter) adds another layer of complexity to the issue. The roundtable participants will discuss their use of social media for scholarship and how middle-state publishing alters scholarly work and the ethical and professional concerns that arise.
Presiding: Yohei Igarashi, Colgate Univ.; Lauren A. Neefe, Stony Brook Univ., State Univ. of New York
Speakers: Miranda Jane Burgess, Univ. of British Columbia; Mary Helen Dupree, Georgetown Univ.; Kevis Goodman, Univ. of California, Berkeley; Yohei Igarashi; Celeste G. Langan, Univ. of California, Berkeley; Maureen Noelle McLane, New York Univ.; Tom Mole, McGill Univ.
A roundtable of scholars discusses and defines “Romantic media studies,” one of the most vibrant approaches to Romantic literature today. Spanning British, German, and transatlantic Romanticisms, the exchange considers Romantic-era media while reflecting on methods of reading for media, mediations, and networks as well as on the relation between Romantic criticism and the digital humanities.
Speakers: Katherine E. Gossett, Iowa State Univ.; Erik Hanson, Loyola Univ., Chicago; Matthew Jockers, Univ. of Nebraska, Lincoln; Steven E. Jones, Loyola Univ., Chicago; Bethany Nowviskie, Univ. of Virginia; Sarah Storti, Univ. of Virginia
This roundtable explores the urgent necessity of reforming graduate training in the humanities, particularly in the light of the opportunities afforded by digital platforms, collaborative work, and an expanded mission for graduates. Presenters include graduate students and faculty mentors who are creating the institutional and disciplinary conditions for renovated graduate curricula to succeed.
Sunday, 6 January, 1:45–3:00 p.m., Liberty A, Sheraton
Presiding: Mark Sample, George Mason Univ.
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Kathleen Fitzpatrick, MLA; Frank Kelleter, Univ. of Göttingen; Kirstyn Leuner, Univ. of Colorado, Boulder; Jason Mittell, Middlebury Coll.; Ted Underwood, Univ. of Illinois, Urbana
This roundtable considers the value and challenges of serial scholarship, that is, research published in serialized form online through a blog, forum, or other public venue. Each of the participants will give a lightning talk about his or her stance toward serial scholarship, while the bulk of the session time will be reserved for open discussion.
Here (finally) is the talk I gave at the 2012 MLA Convention in Seattle. I was on Lori Emerson’s Reading Writing Interfaces: E-Literature’s Past and Present panel, along with Dene Grigar, Stephanie Strickland, and Marjorie Luesebrink. Lori’s talk on e-lit’s stand against the interface-free aesthetic worked particularly well with my own talk, which focused on Erik Loyer’s Strange Rain. I don’t offer a reading of Strange Rain so much as I use the piece as an entry point to think about interfaces—and my larger goal of reframing our concept of interfaces.
Today I want to talk about Strange Rain, an experiment in digital storytelling by the new media artist Erik Loyer.
Strange Rain came out in 2010 and runs on Apple iOS devices—the iPhone, iPod Touch, and iPad. As Loyer describes the work, Strange Rain turns your iPad into a “skylight on a rainy day.” You can play Strange Rain in four different modes. In the wordless mode, dark storm clouds shroud the screen, and the player can touch and tap its surface, causing columns of rain to pitter patter down upon the player’s first-person perspective. The raindrops appear to splatter on the screen, streaking it for a moment, and then slowly fade away. Each tap also plays a note or two of a bell-like celesta.
The other modes build upon this core mechanic. In the “whispers” mode, each tap causes words as well as raindrops to fall from the sky.
The “story” mode is the heart of Strange Rain. Here the player triggers the thoughts of Alphonse, a man standing in the rain, pondering a family tragedy.
And finally, with the most recent update of the app, there’s a fourth mode, the “feeds” mode. This allows players to replace the text of the story with tweets from a Twitter search, say the #MLA12 hashtag.
Note that any authorial information—Twitter user name, time or date—is stripped from the tweet when it appears, as if the tweet were the player’s own thoughts, making the feeds mode more intimate than you might expect.
Like many of the best works of electronic literature, Strange Rain can be talked about, and framed, in a number of ways. Especially in the wordless mode, Strange Rain fits alongside the growing genre of meditation apps for mobile devices, apps meant to calm the mind and soothe the spirit—like Pocket Pond:
In Pocket Pond, every touch of the screen creates a rippling effect.
The digital equivalent of a miniature zen garden, these apps allow us to contemplate minimalistic nature scenes on devices built by women workers in a Foxconn factory in Chengdu, China.
It’s appropriate that the “wordless mode” provides the most seemingly unmediated or direct experience of Strange Rain, when the workers who built the device it runs on are all but silent or silenced.
The “whispers” mode, meanwhile, with its words falling from the sky, recalls the trope in new media of falling letters—words that descend on the screen or even in large-scale multimedia installation pieces such as Camille Utterback and Romy Achituv’s Text Rain (1999).
And of course, the story mode even more directly situates Strange Rain as a work of electronic literature, allowing the reader to tap through “Convertible,” a short story by Loyer, which, not coincidentally I think, involves a car crash, another long-standing trope of electronic literature.
As early as 1994, in fact, Stuart Moulthrop asked the question, “Why are there so many car wrecks in hypertext fiction?” (Moulthrop, “Crash” 5). Moulthrop speculated that it’s because hypertext and car crashes share the same kind of “hyperkinetic hurtle” and “disintegrating sensory whirl” (8). Perhaps Moulthrop’s characterization of hypertext held up in 1994 (though I’m not sure it did), but certainly today there are many more metaphors one can use to describe electronic literature than a car crash. And in fact I’d suggest that Strange Rain is intentionally playing with the car crash metaphor and even overturning it with its slow, meditative pace.
Alongside this reflective component of Strange Rain, there are elements that make the work very much a game, featuring what any player of modern console or PC games would find familiar: achievements, unlocked by triggering particular moments in the story. Strange Rain even shows up in the iOS “Game Center.”
The way users can tap through Alphonse’s thoughts in Strange Rain recalls one of Moulthrop’s own works, the post-9/11 Pax, which Moulthrop calls, using a term from John Cayley, a “textual instrument”—as if the piece were a musical instrument that produces text rather than music.
We could think of Strange Rain as a textual instrument, then, or to use Noah Wardrip-Fruin’s reformulation of Cayley’s idea, as “playable media.” Wardrip-Fruin suggests that thinking of electronic literature in terms of playable media replaces a rather uninteresting question—“Is this a game?”—with a more productive inquiry, “How is this played?”
There’s plenty to say about all of these framing elements of Strange Rain—as an artwork, a story, a game, an instrument—but I want to follow Wardrip-Fruin’s advice and think about the question, how is Strange Rain played? More specifically, what is its interface? What happens when we think about Strange Rain in terms of the poetics of motion and touch?
Let me show you a quick video of Erik Loyer demonstrating the interface of Strange Rain, because there are a few characteristics of the piece that are lost in my description of it.
A key element that I hope you can see from this video is that the dominant visual element of Strange Rain—the background photograph—is never entirely visible on the screen. The photograph was taken during a tornado watch in Paulding County in northwest Ohio in 2007 and posted as a Creative Commons image on the photo-sharing site Flickr. But we never see the entire image at once on the iPad or iPhone screen. The boundaries of the photograph exceed the dimensions of the screen, and Strange Rain uses the hardware accelerometer to detect your movements, so that when you tilt the iPad even slightly, the image tilts slightly in the opposite direction. It’s as if there’s a larger world inside the screen, or rather, behind the screen—a world broader and deeper than what’s seen on the surface. Loyer described it to me this way: it’s “like augmented reality, but without the annoying distraction of trying to actually see the real world through the display” (Loyer 1).
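The tilt-to-counter-shift effect just described can be sketched in a few lines. This is purely illustrative: the function name, the clamping range, and the pixel values below are my own invention for the sake of the example, not anything drawn from Strange Rain’s actual code.

```python
# Purely illustrative sketch of accelerometer-driven parallax; the names and
# numbers here are hypothetical, not taken from Strange Rain's actual code.

def parallax_offset(tilt_degrees, max_tilt=15.0, max_shift_px=40.0):
    """Map a device tilt angle to a background-image shift (in pixels)
    in the opposite direction, suggesting a world larger than the screen."""
    # Clamp the tilt to the range the effect responds to.
    tilt = max(-max_tilt, min(max_tilt, tilt_degrees))
    # The negative sign reverses direction: tilting right slides the
    # oversized photograph left, exposing more of its right-hand edge.
    return -(tilt / max_tilt) * max_shift_px
```

A slight tilt produces a slight counter-shift; tilting the full fifteen degrees slides the image the maximum forty pixels the other way, which is what creates the sense of a scene extending beyond the screen’s borders.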
This kinetic screen is one of the most compelling features of Strange Rain. As soon as you pick up the iPad or iPhone with Strange Rain running, it reacts to you. The work interacts with you before you even realize you’re interacting with it. Strange Rain taps into a kind of “camcorder subjectivity”—the now entirely naturalized practice of viewing the world through devices that have cameras on one end and screens on the other. Think about older video cameras, which you held up to your eye to see the world straight through the camera. Then think of Flip cams or smartphone cameras, which we hold out in front of us. We looked through older video cameras as we filmed. We look at smartphone cameras as we film.
So when we pick up Strange Rain we have already been trained to accept this camcorder model, but we’re momentarily taken aback, I think, to discover that it doesn’t work quite the way we think it should. It’s as if we are pointing a handheld camcorder at a scene we cannot really control.
This aspect of the interface plays out in interesting ways. Loyer has an illustrative story about the first public test of Strange Rain. As people began to play the piece, many of them held it up over their heads so that “it looked like the rain was falling on them from above—many people thought that was the intended way to play the piece” (Loyer 1).
That is, people wanted it to work like a camcorder, and when it didn’t, they themselves tried to match their exterior actions to the interior environment of the piece.
There’s more to say about the poetics of motion in Strange Rain, but I want to move on to the idea of touch. We’ve seen how touch propels the narrative of Strange Rain. Originally Loyer had planned on having each tap generate a single word, though he found that to be too tedious, requiring too many taps to telegraph a single thought (Loyer 1). It was, oddly enough in a work of playable media meant to be intimate and contemplative, too slow. Or rather, it required too much action—too much tapping—on the part of the reader. So much tapping destroyed the slow, recursive feeling of the piece; it became frantic instead of serene.
Loyer tweaked the mechanic then, making each tap produce a distinct thought. Nonetheless, from my own experience and from watching other readers, I know that there’s an urge to tap quickly. In the story mode of Strange Rain you sometimes get caught in narrative loops—which again is Loyer playing with the idea of recursivity found in early hypertext fiction rather than merely reproducing it. Given the repetitive nature of Strange Rain, I’ve seen people want to fight against the system and tap fast. You see the same thought five times in a row, and you start tapping faster, even drumming using multiple fingers. And the piece paradoxically encourages this, as the only way to bring about a conclusion is to provoke an intense moment of anxiety for Alphonse, which you do by tapping more frantically.
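The escalation described above—slow taps for contemplation, a frantic burst to push Alphonse toward crisis—is, at bottom, a question of tap rate. Here is one minimal, hypothetical way such a mechanic could be modeled; the class, the sliding window, and the threshold are my own illustration, not anything from the app itself.

```python
# Hypothetical model of a "frantic tapping" detector; the class name, window,
# and threshold are my own illustration, not Strange Rain's actual logic.
from collections import deque

class TapRateMonitor:
    def __init__(self, window_seconds=2.0, frantic_rate=4.0):
        self.window = window_seconds        # sliding window, in seconds
        self.frantic_rate = frantic_rate    # taps per second counted as frantic
        self.taps = deque()

    def tap(self, timestamp):
        """Record a tap; return True once tapping has become frantic."""
        self.taps.append(timestamp)
        # Discard taps that have aged out of the sliding window.
        while self.taps and timestamp - self.taps[0] > self.window:
            self.taps.popleft()
        return len(self.taps) / self.window >= self.frantic_rate
```

Slow, meditative taps never trip the threshold; a drumming burst of taps within the window does—which is the felt difference between the serene and anxious registers of the piece.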
I’m fascinated by this tension between slow tapping and fast tapping—what I call haptic density—because it reveals the outer edges of the interface of the system. Quite literally.
Move from three fingers to four—easy to do when you want to bring Alphonse to a crisis moment—and the iPad interprets your gestures differently. A four-finger gesture tells the iPad you want to swipe to another application, the touch equivalent of ALT-TAB in Windows. The multi-touch interface of the iPad trumps the touch interface of Strange Rain. There’s a slipperiness to the screen. The text is precipitously and perilously fragile, inadvertently escapable. The immersive nature of new media that years ago Janet Murray highlighted as an essential element of the form is entirely an illusion.
I want to conclude, then, by asking a question: what happens when we begin to think differently about interfaces? We usually think of an interface as a shared contact point between two distinct objects. The focus is on what is common. But what if we begin thinking—and I think Strange Rain encourages this—about interfaces in terms of difference? Instead of interfaces, what about thresholds, liminal spaces between two distinct elements? How does Strange Rain, or any piece of digital expressive culture, have both an interface and a threshold, or thresholds? What are the edges of the work? And what do we discover when we transgress them?
This is a comprehensive list of digital humanities sessions scheduled for the 2012 Modern Language Association Convention in Seattle, Washington. The 2012 list stands at 58 sessions, up from 44 last year (and 27 the year before). If the trend continues, within the decade it will no longer make sense to compile this list; it’ll be easier to list the sessions that don’t in some way relate to the influence and impact of digital materials and tools upon language, literary, textual, and media studies.
It’s possible I’ve missed a session or two; if so, let me know in the comments and I’ll add the panel to the list. Note that there’s also a pre-convention Getting Started in the Digital Humanities with DHCommons workshop; because this workshop is application-only, it does not appear in the official MLA program.
You may also want to follow the MLA Tweetup Twitter account for updates on various spontaneous and planned meet-ups in Seattle.
[UPDATE 13 January 2012: I’ve begun adding links to presentations and papers if they’ve been posted online.]
Thursday, January 5
Pre-Convention Digital Humanities Project Mixer
1-4 pm in Convention Center, rooms 3A & 3B
Projects looking for collaborators and collaborators looking for projects, come mix and mingle in this informal project poster session that offers a face-to-face DHCommons experience. Representatives from projects looking for collaborators or just wanting to get the word out will share information and materials about their projects. This forum will also offer great opportunities for one-on-one conversations about pursuing projects in the digital humanities. If you would like to share your project, please sign up here, but otherwise there is no need to register.
This event is open to all MLA participants.
1. Evaluating Digital Work for Tenure and Promotion: A Workshop for Evaluators and Candidates
8:30–11:30 a.m., Willow A, Sheraton
Presiding: Alison Byerly, Middlebury Coll.; Katherine A. Rowe, Bryn Mawr Coll.; Susan Schreibman, Trinity Coll. Dublin
The workshop will provide materials and facilitated discussion about evaluating work in digital media (e.g., scholarly editions, databases, digital mapping projects, born-digital creative or scholarly work). Designed for both creators of digital materials (candidates for tenure and promotion) and administrators or colleagues who evaluate those materials, the workshop will propose strategies for documenting, presenting, and evaluating such work. Preregistration required.
9. Large Digital Libraries: Beyond Google Books
12:00 noon–1:15 p.m., 611, WSCC
Presiding: Michael Hancher, Univ. of Minnesota, Twin Cities
Speakers: Tanya E. Clement, Univ. of Maryland, College Park; Amanda L. French, George Mason Univ.; George Oates, Open Library; Glenn Roe, Univ. of Chicago; Andrew M. Stauffer, Univ. of Virginia; Jeremy York, HathiTrust Digital Library
Aside from Google Books, the two principal repositories for digitized books are Open Library and HathiTrust Digital Library; Digital Public Library of America is now in its planning stage. What are the merits and prospects of these three projects? How can they be improved? What role should scholars play in their improvement? These questions will be addressed by participants in each project and by others experienced in the digital humanities.
12. Transmedia Stories and Literary Games
12:00 noon–1:15 p.m., 615, WSCC
“Hundred Thousand Billion Fingers: Oulipian Games and Serial Players,” Patrick LeMieux, Duke Univ.
“Make Love, Not Warcraft: Virtual Worlds and Utopia,” Stephanie Boluk, Vassar Coll.
“Oscillation: Transmedia Storytelling and Narrative Theory by Design,” Patrick Jagoda, Univ. of Chicago
Presiding: Sean Scanlan, New York City Coll. of Tech., City Univ. of New York
“Making Online Peer Review Interactive: Sticky Notes and Highlighters,” Cheryl E. Ball, Illinois State Univ.
“The Bearable Light of Openness: Renovating Obsolete Peer-Review Bottlenecks,” Aaron J. Barlow, New York City Coll. of Tech., City Univ. of New York
“The Law Review Approach: What the Humanities Can Learn,” Allen Mendenhall, Auburn Univ., Auburn
41. Social Networks, Jewish Identity, and New Media
1:45–3:00 p.m., University, Sheraton
Presiding: Jonathan S. Skolnik, Univ. of Massachusetts, Amherst
“Social Networking, Jewish Identity, and New Jewish Ritual: Tattooed Jews on Facebook,” Erika Meitner, Virginia Polytechnic Inst. and State Univ.
“Electronic Apikoros: Searching for the Nineteenth-Century Origins of Contemporary Satire in the Jewish Blogosphere,” Ashley Aronsen Passmore, Texas A&M Univ., College Station
“From MySpace to MyJewishSpace: The Role of the Internet in the Self-Definition of New Jews in Austria and Germany,” Andrea Reiter, Univ. of Southampton
47. Old Books and New Tools
1:45–3:00 p.m., 606, WSCC
Presiding: Sarah Werner, Folger Shakespeare Library
Speakers: Katherine D. Harris, San José State Univ.; Jeffrey Knight, Univ. of Washington, Seattle; Matt Thomas, Univ. of Iowa; Whitney Trettien, Duke Univ.; Meg Worley, Palo Alto, CA
This roundtable will consider how the categories of old books and new tools might illuminate each other. Speakers will provide individual reflections on their experiences with old books and new tools before opening up the conversation to the theoretical and practical concerns driving the use and interactions of the two.
Presiding: Kathleen Woodward, Univ. of Washington, Seattle
“Emergent Projects, Processes, and Stories,” Sidonie Ann Smith, Univ. of Michigan, Ann Arbor
“Learning Collaboratories, Now and in the Future,” Curtis Wong, Microsoft Research
“It’s the Data, Stupid!,” Ed Lazowska, Univ. of Washington, Seattle
“How to Crowdsource Thinking,” Cathy N. Davidson, Duke Univ.
Scholars from the human, natural, and computational sciences will address the future of higher education in a digital age. They will identify problems in higher education today and provide recommendations for what is needed as we go forward. What pressure does this information age exert on the current ways we think about higher education? How does a conversation across the computational sciences and the humanities address, ease, or exacerbate that pressure?
87. Digital Literary Studies: When Will It End?
3:30–4:45 p.m., 304, WSCC
Presiding: David A. Golumbia, Virginia Commonwealth Univ.
“Digital Birth, Digital Adoption, Digital Disownment: Reconceiving Computational Textuality,” John David Zuern, Univ. of Hawai’i, Manoa
“Digital Literary Studies circa 1954: Lacan’s Machines and Shannon’s Minds,” Bernard Dionysius Geoghegan, Northwestern Univ.
“Digital Anamnesis,” Benjamin J. Robertson, Univ. of Colorado, Boulder
121. Writing the Jasmine Revolution and Tahrir Square: Graffiti, Film, Collage, Poetry
5:15–6:30 p.m., Cedar, Sheraton
Presiding: Kathryn Lachman, Univ. of Massachusetts, Amherst
“Tagging the Jasmine Revolution: Social Media and Graffiti in the Tunisian Uprising,” David Fieni, Cornell Univ.
“Quand la révolution filmique anticipe la révolution populaire,” Mirvet Médini Kammoun, Institut Supérieur des Beaux-Arts de Tunis
“The Women’s Manifesto: Thinking Egypt 2011 Transnationally,” Basuli Deb, Univ. of Nebraska, Lincoln
“Poetic Responses to the North African Revolutions,” Mahdia Benguesmia, Univ. of Batna
125. What’s Still Missing? What Now? What Next? Digital Archives in American Literature
5:15–6:30 p.m., 608, WSCC
Presiding: Brad Evans, Rutgers Univ., New Brunswick
Speakers: Donna M. Campbell, Washington State Univ., Pullman; Julia H. Flanders, Brown Univ.; Kenneth M. Price, Univ. of Nebraska, Lincoln; Oya Rieger, Cornell Univ.; Robert Scholes, Brown Univ.; Jeremy York, HathiTrust Digital Library
This roundtable has two goals: (1) to provide a forum for reflection on the first twenty years of the digital archive, especially as it relates to American materials, which might include consideration of what is still missing and of methodologies for making use of what is there now, and (2) to offer an opportunity for researchers who have become dependent on the archive to talk with major players in its production, in the hope of fostering new avenues for cooperation.
150. Digital Humanities and Internet Research
7:00–8:15 p.m., 613, WSCC
Presiding: John Jones, Univ. of Texas, Dallas
“Creating a Conceptual Search Engine and Multimodal Corpus for Humanities Research,” Robin A. Reid, Texas A&M Univ., Commerce
“What the Digital Can’t Remember,” John Jones
“Toward a Rhetoric of Collaboration: An Online Resource for Teaching and Learning Research,” Jennifer Sano-Franchini, Michigan State Univ.
161. The Webs We Weave: Online Pedagogy in Community Colleges
7:00–8:15 p.m., 615, WSCC
Presiding: Linda Weinhouse, Community Coll. of Baltimore County, MD
“Blended Learning: The Best of Both Worlds?,” Pamela Sue Hardman, Cuyahoga Community Coll., Western Campus, OH
“Magic in the Web,” Michael R. Best, Univ. of Victoria; Jeremy Ehrlich, Univ. of Victoria
“The Digital-Dialogue Journal: Tool for Enhanced Classic Communication,” Bette G. Hirsch, Cabrillo Coll., CA
“Delivering Literary Studies in the Twenty-First Century: The Relevance of Online Pedagogies,” Kristine Blair, Bowling Green State Univ.
Friday, January 6
187. Digital Humanities and Hispanism
8:30–9:45 a.m., Grand A, Sheraton
Presiding: Kyra A. Kietrys, Davidson Coll.
Speakers: Mike Blum, Coll. of William and Mary; Francie Cate-Arries, Coll. of William and Mary; Kyra A. Kietrys; Kathy Korcheck, Central Coll.; William Anthony Nericcio, San Diego State Univ.; Rocío Quispe-Agnoli, Michigan State Univ.; Amaranta Saguar García, Univ. of Oxford, Lady Margaret Hall; David A. Wacks, Univ. of Oregon
Demonstrations by Hispanists who use technology in their scholarship and teaching. The presenters include a graduate student; junior and senior Latin American, Peninsular, and comparativist colleagues whose work spans medieval to contemporary times; and an academic technologist. After brief presentations of the different digital tools, the audience will circulate among the stations to participate in interactive demonstrations.
202. The Presidential Forum: Language, Literature, Learning
“Of Degraded Tongues and Digital Talk: Race and the Politics of Language,” Imani Perry, Princeton Univ.
“Learning to Unlearn,” Judith Halberstam, Univ. of Southern California
“Borrowing Privileges: Dreaming in Foreign Tongues,” Bala Venkat Mani, Univ. of Wisconsin, Madison
“Teaching Literature and the Bitter Truth about Starbucks,” Christopher Freeburg, Univ. of Illinois, Urbana
The forum addresses three fundamental points of orientation for our profession: language, in its various materialities; literature, broadly understood; and learning, especially student learning and our educational missions. The language and literature classroom has to serve the needs of today’s students. How do changing understandings of identity, performance, and media translate into transformations in teaching and learning?
215. Digital South, Digital Futures
10:15–11:30 a.m., 606, WSCC
Presiding: Vincent J. Brewton, Univ. of North Alabama
“Documenting the American South,” Natalia Smith, Univ. of North Carolina, Chapel Hill
“Space, Place, and Image: Mapping Farm Securities Administration (FSA) Photographs and the Photogrammar Project,” Lauren Tilton, Yale Univ.
“Southern Spaces: The Development of a Digital Southern Studies Journal,” Frances Abbott, Emory Univ.
“Mapping a New Deal for New Orleans Artists,” Michael Mizell-Nelson, Univ. of New Orleans
217. Reconfiguring the Scholarly Editor: Textual Studies at the University of Washington, Seattle
10:15–11:30 a.m., 613, WSCC
Presiding: Míceál Vaughan, Univ. of Washington, Seattle
“Neither Editor nor Librarian: The Interventions Required in the New Context of Texts in the Digital World,” Joseph Tennis, Univ. of Washington, Seattle
“Revealing a Coronation Tribute: Decoding the Hidden Aural and Visual Symbols,” JoAnn Taricani, Univ. of Washington, Seattle
“Mapping Editors,” Meg Roland, Marylhurst Univ.
“The Editor as Curator: Early Histories of Collected Works Editions in English,” Jeffrey Knight, Univ. of Washington, Seattle
249. Building Digital Humanities in the Undergraduate Classroom
12:00 noon–1:15 p.m., Grand A, Sheraton
Presiding: Kathi Inman Berens, Univ. of Southern California
Speakers: Kathryn E. Crowther, Georgia Inst. of Tech.; Brian Croxall, Emory Univ.; Maureen Engel, Univ. of Alberta; Paul Fyfe, Florida State Univ.; Kathi Inman Berens; Janelle A. Jenstad, Univ. of Victoria; Charlotte Nunes, Univ. of Texas, Austin; Heather Zwicker, Univ. of Alberta
This electronic roundtable assumes that “building stuff” is foundational to the digital humanities and that the technical barriers to participation can be low. When teaching undergraduates digital humanities, simple tools allow students to focus on the simultaneous practices of building and interpreting. This show-and-tell presents projects of variable technical complexity that foster robust interpretation.
259. Representation in the Shadow of New Media Technologies
12:00 noon–1:15 p.m., 304, WSCC
Presiding: Lan Dong, Univ. of Illinois, Springfield
“Web Video and Ethnic Media: Linking Representation and Distribution,” Aymar Jean Christian, Univ. of Pennsylvania
“Among Friends: Comparing Social Networking Functions in the Baltimore Sun and Baltimore Afro-American in 1904 and 1933,” Daniel Greene, Univ. of Maryland, College Park
“Digital Trash Talk: The Rhetoric of Instrumental Racism as Procedural Strategy,” Lisa Nakamura, Univ. of Illinois, Urbana
276. Getting Funded in the Humanities: An NEH Workshop
1:30–3:30 p.m., 3B, WSCC
Presiding: Jason C. Rhody, National Endowment for the Humanities
This workshop will highlight recent awards and outline current funding opportunities. In addition to emphasizing grant programs that support individual and collaborative research and education, the workshop will include information on the NEH’s Office of Digital Humanities. A question-and-answer period will follow.
301. Reconfiguring Publishing
1:45–3:00 p.m., Grand A, Sheraton
Presiding: Carolyn Guertin, Univ. of Texas, Arlington; William Thompson, Western Illinois Univ.
Speakers: James Copeland, Ugly Duckling Presse; Gail E. Hawisher, Univ. of Illinois, Urbana; James MacGregor, Public Knowledge Project; Rita Raley, Univ. of California, Santa Barbara; Avi Santo, Old Dominion Univ.; Cynthia L. Selfe, Ohio State Univ., Columbus; Raymond G. Siemens, Univ. of Victoria
This session intends not to bury publishing but to raise awareness of its transformations and continuities as it reconfigures itself. New platforms are causing publishers to return to their roots as booksellers while booksellers are once again becoming publishers. Open-access models of publishing are creating new models for content creation and distribution as small print-focused presses are experiencing a renaissance. Come see!
315. The New Dissertation: Thinking outside the (Proto-)Book
3:30–4:45 p.m., 606, WSCC
Presiding: Kathleen Woodward, Univ. of Washington, Seattle
Speakers: David Damrosch, Harvard Univ.; Kathleen Fitzpatrick, MLA; Richard E. Miller, Rutgers Univ., New Brunswick; Sidonie Ann Smith, Univ. of Michigan, Ann Arbor; Kathleen Woodward
In 2010 the Executive Council appointed a working group to explore the state of the doctoral dissertation: How can it adapt to digital innovation, open access, new concepts of “authorship”? What counts as scholarship in the world today? How do we address the national problems of cost and time to degree? This roundtable will offer members of the working group an opportunity to make the case that as we shift the terminology from scholarly publication to scholarly communication we need to expand the forms of the dissertation and to reconceptualize what the dissertation is and how it can prepare graduates for academic careers in the coming decades.
332. Digital Narratives and Gaming for Teaching Language and Literature
3:30–4:45 p.m., Aspen, Sheraton
Presiding: Barbara Lafford, Arizona State Univ.
“Narrative Expression and Scientific Method in Online Gaming Worlds,” Steven Thorne, Portland State Univ.
“Designing Narratives: A Framework for Digital Game-Mediated L2 Literacies Development,” Jonathon Reinhardt, Univ. of Arizona; Julie Sykes, Univ. of New Mexico, Albuquerque
“Close Playing, Paired Playing: A Practicum,” Edmond Chang, Univ. of Washington, Seattle; Timothy Welsh, Loyola Univ., New Orleans
Responding: Dave McAlpine, Univ. of Arkansas, Little Rock
343. The Cultural Place of Nineteenth-Century Poetry
3:30–4:45 p.m., 611, WSCC
Presiding: Charles P. LaPorte, Univ. of Washington, Seattle
“Lyric and Music at the Fin de Siècle: The Cultural Place of Song,” Emily M. Harrington, Penn State Univ., University Park
“Olympics 2012 and Victorian Poetry for All Time,” Margaret Linley, Simon Fraser Univ.
349. Digital Pedagogy
5:15–6:30 p.m., Grand A, Sheraton
Presiding: Katherine D. Harris, San José State Univ.
Speakers: Sheila T. Cavanagh, Emory Univ.; Elizabeth Chang, Univ. of Missouri, Columbia; Lori A. Emerson, Univ. of Colorado, Boulder; Adeline Koh, Richard Stockton Coll. of New Jersey; John Lennon, Univ. of South Florida Polytechnic; Kevin Quarmby, Shakespeare’s Globe Trust; Katherine Singer, Mount Holyoke Coll.; Roger Whitson, Georgia Inst. of Tech.
Discussions about digital projects and digital tools often focus on research goals. For this electronic roundtable, we will instead demonstrate how these digital resources, tools, and projects have been integrated into undergraduate and graduate curricula.
378. Old Labor and New Media
5:15–6:30 p.m., 608, WSCC
Presiding: Alison Shonkwiler, Rhode Island Coll.
“America Needs Indians: Representations of Native Americans in Counterculture Narrative and the Roots of Digital Utopianism,” Lisa Nakamura, Univ. of Illinois, Urbana
“The Eyes of Real Labor and the Illusions of Virtual Reality,” Matt Goodwin, Univ. of Massachusetts, Amherst
“Digital Voices: Representations of Migrant Workers in Dubai and Los Angeles,” Anne Cong-Huyen, Univ. of California, Santa Barbara
Responding: Seth Perlow, Cornell Univ.
Saturday, January 7
410. Reconfiguring the Literary: Narratives, Methods, Theories
8:30–9:45 a.m., 608, WSCC
Presiding: Susan Schreibman, Trinity Coll., Dublin
Speakers: Alison Booth, Univ. of Virginia; Mark Stephen Byron, Univ. of Sydney; Øyvind Eide, Univ. of Oslo; Alexander Gil, Univ. of Virginia; Rita Raley, Univ. of California, Santa Barbara
425. Composing New Partnerships in the Digital Humanities
8:30–9:45 a.m., 606, WSCC
Presiding: Catherine Jean Prendergast, Univ. of Illinois, Urbana
Speakers: Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York; Catherine Jean Prendergast; Alexander Reid, Univ. at Buffalo, State Univ. of New York; Spencer Schaffner, Univ. of Illinois, Urbana; Annette Vee, Univ. of Pittsburgh
The objective of this roundtable is to facilitate interactions between digital humanists and writing studies scholars who, despite shared interests in digital authorship, intellectual property, peer review, classroom communication, and textual revision, have often failed to collaborate. An extended period for audience involvement has been designed to seed partnerships beyond the conference.
428. Technology and Chinese Literature and Language
10:15–11:30 a.m., Boren, Sheraton
Presiding: Xiaoping Song, Norwich Univ.
“Adaptation: Rewriting Modern Chinese Literary Masterpieces,” Paul Manfredi, Pacific Lutheran Univ.
“Technology in Chinese Instruction: A Web-Based Extensive Reading Program,” Helen Heling Shen, Univ. of Iowa
“Technology and Teaching Chinese Literature in Translation,” Keith Dede, Lewis and Clark Coll.
“Text-Image-Imagined Words: An Approach to Teaching Chinese Literature,” Xiaoping Song
The speakers will discuss the preservation of texts as a core purpose of libraries, engaging questions regarding the tasks of deciding what materials to preserve and when and which to let go: best practices; institutional and collective roles for the preservation of materials in various formats; economics and governance structures of preserving materials; issues of tools, standards, and platforms for digital materials.
450. Digital Faulkner: William Faulkner and Digital Humanities
10:15–11:30 a.m., 615, WSCC
Presiding: Steven Knepper, Univ. of Virginia
Speakers: Keith Goldsmith, Vintage Books; John B. Padgett, Brevard Coll.; Noel Earl Polk, Mississippi State Univ.; Stephen Railton, Univ. of Virginia; Peter Stoicheff, Univ. of Saskatchewan
A roundtable on digital humanities and its implications for teaching and scholarship on the work of William Faulkner.
467. The Future of Teaching
12:00 noon–1:15 p.m., Grand C, Sheraton
Presiding: Priscilla B. Wald, Duke Univ.
“Gaming the Humanities Classroom,” Patrick Jagoda, Univ. of Chicago
“Intimacy in Three Acts,” Margaret Rhee, Univ. of California, Berkeley
“One Course, One Project,” Jentery Sayers, Univ. of Victoria
“The Meta Teacher,” Bulbul Tiwari, Stanford Univ.
This session features innovative advanced doctoral students and junior scholars who are making their mark as scholars and as teachers using new interactive, multimedia technologies of writing and publishing in their research and classrooms. The panelists cross the boundaries of the humanities, arts, sciences, and technology and are committed to new forms of scholarship and pedagogy. They practice the virtues of open, public, digitally accessible thinking and represent the vibrancy of our profession. Fiona Barnett, Duke Univ., will coordinate live Twitter feeds and other input during the session.
468. Networks, Maps, and Words: Digital-Humanities Approaches to the Archive of American Slavery
482. Of Kings’ Treasuries and the E-Protean Invasion: The Evolving Nature of Scholarly Research
12:00 noon–1:15 p.m., 613, WSCC
Presiding: Jude V. Nixon, Salem State Univ.
Speakers: Douglas M. Armato, Univ. of Minnesota Press; Harriett Green, Univ. of Illinois, Urbana; Dean J. Smith, Project MUSE; Pierre A. Walker, Salem State Univ.
This roundtable addresses the veritable explosion of emerging technologies (Google Books, Wikipedia, and e-readers) currently available to faculty members to enhance their scholarly research and how these resources are fundamentally altering the methods of scholarly research. The session also examines access to these technologies, how they interact with the traditional research library, and the still meaningful role, if any, the library plays in scholarly research.
487. Context versus Convenience: Teaching Contemporary Business Communication through Digital Media
12:00 noon–1:15 p.m., 306, WSCC
Presiding: Mahli Xuan Mechenbier, Kent State Univ.
“Reenvisioning and Renovating the Twenty-First-Century Business Communication Classroom,” Lara Smith-Sitton, Georgia State Univ.
“Contextualizing Conventions: Technology in Business Writing Classrooms,” Suanna H. Davis, Houston Community Coll., Central Coll., TX
“Teaching Business Communication through Simulation Games,” Katherine V. Wills, Indiana Univ.–Purdue Univ., Columbus
490. Reconfiguring the Scholarly Edition
12:00 noon–1:15 p.m., 611, WSCC
Presiding: Susan Schreibman, Trinity Coll. Dublin
Speakers: Michael R. Best, Univ. of Victoria; John Bryant, Hofstra Univ.; Alexander Gil, Univ. of Virginia; Elizabeth Grove-White, Univ. of Victoria; Grant Simpson, Indiana Univ., Bloomington; John A. Walsh, Indiana Univ., Bloomington
New theories of editing have broadened the approaches available to editors of scholarly editions. Noteworthy amongst these are the changes brought about by editing for digital publication. New methods for digital scholarship, forms of editions, theories informing digital publication, and tools offer exciting alternatives to traditional notions of the scholarly edition.
513. Principles of Exclusion: The Future of the Nineteenth-Century Archive
1:45–3:00 p.m., 611, WSCC
Presiding: Lloyd P. Pratt, Univ. of Oxford, Linacre Coll.
“Missing Links; or, Girls of Today, Archives of Tomorrow,” William A. Gleason, Princeton Univ.
“Anonymity, Authorship, and Digital Archives in American Literature,” Elizabeth Lorang, Univ. of Nebraska, Lincoln
“Dashed Hopes: Small-Scale Digital Archives of the 1990s,” Amy Earhart, Texas A&M Univ., College Station
532. Reading Writing Interfaces: Electronic Literature’s Past and Present
1:45–3:00 p.m., 613, WSCC
Presiding: Marjorie Luesebrink, Irvine Valley Coll., CA
“Early Authors of E-Literature, Platforms of the Past,” Dene M. Grigar, Washington State Univ., Vancouver
“Seven Types of Interface in the Electronic Literature Collection Volume Two,” Marjorie Luesebrink; Stephanie Strickland, New York, NY
539. #alt-ac: Alternative Paths, Pitfalls, and Jobs in the Digital Humanities
3:30–4:45 p.m., 3B, WSCC
Presiding: Sara Steger, Univ. of Georgia
Speakers: Brian Croxall, Emory Univ.; Julia H. Flanders, Brown Univ.; Jennifer Howard, Chronicle of Higher Education; Matthew Jockers, Stanford Univ.; Shana Kimball, Univ. of Michigan, Ann Arbor; Bethany Nowviskie, Univ. of Virginia; Lisa Spiro, National Inst. for Tech. in Liberal Education
This roundtable brings together various perspectives on alternative academic careers from professionals in digital humanities centers, libraries, publishing, and humanities labs. Speakers will discuss how and whether digital humanities is especially suited to fostering non-tenure-track positions and how that translates to the role of alt-ac in digital humanities and the academy. Related session: “#alt-ac: The Future of ‘Alternative Academic’ Careers” (595).
566. Ending the Edition
3:30–4:45 p.m., 303, WSCC
Presiding: Carol DeBoer-Langworthy, Brown Univ.
“Mary Moody Emerson’s Almanacks: Digital Editions and Imagined Endings,” Noelle A. Baker, Neenah, WI
“Closing the Book on a Multigenerational Edition: Harvard’s The Collected Works of Ralph Waldo Emerson,” Ronald A. Bosco, Univ. at Albany, State Univ. of New York; Joel Myerson, Univ. of South Carolina, Columbia
“‘Letting Go’: The Final Volumes of the Cambridge Fitzgerald Edition,” James L. W. West, Penn State Univ., University Park
581. Digital Humanities versus New Media
5:15–6:30 p.m., 611, WSCC
“Everything Old Is New Again: The Digital Past and the Humanistic Future,” Alison Byerly, Middlebury Coll.
“As Study or as Paradigm? Humanities and the Uptake of Emerging Technologies,” Andrew Pilsch, Penn State Univ., University Park
“Digital Tunnel Vision: Defining a Rhetorical Situation,” David Robert Gruber, North Carolina State Univ.
“Digital Humanities Authorship as the Object of New Media Studies,” Victoria E. Szabo, Duke Univ.
595. #alt-ac: The Future of “Alternative Academic” Careers
5:15–6:30 p.m., 3B, WSCC
Presiding: Bethany Nowviskie, Univ. of Virginia
Speakers: Donald Brinkman, Microsoft Research; Neil Fraistat, Univ. of Maryland, College Park; Robert Gibbs, Univ. of Toronto; Charles Henry, Council on Library and Information Resources; Bethany Nowviskie; Jason C. Rhody, National Endowment for the Humanities; Elliott Shore, Bryn Mawr Coll.
In increasing numbers, scholars are pursuing careers as “alternative academics”—embracing hybrid and non-tenure-track positions in libraries, presses, humanities and cultural heritage organizations, and digital labs and centers. Speakers represent organizations helping to craft alternatives to the traditional academic career. Related session: “#alt-ac: Alternative Paths, Pitfalls, and Jobs in the Digital Humanities” (539).
603. Innovative Pedagogy and Research in Technical Communication
5:15–6:30 p.m., 615, WSCC
Presiding: William Klein, Univ. of Missouri, St. Louis
“The New Normal of Public Health Research by Technical Communication Professionals,” Thomas Barker, Texas Tech Univ.
“Teaching the New Paradigm: Social Media inside and outside the Classroom,” William Magrino, Rutgers Univ., New Brunswick; Peter B. Sorrell, Rutgers Univ., New Brunswick
“Technical and Rhetorical Communication through DIY (Do-It-Yourself) Digital Video,” Crystal VanKooten, Univ. of Michigan, Ann Arbor
Is there gravity in digital worlds? Moving beyond both lamentations and celebrations of the putatively free-floating informatic empyrean, this roundtable will explore the ways in which representations in myriad digital platforms—verbal, visual, musical, cinematic—might bear the weight of materiality, presence, and history and the ways in which bodies—both human and hardware—might be recruited for or implicated in the effort.
730. New Media Narratives and Old Prose Fiction
1:45–3:00 p.m., 310, WSCC
Presiding: Amy J. Elias, Univ. of Tennessee, Knoxville
“New Media: Its Use and Abuse for Literature and for Life,” Joseph Paul Tabbi, Univ. of Illinois, Chicago
“Contrasts and Convergences of Electronic Literature,” Dene M. Grigar, Washington State Univ., Vancouver
“Computing Language and Poetry,” Nick Montfort, Massachusetts Inst. of Tech.
736. Close Playing: Literary Methods and Video Game Studies
This roundtable moves beyond the games-versus-stories dichotomy to explore the full range of possible literary approaches to video games. These approaches include the theoretical and methodological contributions of reception studies, reader-response theory, narrative theory, critical race and gender theory, disability studies, and textual scholarship.
Speakers: Mark Algee-Hewitt, McGill Univ.; Alison Booth, Univ. of Virginia; Amanda Gailey, Univ. of Nebraska, Lincoln; Laura C. Mandell, Texas A&M Univ., College Station
Roundtable on the theoretical, practical, and institutional issues surrounding the transformation of print-era texts into digital forms for scholarly use. What forms of editing need to be done, and by whom? What new research questions are becoming possible? How will the global digital library change professional communication? What is the future of the academic research library? How can we make sustainable digital textual resources for literary studies?
Every scholarly community has its disagreements, its tensions, its divides. One tension in the digital humanities that has received considerable attention is between those who build digital tools and media and those who study traditional humanities questions using digital tools and media. Variously framed as do vs. think, practice vs. theory, or hack vs. yack, this divide has been most strongly (and provocatively) formulated by Stephen Ramsay. At the 2011 annual Modern Language Association convention in Los Angeles, Ramsay declared, “If you are not making anything, you are not…a digital humanist.”
I’m going to step around Ramsay’s argument here (though I recommend reading the thoughtful discussion that ensued on Ramsay’s blog). I mention Ramsay simply as an illustrative example of the various tensions within the digital humanities. There are others too: teaching vs. research, universities vs. liberal arts colleges, centers vs. networks, and so on. I see the presence of so many divides—which are better labeled as perspectives—as a sign that there are many stakeholders in the digital humanities, which is a good thing. We’re all in this together, even when we’re not.
I’ve always believed that these various divides, which often arise from institutional contexts and professional demands generally beyond our control, are a distracting sideshow to the true power of the digital humanities, which has nothing to do with production of either tools or research. The heart of the digital humanities is not the production of knowledge; it’s the reproduction of knowledge. I’ve stated this belief many ways, but perhaps most concisely on Twitter. The promise of the digital is not in the way it allows us to ask new questions because of digital tools or because of new methodologies made possible by those tools. The promise is in the way the digital reshapes the representation, sharing, and discussion of knowledge. We are no longer bound by the physical demands of printed books and paper journals, no longer constrained by production costs and distribution friction, no longer hampered by a top-down and unsustainable business model. And we should no longer be content to make our work public achingly slowly along ingrained routes, authors and readers alike delayed by innumerable gateways limiting knowledge production and sharing.
I was riffing on these ideas yesterday on Twitter, asking, for example, what’s to stop a handful of scholars from starting their own academic press? It would publish epub books and, when backwards compatibility is required, print-on-demand books. Or what about, I wondered, using Amazon Kindle Singles as a model for academic publishing? Imagine stand-alone journal articles, without the clunky apparatus of the journal surrounding them. If you’re insistent that any new publishing venture be backed by an imprimatur more substantial than my “handful of scholars,” then how about a digital humanities center creating its own publishing unit?
It’s with all these possibilities swirling in my mind that I’ve been thinking about the MLA’s creation of an Office of Scholarly Communication, led by Kathleen Fitzpatrick. I want to suggest that this move may in the future stand out as a pivotal moment in the history of the digital humanities. It’s not simply that the MLA is embracing the digital humanities and seriously considering how to leverage technology to advance scholarship. It’s that Kathleen Fitzpatrick is heading this office. One of the founders of MediaCommons and a strong advocate for open review and experimental publishing, Fitzpatrick will bring vision, daring, and experience to the MLA’s Office of Scholarly Communication.
I have no idea what to expect from the MLA, but I don’t think high expectations are unwarranted. I can imagine greater support of peer-to-peer review as a replacement of blind review. I can imagine greater emphasis placed upon digital projects as tenurable scholarship. I can imagine the breadth of fields published by the MLA expanding. These are all fairly predictable outcomes, which might have eventually happened whether or not there was a new Office of Scholarly Communication at the MLA.
But I can also imagine less predictable outcomes. More experimental, more peculiar. Equally valuable, though—even more so—than typical monographs or essays. I can imagine scholarly wikis produced as companion pieces to printed books. I can imagine digital-only MLA books taking advantage of the native capabilities of e-readers, incorporating videos, songs, dynamic maps. I can imagine MLA Singles, one-off pieces of downloadable scholarship following the Kindle Singles model. I can imagine mobile publishing, using smartphones and GPS. I can imagine a 5,000-tweet conference backchannel edited into the official proceedings of the conference.
There are no limits. And to every person who objects, “But wait, what about legitimacy/tenure/cost/labor, etc.?” I say: you are missing the point. Now is not the time to hem in our own possibilities. Now is not the time to base the future on the past. Now is not the time to be complacent, hesitant, or entrenched in the present.
William Gibson has famously said that “the future is already here, it’s just not very evenly distributed.” With the digital humanities we have the opportunity to distribute that future more evenly. We have the opportunity to distribute knowledge more fairly, and in greater forms. The “builders” will build and the “thinkers” will think, but all of us, no matter where we fall on this false divide, we all need to share. Because we can.
[Radiohead Crowd photograph courtesy of Flickr user Samuel Stroube / Creative Commons Licensed]
I recently received word that my proposal for a roundtable on videogame studies was accepted for the annual Modern Language Association Convention, to be held next January in Seattle, Washington. I’m very excited for myself and my fellow participants: Ed Chang, Steve Jones, Jason Rhody, Anastasia Salter, Tim Welsh, and Zach Whalen. (Updated with links to talks below)
This roundtable is particularly noteworthy in two ways. First, it’s a departure from the typical conference model in the humanities, namely three speakers each reading twenty-minute essays at an audience, followed by ten minutes of posturing and self-aggrandizement thinly disguised as Q&A. Instead, each speaker on the “Close Playing” roundtable will briefly (no more than six minutes each) lay out opening remarks or provocations, and then we’ll invite the audience to a long open discussion. Last year’s Open Professoriate roundtable followed a similar model, and the level of collegial dialogue between the panelists and the audience was inspiring (and even newsworthy)—and I hope the “Close Playing” roundtable can emulate that success.
The second noteworthy feature of the roundtable is the topic itself. Videogames—an incredibly rich form of cultural expression—have been historically underrepresented, if not entirely absent, at the MLA. I noted this silence in the midst of the 2011 convention in Los Angeles:
How is it possible that I am the only person talking about videogames at #MLA11?
This is not to say there isn’t an interest in videogames at the MLA; indeed, I am convinced from the conversations I’ve had at the conference that there’s a real hunger to discuss games and other media forms that draw from the same cultural well as storytelling. Partly in the interest of promoting the critical study of videogames, and partly to serve as a successful model for future roundtable proposals (which I can assure you, the MLA Program Committee wants to see more of), I’m posting the “Close Playing” session proposal here (see also the original CFP).
We hope to see you in Seattle in January!
CLOSE PLAYING: LITERARY METHODS AND VIDEOGAME STUDIES
(As submitted to the MLA Program Committee
for the 2012 conference in Seattle, Washington)
Nearly fifteen years ago a contentious debate erupted in the emerging field of videogame studies between self-proclaimed ludologists and the more loosely-defined narratologists. At stake—or so it seemed at the time—was the very soul of videogame studies. Would the field treat games as a distinct cultural form, which demanded its own theory and methodology? Or were videogames to be considered “texts,” which could be analyzed using the same approaches literary scholars took to poetry, drama, and fiction? Were games mainly about rules, structure, and play? Or did games tell stories and channel allegories? Ludologists argued for the former, while many others defended the latter. The debate played out in conferences, blogs, and the early issues of scholarly e-journals such as Game Studies and Electronic Book Review.
In the ensuing years the debate has dissipated, as both sides have come to recognize that no single approach can adequately explore the rich and diverse world of videogames. The best scholarship in the field is equally attuned to both the formal and thematic elements of games, as well as to the complex interplay between them. Furthermore, it’s become clear that ludologists mischaracterized literary studies as a strictly New Critical endeavor, a view that woefully overlooks the many insights contemporary literary scholarship can offer to this interdisciplinary field.
In the past few years scholars have begun exploring the whole range of possible literary approaches to games. Methodologies adopted from reception studies, reader-response theory, narrative theory, critical race and gender theory, queer studies, disability studies, rhetoric and composition, and textual studies have all contributed in substantive ways to videogame studies. This roundtable will focus on these contributions, demonstrating how various methods of literary studies can help us understand narrative-based games as well as abstract, non-narrative games (for example, Tetris). And as Jameson’s famous mantra “always historicize” reminds us, the roundtable will also address the wider social and historical context that surrounds games.
This topic is ideally suited for a roundtable format (rather than a panel of three papers) precisely because of the diversity of approaches, which are well-represented by the roundtable participants. Moreover, each presenter will limit his or her opening remarks to a nonnegotiable six minutes, focusing on the possibilities of one or two specific methodologies for close-reading videogames, rather than a comprehensive close reading of a single game. With six presenters, this means the bulk of the session time (roughly thirty-five minutes) will be devoted to an open discussion, involving both the panel and the audience.
“Close Playing: Literary Methods and Videogame Studies” will appeal to a broad swath of the MLA community. While many will find the subject of videogame studies compelling enough by itself, the discussion will be relevant to those working in textual studies, media studies, and more broadly, the digital humanities. The need for this roundtable is clear: as we move toward the second decade of videogame studies, the field can no longer claim to be an emerging discipline; the distinguished participants on this panel—with the help of the audience—will survey the current lay of the land in videogame studies, but more importantly, point the way forward.
A roundtable discussion of specific approaches and close playings that explore the methodological contribution of literary studies toward videogame studies. 300-word abstract and 1-page bio to Mark Sample (firstname.lastname@example.org) by March 15.
All participants must be MLA members by April 7. Also note that this is a proposed special session; the MLA Program Committee will have the final say on the roundtable’s acceptance.
[Controllers photo courtesy of Flickr user Kimli / Creative Commons License]
[I was on a panel called “The Open Professoriat(e)” at the 2011 MLA Convention in Los Angeles, in which we focused on the dynamic between academia, social media, and the public. My talk was an abbreviated version of a post that appeared on samplereality in July. Here is the text of the talk as I delivered it at the MLA, interspersed with still images from my presentation. The original slideshow is at the end of this post. Co-panelists Amanda French and Erin Templeton have also posted their talks online.]
Rather than make an argument in the short time I have, I want to make a provocation, urging everyone here to consider the way social media can enable what I call tactical collaborations both within and outside of the professoriate.
I’ve always had trouble keeping the words tactic and strategy straight. Or, as early forms of the words appear in the OED, tactick and the curiously elongated stratagematick.
This quote comes from a 17th century translation of a history of Roman emperors (circa 240 AD).
I love the quote and it tells me that tactics and strategy have always been associated with battle. But I still have trouble telling one from the other. I know one is, roughly speaking, short term, while the other is long range. One is the details and the other the big picture.
I’ll blame the old board game Stratego for my confusion. The placement of my flag, the movement of my scouts, that seemed tactical to me, yet the game was called Stratego.
Even diving into the etymology of the words doesn’t help much at first: Tactic is from the ancient Greek τακτóς, meaning arranged or ordered.
While Strategy comes from the Greek στρατηγóς, meaning commander or general. A general is supposed to be a big-picture kind of guy, so I guess that makes sense. And I suppose the arrangement of individual elements comes close to the modern day meaning of a military tactic.
All of this curiosity about the meaning of the word tactic began last May, when Dan Cohen and Tom Scheinfeldt at the Center for History and New Media at George Mason University announced a crowd-sourced book called Hacking the Academy. They announced it on Twitter on Friday, May 21 and by Friday, May 28, one week later, all the submissions were in. 330 submissions from nearly 200 people.
The collection is now in the final stages of editing, with a table of contents of around 60 pieces by 40 or so different authors. It will be peer-reviewed and published by Digital Culture Books, an imprint of the University of Michigan Press. As you can imagine, the idea of crowdsourcing a scholarly book, in a week no less, generated excitement, questions, and some worthwhile skepticism.
And it was one of these critiques of Hacking the Academy that prompted my thoughts about tactical collaboration. Jennifer Howard, a senior reporter for The Chronicle of Higher Education, posed several questions that would-be hackers ought to consider during the course of hacking the academy. It was Howard’s last question that resonated most with me.
Have you looked for friends in the enemy camp lately? Howard cautioned us that some of the same institutional forces we think we’re fighting might actually be allies when we want to be a force for change. I read Howard’s question and I immediately began rethinking what collaboration means. Instead of a commitment, it’s an expedience. Instead of strategic partners, find immediate allies. Instead of full frontal assaults, infiltrate and disseminate.
In academia we have many tactics for collaboration, but very little tactical collaboration.
And this is how I defined tactical collaboration:
I’m reminded of de Certeau’s vision of tactics in The Practice of Everyday Life. Unlike a strategy, which operates from a secure base, a tactic, as de Certeau writes, operates “in a position of withdrawal…it is a maneuver ‘within the enemy’s field of vision’” (37).
De Certeau goes on to add that a tactic “must vigilantly make use of the cracks….It poaches in them. It creates surprises in them. It can be where it is least expected” (37).
So that’s what a tactic is. I should’ve skipped the OED and Stratego and headed straight for de Certeau. He teaches us that strategies, like institutions, depend upon dominance over space—physical as well as discursive space. But tactics rely upon momentary victories in and over time. Tactics require agility, surprise, feigned retreats as often as real retreats. They require collaborations that the more strategically-minded might otherwise discount. And social media presents the perfect landscape for these tactical collaborations to play out.
Despite my being here today, I’m very skeptical of institutions and associations. We live in a world where we can’t idly hope for or rely upon institutional support or recognition. To survive and thrive, humanists must be fleet-footed, mobile, insurgent. Decentralized and nonhierarchical. We need to stop forming committees and begin creating coalitions. We need affinities over affiliations, and networks over institutes.
Tactical collaboration is crucial for any humanist seeking to open up the professoriate, any scholar seeking to poach from the institutional reserves of knowledge production, any teacher seeking to challenge the ever intensifying bureaucratization and Taylorization of learning, any contingent faculty seeking to forge success and stability out of contingency.
We need tactical collaborations, and we need them now. The strategematick may be the domain of emperors and institutions, but like the word itself, it’s quaint and outdated. Let tactics be our ruse and our practice.
Certeau, Michel de. The Practice of Everyday Life. Berkeley: University of California Press, 1984. Print.
Herodian. Herodians of Alexandria his imperiall history of twenty Roman caesars & emperours of his time / First writ in Greek, and now converted into an heroick poem by C.B. Staplyton. London: W. Hunt, 1652. Web. 14 July 2010.
[This is the text of my second talk at the 2011 MLA convention in Los Angeles, for a panel on “Close Reading the Digital.” My talk was accompanied by a Prezi “Zooming” presentation, which I have replicated here with still images (the original slideshow is at the end of this post). In 15 minutes I could only gesture toward some of the broader historical and cultural meanings that resonate outward from code—but I am pursuing this project further and I welcome your thoughts and questions.]
New media critics such as Nick Montfort and Matthew Kirschenbaum have observed that a certain “screen essentialism” pervades new media studies, in which the “digital event on the screen,” as Kirschenbaum puts it (4), becomes the sole object of study at the expense of the underlying computer code, the hardware, the storage devices, and even the non-digital inputs and outputs that make the digital object possible in the first place. There are a number of ways to remedy this essentialism, and the approach that I want to focus on today is the close reading of code.
Friedrich Kittler has said that code is the only language that does what it says. But the close reading of code insists that code not only does what it says, it says things it does not do. Like any language, code operates on a literal plane—literal to the machine, that is—but it also operates on an evocative plane, rife with gaps, idiosyncrasies, and suggestive traces of its context. And the more the language emphasizes human legibility (for example, a high-level language like BASIC or Inform 7), the greater the chance that there’s some slippage in the code that is readable by the machine one way and readable by scholars and critics in another.
Today I want to close read some snippets of code from Micropolis, the open-source version of SimCity that was included on the Linux-based XO computers in the One Laptop per Child program.
Designed by the legendary Will Wright, SimCity was released by Maxis in 1989 on the Commodore 64, and it was the first of many popular Sim games, such as SimAnt and SimFarm, not to mention the enduring SimCity series itself, which was ported to dozens of platforms, from DOS to the iPad. Electronic Arts owns the rights to the SimCity brand, and in 2008 EA released the source code of the original game into the wild under the GPL, the General Public License. EA prohibited any resulting branch of the game from using the SimCity name, so the developers, led by Don Hopkins, called it Micropolis, which was in fact Wright’s original name for his city simulation.
From the beginning, SimCity was criticized for presenting a naive vision of urban planning, if not an altogether egregious one. I don’t need to rehearse all those critiques here, but they boil down to what Ian Bogost calls the procedural rhetoric of the game. By procedural rhetoric, Bogost simply means the implicit or explicit argument a computer model makes. Rather than using words like a book, or images like a film, a game “makes a claim about how something works by modeling its processes” (Bogost, “The Proceduralist Style“).
In the case of SimCity, I want to explore a particularly rich site of embedded procedural rhetoric—the procedural rhetoric of crime. I’m hardly the first to think about the way SimCity or Micropolis models crime. Again, these criticisms date back to the nineties. And as recently as 2007, the legendary computer scientist Alan Kay called SimCity a “pernicious…black box,” full of assumptions and “somewhat arbitrary knowledge” that can’t be questioned or changed (Kay).
Kay goes on to illustrate his point using the example of crime in SimCity. SimCity, Kay notes, “gets the players to discover that the way to counter rising crime is to put in more police stations.” Of all the possible options in the real world—increasing funding for education, creating jobs, and so on—it’s the presence of the police that lowers crime in SimCity. That is the procedural rhetoric of the game.
And it doesn’t take long for players to figure it out. In fact, the original manual itself tells the player that “Police Departments lower the crime rate in the surrounding area. This in turn raises property values.”
It’s one thing for the manual to propose a relationship between crime, property values, and law enforcement, but quite another for the player to see that relationship enacted within the simulation. Players have to get a feel for it on their own as they play the game. The goal of the simulation, then, is not so much to win the game as it is to uncover what Lev Manovich calls the “hidden logic” of the game (Manovich 222). A player’s success in a simulation hinges upon discovering the algorithm underlying the game.
But if the manual describes the model to us and players can discover it for themselves through gameplay, then what’s the value of looking at the code of the game? Why bother? What can it tell us that playing the game cannot?
Before I go any further, I want to be clear: I am not a programmer. I couldn’t code my way out of a paper bag. And this leads me to a crucial point I’d like to make today: you don’t have to be a coder to talk about code. Anybody can talk about code. Anybody can close read code. But you do need to develop some degree of what Michael Mateas has called “procedural literacy” (Mateas 1).
Let’s look at a piece of code from Micropolis and practice procedural literacy. This is a snippet from scan.cpp, one of the many sub-programs called by the core Micropolis engine.
It’s written in C++, one of the most common middle-level programming languages—Firefox is written in C++, for example, as well as Photoshop, and nearly every Microsoft product. By paying attention to variable names, even a non-programmer might be able to discern that this code scans the player’s city map and calculates a number of critical statistics: population density, the likelihood of fire, pollution, land value, and the function that originally interested me in Micropolis, a neighborhood’s crime rate.
This specific calculation appears in lines 413-424. We start off with the crime rate variable Z at a baseline of 128, which is not as random as it seems: 128 is exactly half of 256, the number of values an 8-bit byte can represent on the original SimCity platform, the 8-bit Commodore 64.
128 is the baseline and the crime rate either goes up or down from there. The land value variable is subtracted from Z, and then the population density is added to Z:
And the number of police stations lowers Z.
It’s just as the manual said: crime is a function of population density, land value, and police stations, and a strict function at that. But the code makes visible nuances that are absent from the manual’s pithy description of crime rates. For example, land that has no value—land that hasn’t been built upon or utilized in your city—has no crime rate. This shows up in lines 433-434:
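Since I can only gesture at the code in this format, a hedged paraphrase may help. The following C++ sketch condenses the logic of lines 413-424 and 433-434 into a single function. The names (crimeRate, landValue, populationDensity, policeEffect) and the final clamp are my own illustrative choices, not Micropolis’s exact identifiers; only the arithmetic relationships described above are drawn from the source:

```cpp
#include <algorithm>

// Illustrative paraphrase of the crime-rate pass in Micropolis's map scan.
// Names are mine; the relationships (baseline of 128, minus land value,
// plus population density, minus police coverage, and the zero-value
// guard) follow the code discussed above.
int crimeRate(int landValue, int populationDensity, int policeEffect) {
    if (landValue <= 0) {
        return 0;              // unbuilt, valueless land has no crime rate
    }
    int z = 128;               // baseline: half of 256
    z -= landValue;            // higher land value lowers crime
    z += populationDensity;    // denser neighborhoods raise it
    z -= policeEffect;         // nearby police stations lower it
    return std::max(0, z);     // a rate cannot go negative
}
```

Note what the sketch makes plain: there is no random term anywhere in the function. The same inputs always produce the same crime rate.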
Also, because of this strict algorithm, there is no chance of a neighborhood existing outside of this model. The algorithm is, in Jeremy Douglass’s words when he saw this code, “absolutely deterministic.” A populous neighborhood with little police presence can never be crime free. Land value is likewise reduced to a set formula, seen in the equation in lines 264-271:
Essentially these lines tell us that land value is a function of the property’s distance from the city center, the type of terrain, the nearby pollution, and the crime rate. Again, though, players will likely discover this for themselves, even if they don’t read the manual, which spells out the formula, explicitly telling us that “the land value of an area is based on terrain, accessibility, pollution, and distance to downtown.”
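The land-value formula can be sketched the same way. In this C++ paraphrase the function name and the assumption that each factor simply subtracts from a terrain baseline are mine; the source (and the manual) establish only that the four factors combine deterministically:

```cpp
#include <algorithm>

// Illustrative paraphrase of the land-value calculation (lines 264-271).
// Names and the exact weighting are my assumptions; the source combines
// distance to downtown, terrain, pollution, and crime, as the manual says.
int landValue(int distanceToCenter, int terrainValue,
              int pollution, int crimeRate) {
    int value = terrainValue;   // start from what the terrain is worth
    value -= distanceToCenter;  // land farther from downtown is worth less
    value -= pollution;         // pollution depresses value
    value -= crimeRate;         // so does crime
    return std::max(0, value);  // value bottoms out at zero
}
```

Notice the feedback loop the two sketches imply: crime lowers land value, and low land value raises crime.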
So there’s an interesting puzzle I’m trying to get at here. How does looking at the code teach us something new? If the manual describes the process, and the game enacts it, what does exploring the code do?
I think back to Sherry Turkle’s now classic work, Life on the Screen, about the relationship between identity formation and what we would now call social media. Turkle spends a great deal of time talking about what she calls, in a Baudrillardian fashion, the “seduction of the simulation.” And by simulations Turkle has in mind exactly what I’m talking about here, the Maxis games like SimCity, SimLife, and SimAnt that were so popular 15 years ago.
Turkle suggests that players can, on the one hand, surrender themselves totally to the simulation, openly accepting whatever processes are modeled within. On the other hand, players can reject the simulation entirely—what Turkle calls “simulation denial.” These are stark opposites, and our reaction to simulations obviously need not be entirely one or the other.
There’s a third alternative Turkle proposes: understanding the simulation, exploring its assumptions, both procedural and cultural (Turkle 71-72).
I’d argue that the close reading of code adds a fourth possibility, a fourth response to a simulation. Instead of surrendering to it, or rejecting it, or understanding it, we can deconstruct it. Take it apart. Open up the black box. See all the pieces and how they fit together. Even tweak the code ourselves and recompile it with our own algorithms inside.
When we crack open the code like this, we may well find surprises that playing the game or reading the manual will not tell us. Remember, code does what it says, but it also says things it does not do. Let’s consider the code for a file called disasters.cpp. Anyone with a passing familiarity with SimCity might be able to guess what a file called disasters.cpp does. It’s the routine that determines which random disasters will strike your city. The entire 408-line routine is worth looking at, but what I’ll draw your attention to is the section that begins at line 109, where the probability of the different possible disasters appears:
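For readers without the slides, here is a hedged C++ reconstruction of that selection logic. The function name pickDisaster, the case layout, and the wording of the comments are my paraphrase of the disasters.cpp routine, not verbatim code:

```cpp
#include <string>

// Hedged reconstruction of the disaster-selection switch in disasters.cpp.
// A roll of 0-8 picks one of nine outcomes; names and comments are my
// paraphrase, not the file's verbatim identifiers.
std::string pickDisaster(int roll) {
    switch (roll) {
        case 0:
        case 1: return "fire";        // 2 chances in 9, roughly 22%
        case 2:
        case 3: return "flood";       // likewise 2 in 9
        case 4: return "nothing";     // in the original SimCity this slot
                                      // was a 1-in-9 airplane crash; the
                                      // code was removed, leaving only a
                                      // comment behind
        case 5: return "tornado";
        case 6: return "earthquake";
        default: return "monster";    // only fires if pollution is high
    }
}
```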
In the midst of rather generic biblical disasters (you see here there’s a 22% chance of a fire, and a 22% chance of a flood), there is a startling excision of code, the trace of which is only visible in the programmer’s comments. In the original SimCity there was a 1 out of 9 chance that an airplane would crash into the city. After 9/11 this disaster was removed from the code at the request of Electronic Arts.
Playing Micropolis, say, as one of the children in the OLPC program, we’d never notice this erasure. And we’d never notice because the machine doesn’t notice—it stands outside the procedural rhetoric of the game. It’s only visible when we read the code. And then, it pops, even to non-programmers. We could raise any number of questions about this decision to elide 9/11. There are questions, for example, about the way the code is commented. None of the other disasters have any kind of contextual, historically-rooted comments, the effect of which is that the other disasters are naturalized—even the human-made disasters, like the Godzilla-like monster that terrorizes an over-polluted city.
There are questions about the relationship between simulation, disaster, and history that call to mind Don DeLillo’s White Noise, where one character tells another, “The more we rehearse disaster, the safer we’ll be from the real thing…. There is no substitute for a planned simulation” (196).
And finally there are questions about corporate influence and censorship—was EA’s request to remove the airplane crash really a request, or more of a condition? How does this relate to EA’s more recent decision in October of 2010 to remove the Taliban from its most recent version of Medal of Honor? If you don’t know, a controversy erupted last fall when word leaked out that Medal of Honor players would be able to assume the role of the Taliban in the multiplayer game. After weeks of holding out, EA ended up changing all references to the Taliban to the unimaginative “Opposing Force.” So at least twice, EA, and by proxy, the videogame industry in general, has erased history, making it more palatable, or as a cynic might see it, more marketable.
I want to close by circling back to Michael Mateas’s idea of procedural literacy. My commitment to critical code studies is ultimately pedagogical as much as it is methodological. I’m interested in how we can teach everyday people, and in particular, nonprogramming undergraduate students, procedural literacy. I think these pieces of code from Micropolis make excellent material for novices, and in fact, I do have my videogame studies students dig around in this source code. Most of them have never programmed, let alone in C++, so I give them some prompts to get them started.
And for you today, here in the audience, I have similar questions, about the snippets of code that I showed, but also questions more generally about close reading digital objects. What other approaches are worth taking? What other games, simulations, or applications have the source available for study, and what might you want to look at with those programs? And finally, what are the limits of reading code from a humanist perspective?