The Poetics of Non-Consumptive Reading

May 22nd, 2013


“Non-consumptive research” is the term digital humanities scholars use to describe the large-scale analysis of texts—say, topic modeling millions of books or data-mining tens of thousands of court cases. In non-consumptive research, a text is not read by a scholar so much as it is processed by a machine. The phrase frequently appears in the context of the long-running legal debate between various book digitization efforts (e.g. Google Books and HathiTrust) and publishers and copyright holders (e.g. the Authors Guild). For example, in one of the preliminary Google Books settlements, non-consumptive research is defined as “computational analysis” of one or more books “but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within.” Non-consumptive reading is not reading in any traditional way, and it certainly isn’t close reading. Examples of non-consumptive research that appear in the legal proceedings (the implications of which are explored by John Unsworth) include image analysis, text extraction, concordance development, citation extraction, linguistic analysis, automated translation, and indexing.

More recently, Matthew Sag has reformulated non-consumptive research as “nonexpressive use.” In an amicus brief filed on behalf of HathiTrust, Sag, Matthew Jockers, and Jason Schultz explain that with digital humanities-style book digitization, “works are copied for reasons unrelated to their protectable expressive qualities; none of the works in question are being read by humans as they would be if sitting on the shelves of a library or bookstore.” Scholars “do not read, understand, or enjoy” the copyrighted works in question. The works’ expressive qualities—tone, perspective, figurative language, thematic content, and so on—are mere words on a page, pieces of data used to generate metadata. This nonexpressive use is the primary legal defense of digitization for the sake of large-scale textual analysis.

In the last chapter of Macroanalysis (2013), Jockers argues that unless the law recognizes the value of nonexpressive use of copyrighted works, digital humanists will be stuck studying books in the public domain. “Today’s digital-minded literary scholar is shackled in time,” Jockers writes. “We are all, or are all soon to become, nineteenth centuryists.” This sentiment echoes my own argument in Debates in the Digital Humanities, in which I use the contemporary American novelist Don DeLillo as a case study. Yet as I hope is obvious in my chapter, I am somewhat skeptical about what large-scale text analysis might reveal about DeLillo’s novels that we don’t already know. I present a counterfactual timeline that satirizes what scholars might learn about DeLillo from non-consumptive research. I particularly like this entry from 1999, with its oblique reference to Barthes’ “The Death of the Author”:

An English professor skilled in computational analysis uses word frequency counts to compare the text of the White Noise Omnibus CD-ROM with a scanned and OCR’d version of the raucous but out-of-print novel Amazons by Cleo Birdwell, long suspected to be the work of DeLillo. The professor’s computer proves with a +/– 10 percent error rate that DeLillo is the author of Amazons, primarily based on the recurrence of the name “Murray Jay Siskind” in both novels. The English professor publishes his findings in the journal Social Text, concluding that “now that the author has been found, the text is explained.”

The joke—one of them, at least—is that everyone already knows DeLillo is the primary author of Amazons. No text analysis is needed. There is no +/– 10 percent error rate. We know it with 100% certainty, and a trip to the Don DeLillo Papers at the Harry Ransom Center at UT-Austin will reveal not only draft manuscripts of Amazons but also a letter from DeLillo to his agent that explains why he wants to publish under a pseudonym. (“I want to be out of the picture. I want to disengage myself,” DeLillo writes.)

My counterfactual timeline parodies other digital humanities applications as well, including data-mining, GIS, and 3D environments. I don’t mean to suggest these digital tools have no place in humanities research. My chapter has a lot of hyperbole, and I routinely overstate my case in order to make my point (a rhetorical flourish that itself parodies academic discourse). In any case, I’ve been thinking more critically lately about what non-consumptive research—that is, nonexpressive use—of contemporary copyright-protected works can add to our understanding of those works. I want to propose an approach to non-consumptive research that stands in direct opposition to the stance articulated by most digital humanists:

Let’s turn our non-consumptive use of digitized works into expressive use of digitized works.

Consider my project House of Leaves of Grass as an illustrative example. As I explain in my artist’s statement, House of Leaves of Grass is a 100 trillion stanza-long mashup of Walt Whitman’s Leaves of Grass (which is in the public domain) and Mark Z. Danielewski’s House of Leaves (which is not). To create the work (which was inspired by Sea and Spar Between by Nick Montfort and Stephanie Strickland), I subjected the source texts to a number of typical non-consumptive analyses. The most conventional of these analyses were simply word frequency lists, made using Voyant Tools. Here’s my list of the 2,017 most frequently used words in House of Leaves; these are all the words in Danielewski’s novel that appear ten times or more. Guided by this list and other non-consumptive analyses, I reassembled both common and unique words and phrases from Leaves of Grass and House of Leaves into an entirely new work. In other words, House of Leaves of Grass transforms a non-consumptive engagement of House of Leaves and Leaves of Grass into an expressive engagement of those texts, which can be read, understood, and enjoyed. I transformed what Franco Moretti would call a distant reading into a new textual—and expressive—object.
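For readers curious what such a frequency list involves, here is a minimal sketch of the kind of tally a tool like Voyant produces. The ten-occurrence threshold mirrors the cutoff described above, but the code and its tiny sample corpus are my own illustration, not Voyant's implementation.

```python
import re
from collections import Counter

def frequency_list(text, min_count=10):
    """Tally word frequencies and keep words appearing min_count or more
    times, the kind of list a tool like Voyant produces for a source text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {word: n for word, n in counts.items() if n >= min_count}

# Tiny stand-in corpus; the real input was the full text of the novel.
sample = "leaves of grass " * 12 + "house of leaves " * 9
frequent = frequency_list(sample)
print(frequent)  # 'house' appears only 9 times, so it is filtered out
```

Running the real novel through a function like this is what yields a list such as the 2,017 words appearing ten times or more.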

The way House of Leaves of Grass calls attention to key lines—say the variations of “This is not for you” from House of Leaves or the repetition of “I Sing!” from Leaves of Grass—reinstates the expressive potential of what had become, in my non-consumptive research, a database of words. The seemingly empirical “model” of a corpus typically built from distant reading offers itself up as an aesthetic object on its own terms. Furthermore, not only can we close-read House of Leaves of Grass (and, given its size, close reading may be the only conceivable way to read it), we can use House of Leaves of Grass to aid in a close reading of its source texts. My distant reading of House of Leaves and Leaves of Grass became a close reading.

Borrowing from my experience making House of Leaves of Grass, I want to advocate for a poetics of non-consumptive reading in the digital humanities. Scholars and students of art, literature, history, and culture ought to transform more of our non-consumptive research into expressive objects. Nonexpressive use of texts is a dead-end for the humanities. A computer model surrounded by a wall of explanatory words is not enough. Make the computer model itself an expressive object. Turn your data into a story, into a game, into art. Call it aesthetic empiricism or empirical aesthetics. Call it whatever you want. But without a poetics of machine reading, there is nothing.

Header image is Ted Underwood’s visualization of Andrew Goldstone’s topic model of the PMLA, from the Journal of Digital Humanities, Vol. 2., No. 1 (Winter 2012)

no life no life no life no life: the 100,000,000,000,000 stanzas of House of Leaves of Grass

May 8th, 2013

Mark Z. Danielewski’s House of Leaves is a massive novel about, among other things, a house that is bigger on the inside than the outside. Walt Whitman’s Leaves of Grass is a collection of poems about, among other things, the expansiveness of America itself.

What happens when these two works are remixed with each other? It’s not such an odd question. Though separated by nearly a century, they share many of the same concerns. Multitudes. Contradictions. Obsession. Physical impossibilities. Even an awareness of their own lives as textual objects.

To explore these connections between House of Leaves and Leaves of Grass I have created House of Leaves of Grass, a poem (like Leaves of Grass) that is for all practical purposes boundless (like the house on Ash Tree Lane in House of Leaves). Or rather, it is bounded on an order of magnitude that makes it untraversable in its entirety. The number of stanzas (from stanza, the Italian word for “room”) approximates the number of cells in the human body, around 100 trillion. And yet the container for this text is a mere 24K.

There are three distinct source texts for House of Leaves of Grass. As its title suggests, House of Leaves of Grass remixes Danielewski’s House of Leaves (2000) with Whitman’s Leaves of Grass (the “deathbed edition” of 1891-1892). Key words and phrases were selected from these two works according to either frequency of appearance or thematic significance and then algorithmically remixed into couplets based on seven templates. The third source text for House of Leaves of Grass is its electronic literature forebear, Nick Montfort and Stephanie Strickland’s Sea and Spar Between (2011). Sea and Spar Between provided inspiration (and the underlying platform) for House of Leaves of Grass, though the two works are dramatically different in terms of content and tone.
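As a rough illustration of this remix procedure, here is a seeded sketch in Python. The two templates and short word lists below are invented stand-ins, not the actual seven templates or the vocabularies culled from the books.

```python
import random

# Invented stand-ins: NOT the actual seven templates or the word lists
# drawn from House of Leaves and Leaves of Grass.
TEMPLATES = [
    "{a} of {b} and {c},",
    "this {a} is not for {b}, o my {c}.",
]
WHITMAN_WORDS = ["grass", "leaves", "multitudes", "song"]
DANIELEWSKI_WORDS = ["house", "hallway", "darkness", "echo"]

def stanza(rng):
    """Fill each template with words sampled from both source vocabularies."""
    vocabulary = WHITMAN_WORDS + DANIELEWSKI_WORDS
    lines = []
    for template in TEMPLATES:
        a, b, c = rng.sample(vocabulary, 3)
        lines.append(template.format(a=a, b=b, c=c))
    return "\n".join(lines)

# A seeded generator makes any given stanza reproducible on demand.
print(stanza(random.Random(2013)))
```

With seven templates and word lists of realistic size, the space of possible stanzas quickly reaches the trillions while the program itself stays tiny.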

House of Leaves of Grass is available online. The work displays properly in any modern computer-based browser, such as Firefox, Safari, or Chrome. A keyboard and mouse are required to explore the work. (I prefer using the arrow keys to navigate, and the mouse wheel—or the multi-touch equivalent—to zoom in and out of the work.)

While the instructions for reading House of Leaves of Grass provide some details about the work, here is more background about the data sources and tools I used:


Tools

  • Modified version of the JavaScript-based Sea and Spar Between, by Nick Montfort and Stephanie Strickland

Data Sources

  • Mark Z. Danielewski, House of Leaves (2000 full-color 2nd edition, scanned and OCR’d)
  • Walt Whitman, Leaves of Grass (1891-1892 edition, from Project Gutenberg)
  • Spreadsheet of word frequencies, n-grams, and other data, generated from the texts above using the tools below


Electronic Literature after Flash (MLA14 Proposal)

April 10th, 2013

I recently proposed a sequence of lightning talks for the next Modern Language Association convention in Chicago (January 2014). The participants are tackling a literary issue that is not at all theoretical: the future of electronic literature. I’ve also built in a substantial amount of time for an open discussion between the audience and my participants—who are all key figures in the world of new media studies. And I’m thrilled that two of them—Dene Grigar and Stuart Moulthrop—just received an NEH grant dedicated to a similar question: documenting the experience of early electronic literature.

Electronic literature can be broadly conceived as literary works created for digital media that in some way take advantage of the unique affordances of those technological forms. Hallmarks of electronic literature (e-lit) include interactivity, immersiveness, fluidly kinetic text and images, and a reliance on the procedural and algorithmic capabilities of computers. Unlike the avant-garde art and experimental poetry that is its direct forebear, e-lit has been dominated for much of its existence by a single, proprietary technology: Adobe’s Flash. For fifteen years, many e-lit authors have relied on Flash—and its Macromedia-era predecessor, Shockwave—to develop their multimedia works. And for fifteen years, readers of e-lit have relied on Flash running in their web browsers to engage with these works.

Flash is dying though. Apple does not allow Flash in its wildly popular iPhones and iPads. Android no longer supports Flash on its smartphones and tablets. Even Adobe itself has stopped throwing its weight behind Flash. Flash is dying. And with it, potentially an entire generation of e-lit work that cannot be accessed without Flash. The slow death of Flash also leaves a host of authors who can no longer create in their chosen medium. It’s as if a novelist were told that she could no longer use a word processor—indeed, no longer even use words.

Or is it?

This roundtable brings together a range of practicing e-lit authors and scholars to discuss what the end of Flash means for electronic literature, new media, and the broader field of digital humanities. Each participant will limit his or her remarks to a strictly timed six minutes, with the bulk of the session devoted to an open discussion between the panel and the audience. We will open with Chris Funkhouser, who argues that the importance of Flash to digital poetry in the early years of the 21st century cannot be overstated. An e-lit poet himself, Funkhouser suggests that it is not the end of the software itself that is his primary concern, but the question of what happens to the aesthetic principles that have emerged out of Flash.

Building on Funkhouser’s ideas, Dene Grigar will next highlight two critical characteristics of Flash poetry: kinopoeia (movement that imitates or suggests a word or idea) and musicopoeia (music that imitates or suggests a word or idea). Grigar highlights three works that show the need for such new terminology: Ana Maria Uribe’s Anipoemas, John Kusch’s Red Lily, and Thom Swiss’s Shy Boy.

After Funkhouser’s and Grigar’s introductions to Flash we move to questions about the preservation, emulation, and study of Flash-based electronic literature. Zach Whalen begins this discussion by recalling earlier concerns about the preservation of web-based e-lit. Whalen focuses on Talan Memmott’s groundbreaking Lexia to Perplexia, which cannot be viewed in modern web browsers. Whalen explores why Lexia to Perplexia “breaks” and what one must alter in order to “fix” it. Whalen then questions the tacit assumption of digital preservation projects, which is that digital works must always be preserved. Ultimately, Whalen concludes that ephemerality and obsolescence are significant aesthetic properties of electronic literary works.

Next, Leonardo Flores picks up on the ethical and artistic dimensions of preservation by exploring the strategies that e-lit authors have developed to extend the life of their works. Using Dreaming Methods and R3/\/\1X\/\/0RX (remixworx)—two British e-lit collectives—as his case studies, Flores finds one strategy is to make the source material of individual works public, while another strategy involves migrating works to alternative platforms, such as HTML5 or iOS, that offer similar—but not the same—functionality.

The importance of code arises in both Whalen’s and Flores’ lightning talks, and Mark Marino pursues this question full throttle in his talk about code studies and Flash e-lit. Marino grounds his insights in his collaborative study of William Poundstone’s canonical work, Project for Tachistoscope [Bottomless Pit]. Marino suggests that studying the underlying ActionScript code of Project for Tachistoscope can deepen our understanding of the work, revealing new layers to the work that more screen-focused analyses neglect.

The final two lightning talks imagine the future of electronic literature without Flash. Amanda Visconti surveys the way e-lit can appropriate digital platforms that were never designed for poetics or narrative. Visconti argues that such platform poaching combines the veneer of credibility associated with a digital archive or a wiki with a narrative license that is simultaneously ethically dangerous and rich with possibilities for counterfactual knowledge.

Finally, the formal part of the roundtable ends with Stuart Moulthrop, the author of some of the most widely read and taught electronic literature works. In an act of provocation Moulthrop argues that “there never was such a thing as Flash.” Moulthrop sees Flash as a blip in the idiosyncratic timeline of electronic literature. Flash was, Moulthrop points out, an always-limited convenience, merely a way of developing interesting interfaces for the Web. Moulthrop ultimately finds that the idea of an interface-based literary art is larger and more durable than Adobe’s powerful but deeply flawed product. 

Moderated by Mark Sample, this diverse “Electronic Literature after Flash” roundtable capitalizes upon the growing interest in electronic literature—and the digital humanities more generally. Given its focus on the preservation and study of soon-to-be obsolescent forms of technology, this roundtable will also appeal to MLA members invested in the more conventional fields of textual studies, bibliographic preservation, media studies, and information sciences. And finally, the roundtable speaks to the enduring concerns of authors and artists who simply want their works to be available and accessible to future generations of readers.

Image: game, game, game, and again game by Jason Nelson

CFP: Electronic Literature after Flash (MLA 2014, Chicago)

March 9th, 2013

Attention artists, creators, theorists, teachers, curators, and archivists of electronic literature!

I’m putting together an e-lit roundtable for the Modern Language Association Convention in Chicago next January. The panel will be “Electronic Literature after Flash” and I’m hoping to have a wide range of voices represented. See the full CFP for more details. Abstracts due March 15, 2013.

An Account of Randomness in Literary Computing

January 8th, 2013

Below is the text of my presentation at the 2013 MLA Convention in Boston. The panel was Reading the Invisible and Unwanted in Old and New Media, and it was assembled by Lori Emerson, Paul Benzon, Zach Whalen, and myself.

Seeking to have a rich discussion period—which we did indeed have—we limited our talks to about 12 minutes each. My presentation was therefore more evocative than comprehensive, more open-ended than conclusive. There are primary sources I’m still searching for and technical details I’m still sorting out. I welcome feedback, criticism, and leads.

An Account of Randomness in Literary Computing
Mark Sample
MLA 2013, Boston

There’s a very simple question I want to ask this evening:

Where does randomness come from?

Randomness has a rich history in arts and literature, which I don’t need to go into today. Suffice it to say that long before Tristan Tzara suggested writing a poem by pulling words out of a hat, artists, composers, and writers have used so-called “chance operations” to create unpredictable, provocative, and occasionally nonsensical work. John Cage famously used chance operations in his experimental compositions, relying on lists of random numbers from Bell Labs to determine elements like pitch, amplitude, and duration (Holmes 107–108). Jackson Mac Low similarly used random numbers to generate his poetry, in particular relying on a book called A Million Random Digits with 100,000 Normal Deviates to supply him with the random numbers (Zweig 85).


Million Random Digits with 100,000 Normal Deviates


Published by the RAND Corporation in 1955 to supply Cold War scientists with random numbers to use in statistical modeling (Bennett 135), the book is still in print—and you should check out the parody reviews on Amazon. “With so many terrific random digits,” one reviewer jokes, “it’s a shame they didn’t sort them, to make it easier to find the one you’re looking for.”

This joke actually speaks to a key aspect of randomness: the need to reuse random numbers, so that if, say, you are running a simulation of nuclear fission, you can repeat the simulation with the same random numbers—that is, the same probability—while testing some other variable. In fact, most of the early work on random number generation in the United States was funded by either the U.S. Atomic Energy Commission or the U.S. military (Montfort et al. 128). The RAND Corporation itself began as a research and development arm of the U.S. Air Force.
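The value of reusable randomness is easy to demonstrate with a seeded pseudorandom generator: fixing the seed fixes the entire sequence, so a simulation can be rerun under identical "chance" while other variables change. Here is a minimal Monte Carlo sketch, my own illustration rather than anything tied to historical code:

```python
import random

def estimate_pi(seed, n=10_000):
    """Monte Carlo estimate of pi. Fixing the seed fixes the whole random
    sequence, so the simulation can be rerun under identical 'chance'
    while other variables are changed."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

# Same seed, same "random" numbers, same result: the reusability that a
# printed table of random digits offered Cold War modelers.
assert estimate_pi(seed=1955) == estimate_pi(seed=1955)
```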

Now the thing with going down a list of random numbers in a book, or pulling words out of a hat—a composition method, by the way, that Thom Yorke used for Kid A after a frustrating bout of writer’s block—is that the process is visible. Randomness in these cases produces surprises, but the source of the randomness is not itself a surprise. You can see how it’s done.

What I want to ask here today is, where does randomness come from when it’s invisible? What’s the digital equivalent of pulling words out of a hat? And what are the implications of chance operations performed by a machine?

To begin to answer these questions I am going to look at two early works of electronic literature that rely on chance operations. And when I say early works of electronic literature, I mean early, from fifty and sixty years ago. One of these works has been well studied and the other has been all but forgotten.


My first case study is the Strachey Love Letter Generator. Programmed by Christopher Strachey, a close friend of Alan Turing, the Love Letter Generator is likely—as Noah Wardrip-Fruin argues—the first work of electronic literature, which is to say a digital work that somehow makes us more aware of language and meaning-making. Strachey’s program “wrote” a series of purplish prose love letters on the Ferranti Mark I Computer—the first commercially available computer—at Manchester University in 1952 (Wardrip-Fruin “Digital Media” 302):

M. U. C.

Affectionately known as M.U.C., the Manchester University Computer could produce these love letters at a pace of one per minute, for hours on end, without producing a duplicate.

The “trick,” as Strachey put it in a 1954 essay about the program (29-30), is its two template sentences (My adjective noun adverb verb your adjective noun and You are my adjective noun) in which the nouns, adjectives, and adverbs are randomly selected from a list of words Strachey had culled from a Roget’s thesaurus. Adverbs and adjectives randomly drop out of the sentence as well, and the computer randomly alternates the two sentences.
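Strachey's trick is simple enough to sketch in a few lines of modern Python. The word lists below are abbreviated placeholders rather than Strachey's actual thesaurus-culled vocabulary, but the procedure follows his description: fill the two templates, let adjectives and adverbs drop out at random, and alternate the sentences at random.

```python
import random

# Abbreviated placeholder word lists; Strachey culled his from a thesaurus.
ADJECTIVES = ["adorable", "beautiful", "darling", "loving", "precious"]
NOUNS = ["desire", "devotion", "fancy", "heart", "longing"]
ADVERBS = ["anxiously", "keenly", "tenderly", "wistfully"]
VERBS = ["adores", "cherishes", "treasures", "wants"]

def maybe(rng, words):
    """Adjectives and adverbs randomly drop out of the templates."""
    return rng.choice(words) + " " if rng.random() < 0.5 else ""

def sentence(rng):
    if rng.random() < 0.5:
        # Template 1: My (adjective) noun (adverb) verb your (adjective) noun
        return (f"My {maybe(rng, ADJECTIVES)}{rng.choice(NOUNS)} "
                f"{maybe(rng, ADVERBS)}{rng.choice(VERBS)} "
                f"your {maybe(rng, ADJECTIVES)}{rng.choice(NOUNS)}.")
    # Template 2: You are my (adjective) noun
    return f"You are my {maybe(rng, ADJECTIVES)}{rng.choice(NOUNS)}."

rng = random.Random()
print(" ".join(sentence(rng) for _ in range(5)))
```

Even this toy version suggests how a handful of lists and two templates can run for hours without repeating a letter.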

The Love Letter Generator has attracted—for a work of electronic literature—a great deal of scholarly attention. Using Strachey’s original notes and source code (see figure to the left), which are archived at the Bodleian Library at the University of Oxford, David Link has built an emulator that runs Strachey’s program, and Noah Wardrip-Fruin has written a masterful study of both the generator and its historical context.

As Wardrip-Fruin calculates, given that there are 31 possible adjectives after the first sentence’s opening possessive pronoun “My” and then 20 possible nouns that could occupy the following slot, the first three words of this sentence alone have 899 possibilities. And the entire sentence has over 424 million combinations (424,305,525 to be precise) (“Digital Media” 311).


A partial list of word combinations for a single sentence from the Strachey Love Letter Generator

On the whole, Strachey was publicly dismissive of his foray into the literary use of computers. In his 1954 essay, which appeared in the prestigious trans-Atlantic arts and culture journal Encounter (a journal, it would be revealed in the late 1960s, that was primarily funded by the CIA—see Berry, 1993), Strachey used the example of the love letters to illustrate his point that simple rules can generate diverse and unexpected results (Strachey 29-30). And indeed, the Love Letter Generator qualifies as an early example of what Wardrip-Fruin calls, referring to a different work entirely, the Tale-Spin effect: a surface illusion of simplicity which hides a much more complicated—and often more interesting—series of internal processes (Expressive Processing 122).

Wardrip-Fruin coined this term—the Tale-Spin effect—from Tale-Spin, an early story generation system designed by James Meehan at Yale University in 1976. Tale-Spin tended to produce flat, plodding narratives, though there was the occasional existential story:

Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. He was unable to call for help. He drowned.

But even in these suggestive cases, the narratives give no sense of the process-intensive—to borrow from Chris Crawford—calculations and assumptions occurring behind the interface of Tale-Spin.

In a similar fashion, no single love letter reveals the combinatory procedures at work by the Mark I computer.

M. U. C.

This Tale-Spin effect—the underlying processes obscured by the seemingly simplistic, even comical surface text—is what draws Wardrip-Fruin to the work. But I want to go deeper than the algorithmic process that can produce hundreds of millions of possible love letters. I want to know, what is the source of randomness in the algorithm? We know Strachey’s program employs randomness, but where does that randomness come from? This is something the program—the source code itself—cannot tell us, because randomness operates at a different level, not at the level of code or software, but in the machine itself, at the level of hardware.

In the case of Strachey’s Love Letter Generator, we must consider the computer it was designed for, the Mark I. One of the remarkable features of this computer was that it had a hardware-based random number generator. The random number generator pulled a string of random numbers from what Turing called “resistance noise”—that is, electrical signals produced by the physical functioning of the machine itself—and put the twenty least significant digits of this number into the Mark I’s accumulator—its primary mathematical engine (Turing). Alan Turing himself specifically requested this feature, having theorized with his earlier Turing Machine that a purely logical machine could not produce randomness (Shiner). And Turing knew—like his Cold War counterparts in the United States—that random numbers were crucial for any kind of statistical modeling of nuclear fission.

I have more to say about randomness in the Strachey Love Letter Generator, but before I do, I want to move to my second case study. This is an early, largely unheralded work called SAGA. SAGA was a script-writing program on the TX-0 computer. The TX-0 was the first computer to replace vacuum tubes with transistors and also the first to use interactive graphics—it even had a light pen.

The TX-0 was built at Lincoln Laboratory in 1956—a classified MIT facility in Bedford, Massachusetts chartered with the mission of designing the nation’s first air defense detection system. After TX-0 proved that transistors could out-perform and outlast vacuum tubes, the computer was transferred to MIT’s Research Laboratory of Electronics in 1958 (McKenzie), where it became a kind of playground for the first generation of hackers (Levy 29-30).

In 1960, CBS broadcast an hour-long special about computers called “The Thinking Machine.” For the show MIT engineers Douglas Ross and Harrison Morse wrote a 13,000-line program in six weeks that generated a climactic shoot-out scene from a Western.

Several computer-generated variations of the script were performed on the CBS program. As Ross told the story years later, “The CBS director said, ‘Gee, Westerns are so cut and dried couldn’t you write a program for one?’ And I was talked into it.”


The TX-0’s large—for the time period—magnetic core memory was used “to keep track of everything down to the actors’ hands.” As Ross explained it, “The logic choreographed the movement of each object, hands, guns, glasses, doors, etc.” (“Highlights from the Computer Museum Report”).

And here is the actual output from the TX-0, printed on the lab’s Flexowriter printer, where you can get a sense of the way SAGA generated the play:

TX-0 SAGA Output

In the CBS broadcast, Ross explained the narrative sequence as a series of forking paths.


Each “run” of SAGA was defined by sixteen initial state variables, with each state having several weighted branches (Ross 2). For example, one of the initial settings is who sees whom first. Does the sheriff see the robber first or is it the other way around? This variable will influence who shoots first as well.

There’s also a variable the programmers called the “inebriation factor,” which increases a bit with every shot of whiskey, and doubles for every swig straight from the bottle. The more the robber drinks, the less logical he will be. In short, every possibility has its own likely consequence, measured in terms of probability.

The MIT engineers had a mathematical formula for this probability (Ross 2):


But more revealing to us is the procedure itself of writing one of these Western playlets.

First, a random number was set; this number determined the probability of the various weighted branches. The programmers did this simply by typing a number following the RUN command when they launched SAGA; you can see this in the second slide above, where the random number is 51455. Next a timing number established how long the robber is alone before the sheriff arrives (the longer the robber is alone, the more likely he’ll drink). Finally each state variable is read, and the outcome—or branch—of each step is determined.

What I want to call your attention to is how the random number is not generated by the machine. It is entered in “by hand” when one “runs” the program. In fact, launching SAGA with the same random number and the same switch settings will reproduce a play exactly (Ross 2).
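To make that reproducibility concrete, here is a toy sketch of SAGA-style weighted branching. The states and weights are invented for illustration, not Ross's actual sixteen variables; what matters is that the hand-entered number seeds every choice, so one number yields one play.

```python
import random

def run_saga(seed):
    """Toy sketch of SAGA-style weighted branching (invented states and
    weights, not Ross's actual sixteen variables). The hand-entered
    number seeds every choice, so one number reproduces one play."""
    rng = random.Random(seed)  # e.g. typing 51455 after the RUN command
    script = []
    inebriation = 0.0
    # One initial state variable: who sees whom first?
    sheriff_first = rng.random() < 0.5
    script.append("Sheriff sees robber." if sheriff_first
                  else "Robber sees sheriff.")
    # Drinking raises the inebriation factor, degrading the robber's logic.
    if rng.random() < 0.3 + inebriation:
        script.append("Robber takes a swig from the bottle.")
        inebriation += 0.4
    # Who sees first (and how drunk the robber is) biases who shoots first.
    shooter = ("Sheriff" if sheriff_first or rng.random() < inebriation
               else "Robber")
    script.append(f"{shooter} shoots first.")
    return script

# Same "random number," same settings, same play (Ross 2).
assert run_saga(51455) == run_saga(51455)
```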

In a foundational 1996 work called Virtual Muse, Charles Hartman observed that randomness “has always been the main contribution that computers have made to the writing of poetry”—and one might be tempted to add, to electronic literature in general (Hartman 30). Yet the two case studies I have presented today complicate this notion. The Strachey Love Letter Generator would appear to exemplify the use of randomness in electronic literature. But—and I didn’t say this earlier—the random numbers generated by the Mark I’s method tended not to be reliable; remember, random numbers often need to be reused, so that the programs that run them can be repeated. Repeatable randomness of this kind is called pseudo-randomness. This is why a book like the RAND Corporation’s A Million Random Digits was so valuable.

But the Mark I’s random numbers were so unreliable that they made debugging programs difficult, because errors never occurred the same way twice. The random number instruction eventually fell out of use on the machine (Campbell-Kelly 136). Skip ahead eight years to the TX-0 and we find a computer that doesn’t even have a random number generator. The random numbers must be entered manually.

The examples of the Love Letters and SAGA suggest at least two things about the source of randomness in literary computing. One, there is a social-historical source; wherever you look at randomness in early computing, the Cold War is there. The impact of the Cold War upon computing and videogames has been well documented (see, for example, Edwards 1996 and Crogan 2011), but few have studied how deeply embedded the Cold War is in the very software algorithms and hardware processes of modern computing.

Second, randomness does not have a progressive timeline. The story of randomness in computing—and especially in literary computing—is neither straightforward nor self-evident. Its history is uneven, contested, and mostly invisible, so that even when we understand the concept of randomness in electronic literature—and new media in general—we often misapprehend its source.

Works Cited
Bennett, Deborah. Randomness. Cambridge, MA: Harvard University Press, 1998. Print.

Berry, Neil. “Encounter.” Antioch Review 51.2 (1993): 194. Print.

Crogan, Patrick. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press, 2011. Print.

Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press, 1996. Print.

Hartman, Charles O. Virtual Muse: Experiments in Computer Poetry. Hanover, NH: Wesleyan University Press, 1996. Print.

“Highlights from the Computer Museum Report.” Spring 1984. Web. 23 Dec. 2012.

Holmes, Thomas B. Electronic and Experimental Music: A History of a New Sound. Psychology Press, 2002. Print.

Levy, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol, CA: O’Reilly Media, 2010. Print.

McKenzie, John A. “TX-0 Computer History.” 1 Oct. 1974. Web. 20 Dec. 2012.

Montfort, Nick et al. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. Cambridge, MA: MIT Press, 2013. Print.

Ross, D.T. “Memorandum 8436-M-29: Preliminary Operating Notes for SAGA II.” 19 Oct. 1960. Web. 20 Dec. 2012. <>.

Shiner, Jeff. “Alan Turing’s Contribution Can’t Be Computed.” Agile Blog. 29 Dec. 2012. <>.

Strachey, Christopher. “The ‘Thinking’ Machine.” Encounter III.4 (1954): 25–31. Print.

Turing, A.M. “Programmers’ Handbook for the Manchester Electronic Computer Mark II.” Oct. 1952. Web. 23 Dec. 2012.

Wardrip-Fruin, Noah. “Digital Media Archaeology: Interpreting Computational Processes.” Media Archaeology: Approaches, Applications, and Implications. Ed. Erkki Huhtamo and Jussi Parikka. Berkeley, CA: University of California Press, 2011. Print.

—. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, MA: MIT Press, 2009. Print.

Zweig, Ellen. “Jackson Mac Low: The Limits of Formalism.” Poetics Today 3.3 (1982): 79–86. Web. 1 Jan. 2013.

IMAGE CREDITS (in order of appearance)

Being, On. Alan Turing and the Mark 1. 2010. 24 Dec. 2012. <>.

A Million Random Digits with 100,000 Normal Deviates. Courtesy of Casey Reas and 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. Cambridge, MA: MIT Press, 2013. 129.

“Ferranti Mark 1 Sales Literature.” 24 Dec. 2012. <>.

Image of Love Letter Source code courtesy of Link, David. “There Must Be an Angel: On the Beginnings of the Arithmetics of Rays.” 2006. 23 Dec. 2012. <>.

Still Image from “The Thinking Machine.” CBS, October 26, 1960. <—mit-centennial-film>.

Western Drama Written by TX-0. 1960. Computer History Museum. Web. 20 Dec. 2012. <>.

SAGA Printout from Pfeiffer, John E. The Thinking Machine. Philadelphia: Lippincott, 1962. 132. Print.

Doug Ross Explaining TX-0 Program in the Film “The Thinking Machine.” 1960. Computer History Museum. Web. 20 Dec. 2012. <>.

Strange Rain and the Poetics of Motion and Touch

February 5th, 2012 § 3 comments § permalink

Dramatic Clouds over the Fields

Here (finally) is the talk I gave at the 2012 MLA Convention in Seattle. I was on Lori Emerson’s Reading Writing Interfaces: E-Literature’s Past and Present panel, along with Dene Grigar, Stephanie Strickland, and Marjorie Luesebrink. Lori’s talk on e-lit’s stand against the interface-free aesthetic worked particularly well with my own talk, which focused on Erik Loyer’s Strange Rain. I don’t offer a reading of Strange Rain so much as I use the piece as an entry point to think about interfaces—and my larger goal of reframing our concept of interfaces.
Title Slide: Strange Rain and the Poetics of Touch and Motion

Today I want to talk about Strange Rain, an experiment in digital storytelling by the new media artist Erik Loyer.

The Menu to Strange Rain

Strange Rain came out in 2010 and runs on Apple iOS devices—the iPhone, iPod Touch, and iPad. As Loyer describes the work, Strange Rain turns your iPad into a “skylight on a rainy day.” You can play Strange Rain in four different modes. In the wordless mode, dark storm clouds shroud the screen, and the player can touch and tap its surface, causing columns of rain to pitter patter down upon the player’s first-person perspective. The raindrops appear to splatter on the screen, streaking it for a moment, and then slowly fade away. Each tap also plays a note or two of a bell-like celesta.

The Wordless Mode of Strange Rain

The other modes build upon this core mechanic. In the “whispers” mode, each tap causes words as well as raindrops to fall from the sky.

The Whisper Mode of Strange Rain

The “story” mode is the heart of Strange Rain. Here the player triggers the thoughts of Alphonse, a man standing in the rain, pondering a family tragedy.

The Story Mode of Strange Rain

And finally, with the most recent update of the app, there’s a fourth mode, the “feeds” mode. This allows players to replace the text of the story with tweets from a Twitter search, say the #MLA12 hashtag.

The Feeds Mode of Strange Rain

Note that any authorial information—Twitter user name, time or date—is stripped from the tweet when it appears, as if the tweet were the player’s own thoughts, making the feed mode more intimate than you might expect.
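Mechanically, presenting a tweet as the player’s own thought amounts to discarding every field except the text. A hypothetical sketch (the function name is my invention, and the field names only loosely follow Twitter’s old search API):

```python
def interior_monologue(tweet):
    """Strip authorial metadata so a tweet reads as the player's own thought."""
    # Keep only the expressive text; drop user name, timestamp, and the rest.
    return tweet["text"]

tweet = {
    "text": "rain again, and the smell of wet pavement",
    "user": "someone_else",
    "created_at": "Sun, 08 Jan 2012 18:00:00 +0000",
}
assert interior_monologue(tweet) == "rain again, and the smell of wet pavement"
```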

Another View of the Feeds Mode of Strange Rain

Like many of the best works of electronic literature, Strange Rain can be talked about, and framed, in a number of ways. Especially in the wordless mode, Strange Rain fits alongside the growing genre of meditation apps for mobile devices, apps meant to calm the mind and soothe the spirit—like Pocket Pond:

The Meditation App Pocket Pond

In Pocket Pond, every touch of the screen creates a rippling effect.

A Miniature Zen Garden

The digital equivalent of a miniature zen garden, these apps allow us to contemplate minimalistic nature scenes on devices built by women workers in a Foxconn factory in Chengdu, China.

Foxconn Factory Explosion

It’s appropriate that it’s the “wordless mode” that provides the seemingly most unmediated or direct experience of Strange Rain, when those workers who built the device it runs on are all but silent, or silenced.

The “whispers” mode, meanwhile, with its words falling from the sky, recalls the trope in new media of falling letters—words that descend on the screen or even in large-scale multimedia installation pieces such as Camille Utterback and Romy Achituv’s Text Rain (1999).

Alison Clifford's The Sweet Old Etcetera Text Rain


And of course, the story mode even more directly situates Strange Rain as a work of electronic literature, allowing the reader to tap through “Convertible,” a short story by Loyer, which, not coincidentally I think, involves a car crash, another long-standing trope of electronic literature.

Michael Joyce's Afternoon

As early as 1994, in fact, Stuart Moulthrop asked the question, “Why are there so many car wrecks in hypertext fiction?” (Moulthrop, “Crash” 5). Moulthrop speculated that it’s because hypertext and car crashes share the same kind of “hyperkinetic hurtle” and “disintegrating sensory whirl” (8). Perhaps Moulthrop’s characterization of hypertext held up in 1994…

Injured driver & badly damaged vehicle from Kraftwagen Depot München, June 1915

…(though I’m not sure it did), but certainly today there are many more metaphors one can use to describe electronic literature than a car crash. And in fact I’d suggest that Strange Rain is intentionally playing with the car crash metaphor and even overturning it with its slow, meditative pace.

Alongside this reflective component, Strange Rain contains elements that make the work very much a game, featuring what any player of modern console or PC games would find familiar: achievements, unlocked by triggering particular moments in the story. Strange Rain even shows up in iOS’s Game Center.

iOS Game Center

The way users can tap through Alphonse’s thoughts in Strange Rain recalls one of Moulthrop’s own works, the post-9/11 Pax, which Moulthrop calls, using a term from John Cayley, a “textual instrument”—as if the piece were a musical instrument that produces text rather than music.


We could think of Strange Rain as a textual instrument, then, or to use Noah Wardrip-Fruin’s reformulation of Cayley’s idea, as “playable media.” Wardrip-Fruin suggests that thinking of electronic literature in terms of playable media replaces a rather uninteresting question—“Is this a game?”—with a more productive inquiry, “How is this played?”

There’s plenty to say about all of these framing elements of Strange Rain—as an artwork, a story, a game, an instrument—but I want to follow Wardrip-Fruin’s advice and think about the question, how is Strange Rain played? More specifically, what is its interface? What happens when we think about Strange Rain in terms of the poetics of motion and touch?

Let me show you a quick video of Erik Loyer demonstrating the interface of Strange Rain, because there are a few characteristics of the piece that are lost in my description of it.

A key element that I hope you can see from this video is that the dominant visual element of Strange Rain—the background photograph—is never entirely visible on the screen. The photograph was taken during a tornado watch in Paulding County in northwest Ohio in 2007 and posted as a Creative Commons image on the photo-sharing site Flickr. But we never see this entire image at once on the iPad or iPhone screen. The boundaries of the photograph exceed the dimensions of the screen, and Strange Rain uses the hardware accelerometer to detect your motion, your movements, so that when you tilt the iPad even slightly, the image tilts slightly in the opposite direction. It’s as if there’s a larger world inside the screen, or rather, behind the screen. And this world is broader and deeper than what’s seen on the surface. Loyer described it to me this way: it’s “like augmented reality, but without the annoying distraction of trying to actually see the real world through the display” (Loyer 1).
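This tilt behavior can be described as a simple parallax mapping: the oversized background shifts opposite the device’s tilt, clamped so its edges never come into view. A minimal sketch, where the function name, the gain, and the normalized tilt reading are my assumptions, not anything from Loyer’s code:

```python
def parallax_offset(tilt, slack, gain=1.0):
    """Map a normalized tilt reading (-1..1) to a background offset in pixels.

    The photograph is larger than the screen; `slack` is the number of hidden
    pixels on each side. Tilting one way slides the image the opposite way,
    as if peering into a world behind the glass.
    """
    offset = -tilt * gain * slack           # opposite direction to the tilt
    return max(-slack, min(slack, offset))  # never expose the image's edge

# Tilt right by half the range: the background slides left by half the slack.
assert parallax_offset(0.5, slack=100) == -50.0
# An extreme tilt is clamped so the photograph's boundary stays off-screen.
assert parallax_offset(3.0, slack=100) == -100.0
```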

This kinetic screen is one of the most compelling features of Strange Rain. As soon as you pick up the iPad or iPhone with Strange Rain running, it reacts to you. The work interacts with you before you even realize you’re interacting with it. Strange Rain taps into a kind of “camcorder subjectivity”—the now entirely naturalized practice of viewing the world through devices that have cameras on one end and screens on the other. Think about older video cameras, which you held up to your eye, seeing the world straight through the camera. Then think of Flip cams or smartphone cameras, which we hold out in front of us. We looked through older video cameras as we filmed. We look at smartphone cameras as we film.


So when we pick up Strange Rain we have already been trained to accept this camcorder model, but we’re momentarily taken aback, I think, to discover that it doesn’t work quite the way we think it should. That is, it’s as if we are shooting a handheld camcorder onto a scene we cannot really control.

This aspect of the interface plays out in interesting ways. Loyer has an illustrative story about the first public test of Strange Rain. As people began to play the piece, many of them held it up over their heads so that “it looked like the rain was falling on them from above—many people thought that was the intended way to play the piece” (Loyer 1).

That is, people wanted it to work like a camcorder, and when it didn’t, they themselves tried to match their exterior actions to the interior environment of the piece.

There’s more to say about the poetics of motion in Strange Rain, but I want to move on to the idea of touch. We’ve seen how touch propels the narrative of Strange Rain. Originally Loyer had planned on having each tap generate a single word, but he found that to be too tedious, requiring too many taps to telegraph a single thought (Loyer 1). It was, oddly enough in a work of playable media meant to be intimate and contemplative, too slow. Or rather, it required too much action—too much tapping—on the part of the reader. So much tapping destroyed the slow, recursive feeling of the piece, making it frantic instead of serene.

Loyer tweaked the mechanic then, making each tap produce a distinct thought. Nonetheless, from my own experience and from watching other readers, I know that there’s an urge to tap quickly. In the story mode of Strange Rain you sometimes get caught in narrative loops—which again is Loyer playing with the idea of recursivity found in early hypertext fiction rather than merely reproducing it. Given the repetitive nature of Strange Rain, I’ve seen people want to fight against the system and tap fast. You see the same thought five times in a row, and you start tapping faster, even drumming using multiple fingers. And the piece paradoxically encourages this, as the only way to bring about a conclusion is to provoke an intense moment of anxiety for Alphonse, which you do by tapping more frantically.
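One way to picture this slow-versus-frantic mechanic is as a tap-rate threshold over a sliding window. The sketch below is purely illustrative; the window length, the threshold, and the state labels are my inventions, not Loyer’s implementation:

```python
from collections import deque

class TapTracker:
    """Classify tapping as calm or frantic from recent tap timestamps."""

    def __init__(self, window=2.0, calm_max=3.0):
        self.window = window      # seconds of tap history to keep
        self.calm_max = calm_max  # taps/sec above this counts as frantic
        self.taps = deque()

    def tap(self, t):
        """Record a tap at time t (seconds) and return the current state."""
        self.taps.append(t)
        # Discard taps that have slid out of the window.
        while self.taps and t - self.taps[0] > self.window:
            self.taps.popleft()
        rate = len(self.taps) / self.window
        return "frantic" if rate > self.calm_max else "calm"

tracker = TapTracker()
# Slow, meditative tapping: roughly one tap per second stays calm.
assert tracker.tap(0.0) == "calm"
assert tracker.tap(1.0) == "calm"
# Drumming with multiple fingers pushes the rate past the threshold.
for t in [2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6]:
    state = tracker.tap(t)
assert state == "frantic"
```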

I’m fascinated by this tension between slow tapping and fast tapping—what I call haptic density—because it reveals the outer edges of the interface of the system. Quite literally.

Move from three fingers to four—easy to do when you want to bring Alphonse to a crisis moment—and the iPad interprets your gestures differently. Four fingers tell the iPad you want to swipe to another application, the iPad’s equivalent of ALT-TAB in Windows. The multi-touch interface of the iPad trumps the touch interface of Strange Rain. There’s a slipperiness to the screen. The text is precipitously, perilously fragile, and inadvertently escapable. The immersive quality of new media that Janet Murray highlighted years ago as an essential element of the form turns out to be an illusion.

I want to conclude, then, by asking a question: what happens when we begin to think differently about interfaces? We usually think of an interface as a shared contact point between two distinct objects. The focus is on what is common. But what if we begin thinking—and I think Strange Rain encourages this—about interfaces in terms of difference? Instead of interfaces, what about thresholds, liminal spaces between two distinct elements? How does Strange Rain, or any piece of digital expressive culture, have both an interface and a threshold, or thresholds? What are the edges of the work? And what do we discover when we transgress them?

Works Cited

Dramatic Clouds over the Fields

Post-Print Fiction Reading List (the print stuff, at least)

June 17th, 2011 § 3 comments § permalink

A Shredded Book

Ex-Book by James Bridle

I’m excited to announce the print side of my post-print fiction reading list:

•    Italo Calvino, If on a Winter’s Night a Traveler (Harvest Books, ISBN 0156439611)
•    Don DeLillo, Mao II (Penguin, ISBN 978-0140152746)
•    Mark Z. Danielewski, House of Leaves (Pantheon, ISBN 978-0375703768)
•    Salvador Plascencia, The People of Paper (Mariner, ISBN 978-0156032117)
•    Anne Carson, Nox (New Directions, ISBN 978-0811218702)

Each of these works offers a meditation upon the act of reading or writing, the power of stories, the role of storytellers, and the materiality of books themselves as physical objects. In addition to these printed and (mostly) bound texts, my English Honors Seminar students will encounter a range of other unconventional narrative forms, from Jonathan Blow’s Braid to Kate Pullinger’s Inanimate Alice, from Christopher Strachey’s machine-generated love letters to Robert Coover’s deck of storytelling playing cards. Along the way we’ll also consider mash-ups, databased stories, role-playing games, interactive fiction, and a host of other narrative forms. We’ll also (a heads-up to my students) create some of our own post-print beasties…

Post-Print Fiction Course Description (for Fall 2011)

January 23rd, 2011 § 3 comments § permalink

Typewriter Covered with Vegetation

Here is an early, tentative course description for my Fall 2011 senior seminar for the English Honors students. I welcome comments or reading recommendations!

Post-Print Fiction (ENGL 400 Honors Seminar)

For several centuries the novel has been associated with a single material form: the bound book, made of paper and printed with ink. But what happens when storytelling diverges from the book? What happens when writers weave stories that extend beyond the printed word? What happens when fiction appears in digital form, generated from a reader’s actions or embedded in a videogame? What happens when a novel has no novelist behind it, but a crowd of authors—or no human at all, just an algorithm? We will address these questions and many more in this English Honors Seminar dedicated to post-print fiction. We will begin with two “traditional” novels that nonetheless ponder the meaning of narrative, books, and technology, and move quickly into several novels that, depending upon one’s point of view, either represent the last dying gasp of the printed book or herald a renaissance of the form. Finally, we will devote the latter part of the semester to exploring electronic literature, kinetic poetry, transmedia narratives, and paranovels that both challenge and enrich our understanding of fiction in the 21st century.

Possible works to be studied include Mao II by Don DeLillo, The People of Paper by Salvador Plascencia, House of Leaves by Mark Z. Danielewski, Tree of Codes by Jonathan Safran Foer, Personal Effects: Dark Arts by J.C. Hutchins and Jordan Weisman, This Is Not a Book by Keri Smith, Braid by Jonathan Blow, The Baron by Victor Gijsbers, as well as works by Deena Larsen, Nick Montfort, Mary Flanagan, Jason Nelson, Jonathan Harris, Shelley Jackson, Young-Hae Chang Heavy Industries, Stephanie Strickland, and many more.

[Typewriter photograph courtesy of Flickr user paulmorriss / Creative Commons License]

Electronic Literature is a Foreign Land

July 21st, 2009 § 1 comment § permalink

One of the more brilliant works of electronic literature I savor teaching is Brian Kim Stefans’s Star Wars, One Letter at a Time, which is exactly what it sounds like. Aside from what’s going on in the piece itself (which deserves its own separate blog post), what I enjoy is the almost violent reaction it provokes in students. Undergraduate and graduate students alike are incredibly resistant to SWOLAAT, in most cases flat-out denying any claims Stefans’s reworking of Star Wars might make toward literariness.

The dismissive response of my students to SWOLAAT is only the most extreme example of what happens with many pieces of electronic literature, both in my classroom and in the wider world. For example, I’ve been reading through Johanna Drucker’s review of Matthew Kirschenbaum’s groundbreaking Mechanisms, as well as the e-lit community’s reaction to her statement that no works have “appeared in digital media whose interest goes beyond novelty value.” A bit aghast at Drucker’s remark, Noah Wardrip-Fruin and Scott Rettberg have both responded, and I was struck by Rettberg’s observation that

ELO [The Electronic Literature Organization] has submitted a number of very good digital humanities grant proposals to the NEH, and we have had the same response nearly every time — on a panel of three reviewers, two will find the proposal worth funding, and one of whom will state flatly that it has no merit, not on the basis of the proposal itself or its relevance to the call, but because they find electronic literature itself to be without merit.

It occurred to me recently that the denial of electronic literature’s literary merit — whether it’s coming from my students or a distinguished NEH panel — is due not so much to a misplaced desire to preserve the sanctity of what counts as literature as to sheer xenophobia.

Electronic literature is a foreign land.

Electronic literature might as well be the national literature of Moldavia. To the uninitiated student or scholar, e-lit is at worst strange, incomprehensible, and inscrutable, and at best, simply silly.

So, I’m wondering, would the same process by which a stranger in a strange land grows accustomed to foreignness and even appreciates and incorporates cultural difference into his or her own life — could that process apply to e-lit?

Below is a six-stage model of intercultural sensitivity, designed by Milton J. Bennett in the late eighties and early nineties to describe the progress of individuals as they experience greater and more frequent cultural difference. I think this model could help us introduce students to the foreign world of electronic literature.

Developmental Model of Intercultural Sensitivity

In the early ethnocentric stages of Bennett’s model, individuals begin by denying that cultural difference exists at all, whether because of their own isolation or because of willful ignorance. Greater exposure to cultural difference next prompts a defensive posture, an us-versus-them mentality in which existing cognitive categories are reinforced and any comment directed toward one’s own culture is perceived as an attack. The last ethnocentric stage is characterized by a minimization of difference. Individuals tell themselves that “people are the same everywhere,” a superficially benign attitude that in fact masks uniqueness and still evaluates other cultures from a reference point within one’s own culture. The final three stages are marked by an understanding that behaviors, norms, beliefs, and so on are all relative. The first ethnorelative stage is acceptance: genuinely acknowledging cultural difference and seeing that difference within its own cultural context. Next comes adaptation, when individuals change their own attitudes, behaviors, and even language to match their surroundings in an attempt to communicate and empathize. Finally, integration occurs when individuals move freely between cultures, practicing what Bennett calls “constructive marginality,” that is, seeing identity construction as an ongoing process that is always marginal to any specific social group.

If we think of electronic literature as a foreign land, then I propose we use this developmental model to chart a stranger’s encounter with the genre. As my experience with Star Wars, One Letter at a Time illustrates, students first begin reading electronic literature in either the denial or defense stages (meaning they’ve either never experienced e-lit before, or they have and they hate it). I can imagine an entire syllabus structured around the goal of moving students from denial to integration. Just as educators and sociologists have come up with practical strategies to facilitate the progress of study-abroad students along Bennett’s continuum, so too can we design specific assignments that develop students’ competencies at each of these stages: from a total inability to read the differences between traditional literature and born-digital literature to an integration of those very differences into their non-e-lit lives. At each point in between, we target stage-appropriate skills and practices, meeting the students where they are, rather than expecting them to reflexively appreciate the virtues of something as alien as Reiner Strasser and M.D. Coverley’s ii: in the white darkness or something as unsettling as Jason Nelson’s This Is How You Will Die. This type of approach to teaching electronic literature would be far more rewarding (to both the professor and the students) than the kind of sink-or-swim model in Katherine Hayles’s theoretically dense (and unteachable, as I’ve discovered) introduction to Electronic Literature.

Imagine too that we begin writing grant and publishing proposals with these stages in mind, understanding that committees and panels and editors are likely stuck in the ethnocentric stages, judging literature from what we might call the “Great Works” perspective. E-lit challenges this perspective, but not on grounds of literariness; it challenges existing notions of literature simply because it’s different. We can teach sensitivity to difference to our students, and we should model sensitivity in our own writings as well. Teachers and researchers of electronic literature are its ambassadors, and it is up to us to introduce strangers to the medium in a firm, but welcoming, guiding way.

Electronic Literature Course Description

April 13th, 2009 § 4 comments § permalink

A few of my English department colleagues and I are preparing to propose a new Electronic Literature course to replace a more vaguely named “Textual Media” class in the university course catalog. Here is an early first draft of the course description, building in part on language from the Electronic Literature Organization’s own description of electronic literature:

Electronic Literature (3 credits) Electronic literature refers to expressive texts that are born digital and can only be read, interacted with, or otherwise experienced in a digital environment. Contemporary writers, artists, and designers are producing a wide range of electronic literature, including hypertext fiction, kinetic poetry, interactive fiction, computer-generated poetry and stories, digital mapping, and online collaborative writing projects via SMS, emails, and blogs. In all of these cases, electronic literature takes advantage of the capabilities and contexts of stand-alone or networked computers. Such literary texts often demand new reading and interpretative practices, which this class will develop in students.

I’m eager to hear any feedback about this purposefully generic description.