If you’re an academic, you’ve probably heard about the recent New York Times article covering the decline of humanities majors at places like Stanford and Harvard. As many people have already pointed out, the article is a brilliant example of cherry-picking anecdotal evidence to support an existing narrative (i.e. the crisis in the humanities)—instead of using, you know, actual facts and statistics to understand what’s going on.
Ben Schmidt, a specialist in intellectual history at Northeastern University, has put together an interactive graph of college majors over the past few decades, using the best available government data. Playing around with the data shows some surprises that counter the prevailing narrative about the humanities. For example, Computer Science majors have declined since 1986, while History has remained steady. Ben argues elsewhere that not only was the steepest decline in the humanities in the 1970s instead of the 2010s, but that the baseline year that most crisis narratives begin with (the peak year of 1967) was itself an aberration.
Clearly we should be doing more to counter the perception that the humanities—and by extension, the liberal arts—are in crisis mode. My own experience in the classroom doesn’t support this notion, and neither does the data.
This is a list of digitally-inflected sessions at the 2014 Modern Language Association Convention (Chicago, January 9-12). These sessions in some way address digital tools, objects, and practices in language, literary, textual, cultural, and media studies. The list also includes sessions about digital pedagogy and scholarly communication. The list stands at 78 entries, making up less than 10% of the total 810 convention slots. Please leave a comment if this list is missing any relevant sessions.
“Non-consumptive research” is the term digital humanities scholars use to describe the large-scale analysis of texts—say topic modeling millions of books or data-mining tens of thousands of court cases. In non-consumptive research, a text is not read by a scholar so much as it is processed by a machine. The phrase frequently appears in the context of the long-running legal debate between various book digitization efforts (e.g. Google Books and HathiTrust) and publishers and copyright holders (e.g. the Authors Guild). For example, in one of the preliminary Google Books settlements, non-consumptive research is defined as “computational analysis” of one or more books “but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within.” Non-consumptive reading is not reading in any traditional way, and it certainly isn’t close reading. Examples of non-consumptive research that appear in the legal proceedings (the implications of which are explored by John Unsworth) include image analysis, text extraction, concordance development, citation extraction, linguistic analysis, automated translation, and indexing.
Mark Z. Danielewski’s House of Leaves is a massive novel about, among other things, a house that is bigger on the inside than the outside. Walt Whitman’s Leaves of Grass is a collection of poems about, among other things, the expansiveness of America itself.
What happens when these two works are remixed with each other? It’s not such an odd question. Though separated by nearly a century, they share many of the same concerns. Multitudes. Contradictions. Obsession. Physical impossibilities. Even an awareness of their own lives as textual objects.
The 21st century will be the century of the fugitive. Not because fugitives are proliferating, but because they are disappearing. And not disappearing in the way that fugitives like to disappear, but disappearing because they simply won’t exist. Technology won’t allow it.
A manhunt summons forth the great machinery of the state: scores of armed agents, ballistic tests and DNA samples, barking dogs, helicopters, infrared flybys. There is no evading it. It’s nearly impossible now to become a fugitive. And the more difficult fugitive life becomes, the more legendary fugitive figures become. As Peter Stallybrass and Allon White put it in their classic study of the grotesque and carnivalesque, “…what is socially peripheral is so frequently symbolically central.” The more marginalized and rare fugitives become, the greater the role they will play in our symbolic repertoire. In film, literature, music, art, videogames—in all these arenas, the fugitive will play a central role. Fugitives will come to occupy the same place in our collective consciousness as cowboys or pirates. And just as the Western film genre dominated the mid-20th century—while agribusiness was at the same time industrializing the west, making the cowboy superfluous—the 21st century will be dominated by the symbolic figure of the fugitive.
I am thrilled to share the news that in August I will join the faculty of Davidson College, where I will be building a new interdisciplinary program in Digital Studies. This is a tremendous opportunity for me, and my immodest goal is to make Davidson College a model for other liberal arts colleges—and even research universities—when it comes to digital studies.
This means I am leaving George Mason University, and I am doing so with much sadness. I have been surrounded by generous colleagues, dedicated teachers, and rigorous thinkers. I cannot imagine a better place to have begun my career. At the same time, my life at GMU has always been complicated by the challenges of a long distance commute, which I have written about here and elsewhere. My new position at Davidson will eliminate this commute. After seven or so years of flying 500 miles to work each week, it will be heaven to simply bike one mile to work every day.
And a good thing too—because I have big plans for Digital Studies at Davidson and much work to do. Students are already enrolling in my Fall 2013 courses, but more than individual classes, we have an entire program to design. I am thrilled to begin working with my new colleagues in both the humanities and sciences. Together we are going to build something both unique and uniquely Davidson.
I recently proposed a sequence of lightning talks for the next Modern Language Association convention in Chicago (January 2014). The participants are tackling a literary issue that is not at all theoretical: the future of electronic literature. I’ve also built in a substantial amount of time for an open discussion between the audience and my participants—who are all key figures in the world of new media studies. And I’m thrilled that two of them—Dene Grigar and Stuart Moulthrop—just received an NEH grant dedicated to a similar question, which is documenting the experience of early electronic literature.
Electronic literature can be broadly conceived as literary works created for digital media that in some way take advantage of the unique affordances of those technological forms. Hallmarks of electronic literature (e-lit) include interactivity, immersiveness, fluidly kinetic text and images, and a reliance on the procedural and algorithmic capabilities of computers. Unlike the avant garde art and experimental poetry that is its direct forebear, e-lit has been dominated for much of its existence by a single, proprietary technology: Adobe’s Flash. For fifteen years, many e-lit authors have relied on Flash—and its earlier iteration, Macromedia Shockwave—to develop their multimedia works. And for fifteen years, readers of e-lit have relied on Flash running in their web browsers to engage with these works.
Flash is dying though. Apple does not allow Flash in its wildly popular iPhones and iPads. Android no longer supports Flash on its smartphones and tablets. Even Adobe itself has stopped throwing its weight behind Flash. Flash is dying. And with it, potentially an entire generation of e-lit work that cannot be accessed without Flash. The slow death of Flash also leaves a host of authors who can no longer create in their chosen medium. It’s as if a novelist were told that she could no longer use a word processor—indeed, no longer even use words.
Not so long ago a video of a flock of starlings swooping and swirling as one body in the sky went viral. Only two minutes long, the video shows thousands of birds over the River Shannon in Ireland, pouring themselves across the clouds, each bird following the one next to it. The birds flew not so much in formation as they flew in the biological equivalent of phase transition. This phenomenon of synchronized bird flight is called a murmuration. What makes the murmuration hypnotic is the starlings’ seemingly uncoordinated coordination, a thousand birds in flight, like fluid flowing across the skies. But there’s something else as well. Something else about the murmuration that appeals to us at this particular moment, that helps to explain this video’s virality.
The murmuration defies our modern understanding of crowds. Whether the crazed seagulls of Hitchcock’s The Birds, the shambling hordes of zombies that seem to have infected every strain of popular culture, or the thousands upon thousands of protestors of the Arab Spring, we are used to chaotic, disorganized crowds, what Elias Canetti calls the “open” crowd (Canetti 1984). Open crowds are dense and bound to grow denser, a crowd that itself attracts more crowds. Open crowds cannot be contained. They erupt.
Attention artists, creators, theorists, teachers, curators, and archivists of electronic literature!
I’m putting together an e-lit roundtable for the Modern Language Association Convention in Chicago next January. The panel will be “Electronic Literature after Flash” and I’m hoping to have a wide range of voices represented. See the full CFP for more details. Abstracts due March 15, 2013.
When does anything—service, teaching, editing, mentoring, coding—become scholarship?
My answer is simply this: a creative or intellectual act becomes scholarship when it is public and circulates in a community of peers that evaluates and builds upon it.
Now for some background behind the question and the rationale for my answer.
What counts as the threshold of scholarship has been on my mind lately, spurred on by two recent events at my home institution, George Mason University. The first was a discussion in my own department (English) about the public humanities, a concept every bit as hard to pin down as its two highly contested constitutive terms. A key question in the department discussion was whether the enormous amount of outreach our faculty perform—through public readings, in area high schools, with local teachers and lifelong learners outside of Mason—counts as the public humanities. I suggested at the time that the public humanities revolves around scholarship. The question, then, is not when does outreach become the public humanities? The question is, when does outreach become an act of scholarship?
The department discussion was a low-stakes affair. It decided the fate of exactly nothing, except perhaps the establishment of a subcommittee to further explore the intersection of faculty work and the public humanities.
But the anxiety at the heart of this question—when does anything become scholarship?—plays out in much more consequential ways in the academy. This brings me to the second event at Mason, the deliberations of the College of Humanities and Social Science’s Promotion and Tenure committee. My colleague Sean Takats, whom some may know as the Director of Research Projects for the Roy Rosenzweig Center for History and New Media and the co-director of the Zotero project, has recently given a devastating account of the RPT committee’s response to his tenure case. Happily, the college committee approved Sean’s case 10-2, but what’s devastating is the attitude of some members of the committee toward Sean’s significant work in the digital humanities. Sean quotes from the committee’s official letter, with the money quote being “some [committee members] determined that projects like Zotero et al., while highly valuable, should be considered as major service activity instead.”
Sean deftly contrasts the committee’s impoverished notion of scholarship with Mason’s own faculty handbook’s definition, which is more expansive and explicitly acknowledges “artistic work, software and media, exhibitions, and performance.”
I absolutely appreciate Mason’s definition of scholarly achievement. But I like my definition of scholarship even more. Where does mine come from? From the scholarship of teaching—another field, like digital humanities, which has challenged the preeminence of the single-authored manuscript as the gold standard of scholarship (though, like DH, it doesn’t exclude such forms of scholarship).
More specifically, I have adapted my definition from Lee Shulman, the former president of the Carnegie Foundation for the Advancement of Teaching. In “Taking Learning Seriously,” Shulman advances a persuasive case for the scholarship of teaching and learning. Shulman argues that for an intellectual act to become scholarship, it should have at least three characteristics:
it becomes public; it becomes an object of critical review and evaluation by members of one’s community; and members of one’s community begin to use, build upon, and develop those acts of mind and creation.
In other words, scholarship is public, circulating in a community that not only evaluates it but also builds upon it. Notice that Shulman’s formulation of scholarship is abstracted from any single discipline, and even more crucially, it is platform-agnostic. Exactly how the intellectual act circulates and generates new work in response isn’t what’s important. What’s important is that the work is out there for all to see, review, and use. The work has been made public—which after all is the original meaning of “to publish.”
Let’s return to the CHSS committee’s evaluation of Sean’s work with Zotero. I don’t know enough about the way Sean framed his tenure case, but from the outside looking in, and knowing what I know about Zotero, it’s not only reasonable to acknowledge that Zotero meets these three criteria of scholarship (public, reviewed, and used), it’d take a willful misapprehension of Zotero, its impact, and implications to see it as anything but scholarship.
Sean notes that the stance of narrow-minded RPT committees will have a chilling effect on digital work, and I don’t think he exaggerates. But I see this as a crisis that extends beyond the digital humanities, encompassing faculty who approach their scholarship in any number of “unconventional” ways. The scholarship of teaching, certainly, but also faculty involved in scholarly editing, the scholarship of creativity, and a whole host of public humanities efforts.
The solution—or at least one prong of a solution—must be for faculty who have already survived the gauntlet of tenure to work ceaselessly to promote an atmosphere that pairs openness with critical review, yet which is not entrenched in any single medium—print, digital, performance, and so on. We can do this in the background by writing tenure letters, reviewing projects, and serving on committees ourselves. But we can and should also do this publicly, right here, right now.
In January I also performed my first public reading of one of my creative works—Takei, George—during the off-site electronic literature reading at the 2012 MLA Convention in Seattle. There’s even grainy documentary footage of this reading, thanks to the efforts of the organizers Dene Grigar, Lori Emerson, and Kathi Inman Berens. I also gave a well-received talk at the MLA about another work of electronic literature, Erik Loyer’s beautiful Strange Rain. And finally in January, I spent odd moments at the convention huddled in a coffee shop (this was Seattle, after all) working with my co-authors on the final revisions of a book manuscript. More about that book later in this post.
All of this happened in the first weeks of January. And the rest of the year was equally as busy. In addition to my regular commuting life, I traveled a great deal to conferences and other gatherings. As I mentioned, I presented at the MLA, but I also talked at the Society for Cinema and Media Studies convention (Boston in March), Computers and Writing (Raleigh in May), the Electronic Literature Organization (Morgantown in June), and the Society for Literature, Science, and the Arts (Milwaukee in September). In May I was a co-organizer of THATCamp Piedmont, held on the campus of Davidson College. During the summer I was a guest at the annual Microsoft Research Faculty Summit (Redmond in July). In the fall I was an invited panelist for my own institution’s Forum on the Future of Higher Education (in October) and an invited speaker for the University of Kansas’s Digital Humanities seminar (in November).
If the year began with the publication of a modest—and frankly, immensely fun to write—chapter in an edited book, then I have to point out that it ended with the publication of a much larger (and challenging and unwieldy) project, a co-authored book from MIT Press: 10 PRINT CHR$(205.5+RND(1));: GOTO 10 (or 10 PRINT, as we call it). I’ve already written about the book, and I expect more posts will follow. I’ll simply say now that my co-authors and I are grateful for and astonished by its bestselling (as far as academic books go) status: within days of its release, the book was ranked #1,375 on Amazon, out of 8 million books. This figure is all the more astounding when you consider that we released a free PDF version of the book on the same day as its publication. More evidence that giving away things is the best way to also sell things.
I was busy with other scholarly projects throughout 2012 as well. I finished revisions of a critical code studies essay that will appear in the next issue of Digital Humanities Quarterly, and I wrapped up a chapter for an edited collection coming out from Routledge on mobile media narratives. I also continued to publish in unconventional but peer-reviewed venues, most notably Enculturation and the Journal of Digital Humanities, which has published two pieces of mine. On the flip side of peer-review, I read and wrote reader’s reports for several journals and publishers, including University of Minnesota Press, MIT Press, Routledge, and Digital Humanities Quarterly. (You see how the system works: once you publish with a press it’s not long until they ask you to review someone else’s work for them. Review it forward, I say.)
In addition to scholarly work, I’ve invested more time than ever this year in creative work. On the surface my creative work is a marginal activity—and often marginalized when it comes time to count in my annual faculty report. But I increasingly see my creativity and scholarship bound up in a virtuous circle. I’ve already mentioned my first fully-functional work of electronic literature, “Takei, George.” In June this piece appeared as a juried selection in Electrifying Literature: Affordances and Constraints, a media art exhibit held in conjunction with the 2012 Electronic Literature Organization conference. A tip to other scholars who aim to do more creative work: submit your work to juried exhibitions or other curated shows; if your work is selected, it’s the equivalent of peer-review and your creative work suddenly passes the threshold needed to appear on CVs and faculty activity reports. Another creative project of mine, Postcard for Artisanal Tweeting, appeared in Rough Cuts: Media and Design in Process, an online exhibit curated by Kari Kraus on The New Everyday, a Media Commons Project.
My own blog is another site where I blend creativity and scholarship. My recent post on Intrusive Scaffolding is as much a creative nonfiction piece as it is scholarship (more so, in fact). And my favorite post of 2012 began as an inside joke about scholarly blogs. The background is this: during a department meeting discussion about how blogging should be recognized in our annual infrequent merit salary raises, a senior colleague expressed concern that one professor’s cupcake blog would count as much as another professor’s research-oriented blog. In response to this discussion, I wrote a blog post about cupcakes that blended critical theory and creativity. And cursing. The post struck a nerve, and it was my most widely read and retweeted blog post ever. About cupcakes.
Late in 2012 my creative work took me into new territory: Twitterbots, those autonomous “agents” on Twitter that are occasionally useful and often annoying. My bot Citizen Canned is in the process of tweeting every unique word from the script of Citizen Kane, by order of frequency (as opposed to, say, by order of significance, which would have a certain two syllable word appear first). With roughly 4,400 unique words to tweet, at a rate of once per hour, I estimate that Citizen Canned will tweet the least frequently used word in the movie sometime five months from now. Another of the Twitterbots I built in 2012 is 10print_ebooks. This bot mashes up the complete text of my 10 PRINT book and generates occasionally nonsensical but often genius Markov chain tweets from it. The bot also incorporates text from other tweets that use the #10print hashtag, meaning it “learns” from the community. The Citizen Canned bot runs in PHP while the 10 PRINT bot is built in Processing.
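For readers curious about the mechanics, a Markov chain text generator of the 10print_ebooks sort can be sketched in a few lines. This Python example is illustrative only (my actual bots run in PHP and Processing): it maps each word of a corpus to the words that follow it, then walks that table at random to produce new, vaguely plausible sentences.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=None):
    """Walk the chain from a random starting key, up to `length` words."""
    random.seed(seed)
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "10 PRINT is a program. 10 PRINT is a maze. A maze is a program."
chain = build_chain(corpus)
print(generate(chain, length=8))
```

A real bot would feed in a much larger corpus (the full book text plus harvested #10print tweets) and use a higher `order` for more coherent output.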
Alongside this constant scholarly and creative work (not to mention teaching) ran a parallel timeline, mostly invisible. This was me, waiting for my tenure decision to be handed down. In the summer of 2011 I submitted my materials and by December 2011, I learned that my department had voted unanimously in my favor. Next, in January 2012 the college RPT (Rank-Promotion-Tenure) committee voted 10-2 in my favor. It’s a bit crazy that the committee report echoes what I’ve heard about my work since grade school:
Mark Sample presents an unusual case. His work is at the edge of his discipline’s interaction with digital media technology. It blurs the lines between scholarship, teaching, and service in challenging ways. It also marks the point where traditional scholarly peer review meets the public interface of the internet. This makes for some difficulty in assessing his case.
In February my dean voted in favor of my case too. Next came the provost’s support at the end of March. In a surprise move, the provost recommended me for tenure on two counts: genuine excellence in teaching and genuine excellence in research. Professors usually earn tenure on the strength of their research alone. It’s uncommon to earn tenure at Mason on excellence in teaching, and an anomaly to earn tenure for both. By this point, approval from the president and the Board of Visitors (our equivalent of a Board of Trustees) might have seemed like rubber stamps, but I wasn’t celebrating tenure as a done deal. In fact, when I finally received the official notice—and contract—in June, I still didn’t feel like celebrating. And by the time my tenure and promotion went into effect in August 2012, I was too busy gearing up for the semester (and indexing 10 PRINT) to think much about it.
In other words, I reached the end of 2012 without celebrating some of its best moments. On the other hand, I feel that most of its “best moments” were actually single instances in ongoing processes, and those processes are never truly over. 10 PRINT may be out, but I’m already looking forward to future collaborations with some of my co-authors. I wrote a great deal in 2012, but much of that occurred serially in places like ProfHacker, Play the Past, and Media Commons, where I will continue to write in 2013 and beyond.
What will 2013 bring? I am working on two new creative projects and I have begun sketching out a new book project as well. Next fall I will begin a year-long study leave (Fall 2013/Spring 2014), and I aim to make significant progress on my book during that time. Who knows what else 2013 will bring. Maybe sleep?
Seeking to have a rich discussion period—which we did indeed have—we limited our talks to about 12 minutes each. My presentation was therefore more evocative than comprehensive, more open-ended than conclusive. There are primary sources I’m still searching for and technical details I’m still sorting out. I welcome feedback, criticism, and leads.
An Account of Randomness in Literary Computing
MLA 2013, Boston
There’s a very simple question I want to ask this evening:
Where does randomness come from?
Randomness has a rich history in arts and literature, which I don’t need to go into today. Suffice it to say that long before Tristan Tzara suggested writing a poem by pulling words out of a hat, artists, composers, and writers have used so-called “chance operations” to create unpredictable, provocative, and occasionally nonsensical work. John Cage famously used chance operations in his experimental compositions, relying on lists of random numbers from Bell Labs to determine elements like pitch, amplitude, and duration (Holmes 107–108). Jackson Mac Low similarly used random numbers to generate his poetry, in particular relying on a book called A Million Random Digits with 100,000 Normal Deviates to supply him with the random numbers (Zweig 85).
Published by the RAND Corporation in 1955 to supply Cold War scientists with random numbers to use in statistical modeling (Bennett 135), the book is still in print—and you should check out the parody reviews on Amazon.com. “With so many terrific random digits,” one reviewer jokes, “it’s a shame they didn’t sort them, to make it easier to find the one you’re looking for.”
This joke actually speaks to a key aspect of randomness: the need to reuse random numbers, so that if, say, you’re running a simulation of nuclear fission, you can repeat the simulation with the same random numbers—that is, the same probability—while testing some other variable. In fact, most of the early work on random number generation in the United States was funded by either the U.S. Atomic Energy Commission or the U.S. military (Montfort et al. 128). The RAND Corporation itself began as a research and development arm of the U.S. Air Force.
Now the thing with going down a list of random numbers in a book, or pulling words out of a hat—a composition method, by the way, Thom Yorke used for Kid A after a frustrating bout of writer’s block—is that the process is visible. Randomness in these cases produces surprises, but the source itself of randomness is not a surprise. You can see how it’s done.
What I want to ask here today is, where does randomness come from when it’s invisible? What’s the digital equivalent of pulling words out of a hat? And what are the implications of chance operations performed by a machine?
To begin to answer these questions I am going to look at two early works of electronic literature that rely on chance operations. And when I say early works of electronic literature, I mean early, from fifty and sixty years ago. One of these works has been well studied and the other has been all but forgotten.
My first case study is the Strachey Love Letter Generator. Programmed by Christopher Strachey, a close friend of Alan Turing, the Love Letter Generator is likely—as Noah Wardrip-Fruin argues—the first work of electronic literature, which is to say a digital work that somehow makes us more aware of language and meaning-making. Strachey’s program “wrote” a series of purplish prose love letters on the Ferranti Mark I Computer—the first commercially available computer—at Manchester University in 1952 (Wardrip-Fruin “Digital Media” 302):
YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. YOU ARE MY WISTFUL SYMPATHY: MY TENDER LIKING.
M. U. C.
Affectionately known as M.U.C., the Manchester University Computer could produce these love letters at a pace of one per minute, for hours on end, without producing a duplicate.
The “trick,” as Strachey put it in a 1954 essay about the program (29-30), is its two template sentences (“My [adjective] [noun] [adverb] [verb] your [adjective] [noun]” and “You are my [adjective] [noun]”) in which the nouns, adjectives, and adverbs are randomly selected from a list of words Strachey had culled from a Roget’s thesaurus. Adverbs and adjectives randomly drop out of the sentence as well, and the computer randomly alternates the two sentences.
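In outline, Strachey’s template-and-word-list scheme is easy to sketch. The following Python is a minimal reconstruction, not Strachey’s actual program: the word lists are short illustrative stand-ins for the far longer lists he culled from Roget’s, and the 50/50 drop-out probability is an assumption.

```python
import random

# Illustrative stand-ins for Strachey's word lists (his were much longer).
ADJECTIVES = ["affectionate", "wistful", "passionate", "tender"]
NOUNS = ["desire", "liking", "sympathy", "fellow feeling"]
ADVERBS = ["curiously", "ardently", "keenly"]
VERBS = ["clings to", "yearns for", "woos", "treasures"]

def maybe(words):
    """Adjectives and adverbs randomly drop out of the sentence."""
    return random.choice(words) + " " if random.random() < 0.5 else ""

def sentence():
    # The program randomly alternates between its two template sentences.
    if random.random() < 0.5:
        return f"You are my {maybe(ADJECTIVES)}{random.choice(NOUNS)}."
    return (f"My {maybe(ADJECTIVES)}{random.choice(NOUNS)} "
            f"{maybe(ADVERBS)}{random.choice(VERBS)} "
            f"your {maybe(ADJECTIVES)}{random.choice(NOUNS)}.")

def love_letter(n=5):
    body = " ".join(sentence() for _ in range(n)).upper()
    return f"{body}\n\nM. U. C."

print(love_letter())
```

Even with these tiny lists, the combinatorics multiply quickly—which is precisely Strachey’s point about simple rules generating diverse results.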
The Love Letter Generator has attracted—for a work of electronic literature—a great deal of scholarly attention. Using Strachey’s original notes and source code (see figure to the left), which are archived at the Bodleian Library at the University of Oxford, David Link has built an emulator that runs Strachey’s program, and Noah Wardrip-Fruin has written a masterful study of both the generator and its historical context.
As Wardrip-Fruin calculates, given that there are 31 possible adjectives after the first sentence’s opening possessive pronoun “My” and then 20 possible nouns that could occupy the following slot, the first three words of this sentence alone have 899 possibilities. And the entire sentence has over 424 million combinations (424,305,525 to be precise) (“Digital Media” 311).
On the whole, Strachey was publicly dismissive of his foray into the literary use of computers. In his 1954 essay, which appeared in the prestigious trans-Atlantic arts and culture journal Encounter (a journal, it would be revealed in the late 1960s, that was primarily funded by the CIA—see Berry, 1993), Strachey used the example of the love letters to illustrate his point that simple rules can generate diverse and unexpected results (Strachey 29-30). And indeed, the Love Letter Generator qualifies as an early example of what Wardrip-Fruin calls, referring to a different work entirely, the Tale-Spin effect: a surface illusion of simplicity which hides a much more complicated—and often more interesting—series of internal processes (Expressive Processing 122).
Wardrip-Fruin coined this term—the Tale-Spin effect—from Tale-Spin, an early story generation system designed by James Meehan at Yale University in 1976. Tale-Spin tended to produce flat, plodding narratives, though there was the occasional existential story:
Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. He was unable to call for help. He drowned.
But even in these suggestive cases, the narratives give no sense of the process-intensive—to borrow from Chris Crawford—calculations and assumptions occurring behind the interface of Tale-Spin.
In a similar fashion, no single love letter reveals the combinatory procedures at work by the Mark I computer.
MY AFFECTION LUSTS FOR YOUR TENDERNESS. YOU ARE MY PASSIONATE DEVOTION: MY WISTFUL TENDERNESS. MY LIKING WOOS YOUR DEVOTION. MY APPETITE ARDENTLY TREASURES YOUR FERVENT HUNGER.
M. U. C.
This Tale-Spin effect—the underlying processes obscured by the seemingly simplistic, even comical surface text—are what draw Wardrip-Fruin to the work. But I want to go deeper than the algorithmic process that can produce hundreds of millions of possible love letters. I want to know, what is the source of randomness in the algorithm? We know Strachey’s program employs randomness, but where does that randomness come from? This is something the program—the source code itself—cannot tell us, because randomness operates at a different level, not at the level of code or software, but in the machine itself, at the level of hardware.
In the case of Strachey’s Love Letter Generator, we must consider the computer it was designed for, the Mark I. One of the remarkable features of this computer was that it had a hardware-based random number generator. The random number generator pulled a string of random numbers from what Turing called “resistance noise”—that is, electrical signals produced by the physical functioning of the machine itself—and put the twenty least significant digits of this number into the Mark I’s accumulator—its primary mathematical engine (Turing). Alan Turing himself specifically requested this feature, having theorized with his earlier Turing Machine that a purely logical machine could not produce randomness (Shiner). And Turing knew—like his Cold War counterparts in the United States—that random numbers were crucial for any kind of statistical modeling of nuclear fission.
I have more to say about randomness in the Strachey Love Letter Generator, but before I do, I want to move to my second case study. This is an early, largely unheralded work called SAGA. SAGA was a script-writing program on the TX-0 computer. The TX-0 was the first computer to replace vacuum tubes with transistors and also the first to use interactive graphics—it even had a light pen.
The TX-0 was built at Lincoln Laboratory in 1956—a classified MIT facility in Bedford, Massachusetts chartered with the mission of designing the nation’s first air defense detection system. After TX-0 proved that transistors could out-perform and outlast vacuum tubes, the computer was transferred to MIT’s Research Laboratory of Electronics in 1958 (McKenzie), where it became a kind of playground for the first generation of hackers (Levy 29-30).
In 1960, CBS broadcast an hour-long special about computers called “The Thinking Machine.” For the show, MIT engineers Douglas Ross and Harrison Morse wrote a 13,000-line program in six weeks that generated a climactic shoot-out scene from a Western.
Several computer-generated variations of the script were performed on the CBS program. As Ross told the story years later, “The CBS director said, ‘Gee, Westerns are so cut and dried couldn’t you write a program for one?’ And I was talked into it.”
The TX-0’s large—for the time period—magnetic core memory was used “to keep track of everything down to the actors’ hands.” As Ross explained it, “The logic choreographed the movement of each object, hands, guns, glasses, doors, etc.” (“Highlights from the Computer Museum Report”).
And here is the actual output from the TX-0, printed on the lab’s Flexowriter printer, where you can get a sense of how SAGA generated the play:
In the CBS broadcast, Ross explained the narrative sequence as a series of forking paths.
Each “run” of SAGA was defined by sixteen initial state variables, with each state having several weighted branches (Ross 2). For example, one of the initial settings is who sees whom first. Does the sheriff see the robber first or is it the other way around? This variable will influence who shoots first as well.
There’s also a variable the programmers called the “inebriation factor,” which increases a bit with every shot of whiskey, and doubles for every swig straight from the bottle. The more the robber drinks, the less logical he will be. In short, every possibility has its own likely consequence, measured in terms of probability.
The MIT engineers had a mathematical formula for this probability (Ross 2):
But more revealing to us is the procedure itself of writing one of these Western playlets.
First, a random number was set; this number determined the outcomes of the various weighted branches. The programmers did this simply by typing a number following the RUN command when they launched SAGA; you can see this in the second slide above, where the random number is 51455. Next, a timing number established how long the robber was alone before the sheriff arrived (the longer the robber was alone, the more likely he was to drink). Finally, each state variable was read, and the outcome—or branch—of each step was determined.
What I want to call your attention to is how the random number is not generated by the machine. It is entered in “by hand” when one “runs” the program. In fact, launching SAGA with the same random number and the same switch settings will reproduce a play exactly (Ross 2).
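Ross’s description suggests the structure of a seeded, weighted-branch generator, which is easy to sketch in Python. The state names, weights, and “inebriation factor” values below are hypothetical stand-ins for illustration, not the actual parameters Ross and Morse used.

```python
import random

def run_saga(seed, inebriation=0.0):
    """Minimal sketch of a SAGA-style weighted-branch playlet generator.
    The hand-entered "random number" acts as the seed."""
    rng = random.Random(seed)
    lines = []
    # State: who sees whom first (a weighted branch)
    sheriff_first = rng.random() < 0.5
    lines.append("SHERIFF SEES ROBBER" if sheriff_first else "ROBBER SEES SHERIFF")
    # State: the robber may drink; each shot raises the inebriation factor
    if rng.random() < 0.6:
        inebriation += 0.1
        lines.append("ROBBER TAKES A SHOT OF WHISKEY")
    # State: who fires first, biased by who saw whom and by drink
    p_sheriff_fires = (0.7 if sheriff_first else 0.3) + inebriation
    lines.append("SHERIFF FIRES" if rng.random() < p_sheriff_fires else "ROBBER FIRES")
    return lines

# Same random number, same settings -> the identical playlet,
# just as Ross reports for SAGA.
assert run_saga(51455) == run_saga(51455)
```

The key point survives the simplification: with the seed supplied from outside, the “randomness” is entirely deterministic, and every run can be replayed.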
In a foundational 1996 work called The Virtual Muse, Charles Hartman observed that randomness “has always been the main contribution that computers have made to the writing of poetry”—and one might be tempted to add, to electronic literature in general (Hartman 30). Yet the two case studies I have presented today complicate this notion. The Strachey Love Letter Generator would appear to exemplify the use of randomness in electronic literature. But—and I didn’t say this earlier—the random numbers generated by the Mark I’s method tended not to be reliable enough; remember, random numbers often need to be reused, so that the programs that use them can be repeated. Reproducible sequences of this kind are what we call pseudo-randomness. This is why books like the RAND Corporation’s A Million Random Digits are so valuable.
But the Mark I’s random numbers were so unreliable that they made debugging programs difficult, because errors never occurred the same way twice. The random number instruction eventually fell out of use on the machine (Campbell-Kelly 136). Skip ahead eight years to the TX-0 and we find a computer that doesn’t even have a random number generator. The random numbers had to be entered manually.
The examples of the Love Letters and SAGA suggest at least two things about the source of randomness in literary computing. One, there is a social-historical source: wherever you look at randomness in early computing, the Cold War is there. The impact of the Cold War upon computing and videogames has been well documented (see, for example, Edwards 1996 and Crogan 2011), but few have studied how deeply embedded the Cold War is in the very software algorithms and hardware processes of modern computing.
Second, randomness does not have a progressive timeline. The story of randomness in computing—and especially in literary computing—is neither straightforward nor self-evident. Its history is uneven, contested, and mostly invisible. So that even when we understand the concept of randomness in electronic literature—and new media in general—we often misapprehend its source.
Bennett, Deborah. Randomness. Cambridge, MA: Harvard University Press, 1998. Print.
Turing, A.M. “Programmers’ Handbook for the Manchester Electronic Computer Mark II.” Oct. 1952. Web. 23 Dec. 2012.
Wardrip-Fruin, Noah. “Digital Media Archaeology: Interpreting Computational Processes.” Media Archaeology: Approaches, Applications, and Implications. Ed. Erkki Huhtamo and Jussi Parikka. Berkeley: University of California Press, 2011. Print.
—. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, MA: MIT Press, 2009. Print.
Zweig, Ellen. “Jackson Mac Low: The Limits of Formalism.” Poetics Today 3.3 (1982): 79–86. Web. 1 Jan. 2013.
My five-year-old son recently learned how to ride a bike. He mastered the essential components of cycling—balance, pedaling, and steering—in roughly ten minutes. Without using training wheels, ever. That idyllic scene of a bent-over parent pushing an unsteady child on a bike, working up enough speed to let go? It never happened. At least not with him.
I’m not sentimental for that Norman Rockwell moment, because I had it several years earlier with my older son. I spent hours running behind him, steadying him, catching him. What made it so difficult for my older son to learn how to ride a bike? Precisely the thing that was supposed to teach him: training wheels.
The difference between the ways my sons learned how to ride a bike was training wheels. My older son used them, and consequently learned how to ride only with difficulty. His younger brother used a balance bike (the Skuut in his case), a small, light (often wooden) bike with two wheels and no pedals. As the child glides along, thrust forward by pushing off from the ground, he or she learns how to balance in a gradated way. A slight imbalance might be corrected by simply tipping a toe to the ground, or the child can put both feet on the ground to fully balance the bike. Or anything in between.
With a pedal-less bike you continually self-correct your balance, based on immediate feedback. I’m leaning too much to one side? Oooh, drag my foot a little there. Contrast this with training wheels. There’s no immediate feedback. In fact, there’s no need to balance at all. The training wheels do your balancing for you. Training wheels externalize the hardest part of riding a bike. If you’re a little kid and want to start riding a bike, training wheels are great. If you’re a little kid and want to start to learn how to ride a bike, training wheels will be your greatest obstacle.
If you think of riding a bike in terms of pedagogy, training wheels are what learning experts call scaffolding. Way back in 1991, Allan Collins, John Seely Brown, and Ann Holum wrote about a type of teaching called cognitive apprenticeship, and they used the term scaffolding to describe “the support the master gives apprentices in carrying out a task. This can range from doing almost the entire task for them to giving occasional hints as to what to do next.” As the student—the apprentice—becomes more competent, the teacher—the master—gradually backs away, in effect removing the scaffolding. It’s a process Collins, Brown, and Holum call “fading.” The problem with training wheels, then, is that fading is all but impossible. You either have training wheels, or you don’t.
Training wheels are a kind of scaffolding. But they are intrusive scaffolding, obstructive scaffolding. These bulky metal add-ons get in the way quite literally, but they also interfere pedagogically. Riding a bike with training wheels prepares a child for nothing more than riding a bike—with training wheels.
My oldest child, I said, learned how to ride a bike with training wheels. But that’s not exactly what happened. After weeks of struggle—and mounting frustration—he learned. But only because I removed the all-or-nothing training wheels and replaced them with his own body. I not only removed the training wheels from his bike, but I removed the pedals themselves. In essence, I made a balance bike out of a conventional bike. Only then did he learn to balance, the most fundamental aspect of bike-riding. I learned something too: when my younger son was ready to ride a bike we would skip the training wheels entirely.
My kids’ differing experiences lead me to believe that we place too much value on scaffolding, or at least, on the wrong kind of scaffolding. And now I’m not talking simply about riding bikes. I’m thinking of my own university classroom—and beyond, to online learning. We insist upon intrusive scaffolding. We are so concerned about students not learning that we surround the learning problem with scaffolding. In the process we obscure what we had hoped to reveal. Like relying on training wheels, we create complicated interfaces to experiences rather than simplifying the experiences themselves. Just as the balance bike simplifies the experience of bike riding, stripping it down to its core processes, we need to winnow down overly complex learning activities.
We could call this removal of intrusive scaffolding something like “unscaffolding” or “descaffolding.” In either case, the idea is that we take away structure instead of adding to it. And perhaps more importantly, the descaffolding reinstates the body itself as the site—and means of—learning. Scaffolding not only obstructs learning, it turns learning into an abstraction, something that happens externally. The more scaffolding there is, the less embodied the learning will be. Take away the intrusive scaffolding, and like my son on his balance bike, the learner begins to use what he or she had all along, a physical body.
I’ve been thinking about embodied pedagogy lately in relation to MOOCs—massive open online courses. In the worst cases, MOOCs are essentially nothing but scaffolding. A typical Coursera course will include video lectures for each lesson, an online quiz, and a discussion board. All scaffolding. In a MOOC, where are the bodies? And what is the MOOC equivalent of a balance bike? I want to suggest that unless online teaching—and classroom teaching as well—begins to first, unscaffold learning problems and second, rediscover embodied pedagogy, we will obstruct learning rather than foster it. We will push students away from authentic learning experiences rather than draw them toward such experiences.
After all, remember the etymological root of pedagogy: paedo, as in child, and agogic, as in leading or guiding. Teachers guide learners. Scaffolding—the wrong kind—obstructs learning.
Sacred Heart Mission photograph courtesy of Fernando de Sousa / Creative Commons Licensed. Scaffolding photograph courtesy of Kevin Dooley / Creative Commons Licensed.
I’m delighted to announce the publication of 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (MIT Press, 2013). My co-authors are Nick Montfort (who conceived the project), Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark Marino, Michael Mateas, Casey Reas, and Noah Vawter. Published in MIT Press’s Software Studies series, 10 PRINT is about a single line of code that generates a continuously scrolling random maze on the Commodore 64. 10 PRINT is aimed at people who want to better understand the cultural resonance of code. But it’s also about aesthetics, hardware, typography, randomness, and the birth of home computing. 10 PRINT has already attracted attention from Bruce Sterling (who jokes that the title “really rolls off the tongue”), Slate, and Boing Boing. And we want humanists (digital and otherwise) to pay attention to the book as well (after all, five of the co-authors hold Ph.D.’s in literature, not computer science).
Aside from its nearly unpronounceable title, 10 PRINT is an unconventional academic book in a number of ways:
10 PRINT was written by ten authors in one voice. That is, it’s not a collection with each chapter written by a different individual. Every page of every chapter was collaboratively produced, a mind-boggling fact to humanists mired in the model of the single-authored manuscript. A few months before I knew I was going to work on 10 PRINT, I speculated that the future of scholarly publishing was going to be loud, crowded, and out of control. My experience with 10 PRINT bore out that theory—though the end product does not reflect the messiness of the writing process itself, which I’ll address in an upcoming post.
10 PRINT is nominally about a single line of code—the eponymous BASIC program for the Commodore 64 that goes 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. But we use that one line of code as both a lens and a mirror to explore so much more. In his generous blurb for 10 PRINT, Matt Kirschenbaum quotes William Blake’s line about seeing the world in a grain of sand. This short BASIC program is our grain of sand, and in it we see vast cultural, technological, social, and economic forces at work.
10 PRINT emerges at the same time that the digital humanities appear to be sweeping across colleges and universities, yet it stands in direct opposition to the primacy of “big data” and “distant reading”—two of the dominant features of the digital humanities. 10 PRINT is nothing if not a return to close reading, to small data. Instead of speaking in terms of terabytes and petabytes, we dwell in the realm of single bits. Instead of studying datasets of unimaginable size we circle iteratively around a single line of code, reading it again and again from different perspectives. Even single characters in that line of code—say, the semicolon—become subject to intense scrutiny and yield surprising finds.
10 PRINT practices making in order to theorize being. My co-author Ian Bogost calls it carpentry. I’ve called it deformative humanities. It’s the idea that we make new things in order to understand old things. In the case of 10 PRINT, my co-authors and I have written a number of ports of the original program that run on contemporaries of the C64, like the Atari VCS, the Apple IIe, and the TRS-80 Color Computer. One of the methodological premises of 10 PRINT is that porting—like the act of translation—reveals new facets of the original source. Porting—again, like translation—also makes visible the broader social context of the original.
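In that spirit, here is a rough Python port of the one-liner—an illustrative sketch of my own, not one of the book’s official ports. In Commodore 64 BASIC, 205.5 + RND(1) falls between 205.5 and 206.5, and CHR$ truncates it to PETSCII character 205 or 206, the two diagonal line graphics; Unicode’s ╲ and ╱ stand in for them here.

```python
import random

def ten_print(length=400):
    """Port sketch of: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10
    Emits a run of ╲ and ╱ characters, each chosen with equal
    probability, which forms a maze-like pattern when the string
    wraps across terminal lines."""
    return "".join(random.choice("╲╱") for _ in range(length))

print(ten_print())
```

Even this crude port makes the book’s point about translation: Python has no PETSCII, no automatic screen wrap, and no endless GOTO loop, so every substitution reveals something the Commodore 64 gave the original program for free.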
In the upcoming days I’ll be posting more about 10 PRINT, discussing the writing process, the challenges of collaborative authorship, our methodological approaches, and some of the rich history we uncovered by looking at a single line of code.
In the meantime, a gorgeous hardcover edition is available (beautifully designed by my co-author, Casey Reas). Or download a free PDF released under a Creative Commons BY-NC-SA license.
On November 2 and 3, George Mason University convened a forum on the Future of Higher Education. Alternating between plenary panels and keynote presentations, the forum brought together observers of higher education as well as faculty and administrators from Mason and beyond. I was invited to appear on a panel about student learning and technology. The majority of the session was dedicated to Q&A moderated by Steve Pearlstein, but I did speak briefly about social pedagogy. Below are my remarks.
This morning I’d like to share a few of my experiences with what you could call social pedagogy—a term I’ve borrowed from Randy Bass at the Center for New Designs in Learning and Scholarship at Georgetown University. Think of social pedagogy as outward-facing pedagogy, in which learners connect to each other and to the world, and not just the professor. Social pedagogy is also a lean-forward pedagogy. At its best a lean-forward pedagogy generates engagement, attention, and anticipation. Students literally lean forward. The opposite of a lean-forward pedagogy is of course a lean-back pedagogy. Just picture a student leaning back in the chair, passive, slack, and even bored.
A lean-forward social pedagogy doesn’t have to involve technology at all, but this morning I want to describe two examples from my own teaching that use Twitter. Last fall I was teaching a science fiction class and we were preparing to watch Ridley Scott’s Blade Runner. Since I wasn’t screening the film in class, students would be watching it in all sorts of contexts: on Netflix in the residence hall, on a reserve DVD upstairs in the JC, rented from iTunes, a BluRay collector’s set at home, and so on. However, I still wanted to create a collective experience out of these disparate viewings. To this end, I asked students to “live tweet” their own viewing, posting to Twitter whatever came to mind as they watched the film.
In this way I turned movie watching—a lean-back activity—into a lean-forward practice. And because the students often directed their tweets as replies to each other, it was social, much more social than viewing the film in class together. Over a 5-day period I had hundreds of tweets coming in, and I used a tool called Storify to track rhetorical and interpretative moves students made during this assignment. In particular, I categorized the incoming tweets, bringing to the surface some underlying themes in my students’ tweets. And then we began the next class period by examining the tweets and the themes they pointed to.
My next example of a social pedagogy assignment comes from later in the semester in the same science fiction class. I had students write a “Twitter essay.” This is an idea I borrowed from Jesse Stommel at Georgia Tech. For this activity, students wrote an “essay” of exactly 140 characters defining the word “alien.” The 140-character constraint makes this essay into a kind of puzzle, one that requires a lean-forward style of engagement. And of course, I posed the essay question in a 140-character tweet:
Again I used Storify to capture my students’ essays and cluster them around themes. I was also able to highlight a Twitter debate that broke out among my students about the differences between the words alien and foreign. This was a productive debate that I’m not sure would have occurred if I hadn’t forced the students into being so precise—because they were on Twitter—about their use of language.
And finally, I copied and pasted the text from all the Twitter essays into Wordle, which generated a word cloud—in which every word is sized according to its frequency.
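The counting behind a Wordle-style cloud is simple enough to sketch in Python: tokenize the text, drop common stop words, and tally what remains. The sample “essays” below are invented for illustration, not my students’ actual tweets.

```python
import re
from collections import Counter

def word_frequencies(tweets):
    """Rough sketch of the tallying behind a word cloud: every word
    is counted, and a cloud renderer would size each word by count."""
    stop = {"the", "a", "an", "of", "to", "is", "and", "in", "my"}
    words = re.findall(r"[a-z']+", " ".join(tweets).lower())
    return Counter(w for w in words if w not in stop)

essays = ["An alien is anything foreign to the self",
          "Alien: the other we refuse to understand"]
print(word_frequencies(essays).most_common(3))
```

The `most_common` list is exactly the information a word cloud encodes visually, which is also why the cloud is so reductive: frequency is all it knows.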
The word cloud gave me an admittedly reductivist snapshot of all the definitions of alien my students came up with. But the image ended up driving our next class discussion, as we debated what made it onto the word cloud and why.
These are two fairly simple, low-stakes activities I did in class. But they highlight this blend of technology and a lean-forward social pedagogy that I have increasingly tried to integrate into my teaching—and to think critically about as a way of fostering inquiry and discovery with my students.
[Crowd photograph courtesy of Flickr user Michael Dornbierer / Creative Commons Licensed]