Tactical Collaborations (2011 MLA Version)

Tactical Collaboration Title

[I was on a panel called “The Open Professoriat(e)” at the 2011 MLA Convention in Los Angeles, in which we focused on the dynamic between academia, social media, and the public. My talk was an abbreviated version of a post that appeared on samplereality in July. Here is the text of the talk as I delivered it at the MLA, interspersed with still images from my presentation. The original slideshow is at the end of this post. Co-panelists Amanda French and Erin Templeton have also posted their talks online.]

Rather than make an argument in the short time I have, I want to make a provocation, urging everyone here to consider the way social media can enable what I call tactical collaborations both within and outside of the professoriate.

I’ve always had trouble keeping the words tactic and strategy straight. Or, as early forms of the words appear in the OED, tactick and the curiously elongated stratagematick.

Tactick and Strategematick

This quote comes from a 17th century translation of a history of Roman emperors (circa 240 AD).

History of Roman Emperors

I love the quote and it tells me that tactics and strategy have always been associated with battle. But I still have trouble telling one from the other. I know one is, roughly speaking, short term, while the other is long range. One is the details and the other the big picture.

I’ll blame the old board game Stratego for my confusion. The placement of my flag, the movement of my scouts, that seemed tactical to me, yet the game was called Stratego.

Stratego

Even diving into the etymology of the words doesn’t help much at first: Tactic is from the ancient Greek τακτóς, meaning arranged or ordered.

Taktos

While Strategy comes from the Greek στρατηγóς, meaning commander or general. A general is supposed to be a big-picture kind of guy, so I guess that makes sense. And I suppose the arrangement of individual elements comes close to the modern day meaning of a military tactic.

Strategy

All of this curiosity about the meaning of the word tactic began last May, when Dan Cohen and Tom Scheinfeldt at the Center for History and New Media at George Mason University announced a crowd-sourced book called Hacking the Academy. They announced it on Twitter on Friday, May 21 and by Friday, May 28, one week later, all the submissions were in. 330 submissions from nearly 200 people.

Hacking the Academy

The collection is now in the final stages of editing, with a table of contents of around 60 pieces by 40 or so different authors. It will be peer-reviewed and published by Digital Culture Books, an imprint of the University of Michigan Press. As you can imagine, the idea of crowdsourcing a scholarly book, in a week no less, generated excitement, questions, and some worthwhile skepticism.

And it was one of these critiques of Hacking the Academy that prompted my thoughts about tactical collaboration. Jennifer Howard, a senior reporter for The Chronicle of Higher Education, posed several questions that would-be hackers ought to consider during the course of hacking the academy. It was Howard’s last question that resonated most with me.

Have You Looked for Friends in the Enemy Camp?

Have you looked for friends in the enemy camp lately? Howard cautioned us that some of the same institutional forces we think we’re fighting might actually be allies when we want to be a force for change. I read Howard’s question and I immediately began rethinking what collaboration means. Instead of a commitment, it’s an expedience. Instead of strategic partners, find immediate allies. Instead of full frontal assaults, infiltrate and disseminate.

In academia we have many tactics for collaboration, but very little tactical collaboration.

And this is how I defined tactical collaboration:

Tactical Collaboration Definition

I’m reminded of de Certeau’s vision of tactics in The Practice of Everyday Life. Unlike a strategy, which operates from a secure base, a tactic, as de Certeau writes, operates “in a position of withdrawal…it is a maneuver ‘within the enemy’s field of vision’” (37).

People Running from Police

De Certeau goes on to add that a tactic “must vigilantly make use of the cracks….It poaches in them. It creates surprises in them. It can be where it is least expected” (37).

Guernica Billboard

So that’s what a tactic is. I should’ve skipped the OED and Stratego and headed straight for de Certeau. He teaches us that strategies, like institutions, depend upon dominance over space—physical as well as discursive space. But tactics rely upon momentary victories in and over time. Tactics require agility, surprise, feigned retreats as often as real retreats. They require collaborations that the more strategically-minded might otherwise discount. And social media presents the perfect landscape for these tactical collaborations to play out.

Dennis Oppenheim Artwork

Despite my being here today, I’m very skeptical of institutions and associations. We live in a world where we can’t idly hope for or rely upon institutional support or recognition. To survive and thrive, humanists must be fleet-footed, mobile, insurgent. Decentralized and nonhierarchical. We need to stop forming committees and begin creating coalitions. We need affinities over affiliations, and networks over institutes.

Dennis Oppenheim Artwork II

Tactical collaboration is crucial for any humanist seeking to open up the professoriate, any scholar seeking to poach from the institutional reserves of knowledge production, any teacher seeking to challenge the ever intensifying bureaucratization and Taylorization of learning, any contingent faculty seeking to forge success and stability out of contingency.

We need tactical collaborations, and we need them now. The strategematick may be the domain of emperors and institutions, but like the word itself, it’s quaint and outdated. Let tactics be our ruse and our practice.

Stratego Board Game Box

Bibliography

Certeau, Michel de. The Practice of Everyday Life. Berkeley: University of California Press, 1984. Print.

Herodian. Herodians of Alexandria his imperiall history of twenty Roman caesars & emperours of his time / First writ in Greek, and now converted into an heroick poem by C.B. Staplyton. London: W. Hunt, 1652. Web. 14 July 2010.

Image Credits

(1) Chase, Andy. Stratego. 2009. <http://www.flickr.com/photos/usonian/3580565990/>.

(2) ;P, Mayu. f1947661. 2008. <http://www.flickr.com/photos/mayu/4778962420/>.

(3) Unger, John T. “American Guernica: A Call for Guerilla Public Art.” <http://johntunger.typepad.com/studio/2005/10/the_guernica_pr.html>.

(4) Oppenheim, Dennis. Reading Position for Second Degree Burn. 1970. <http://www.artinfo.com/news/enlarged_image/21237/14781/>.

(5) Bauman, Frederick. Stratego Family. 2007. <http://www.flickr.com/photos/livingfrisbee/1499088800/>.

Original Slideshow

Criminal Code: The Procedural Logic of Crime in Videogames

Criminal Code

[This is the text of my second talk at the 2011 MLA convention in Los Angeles, for a panel on “Close Reading the Digital.” My talk was accompanied by a Prezi “Zooming” presentation, which I have replicated here with still images (the original slideshow is at the end of this post). In 15 minutes I could only gesture toward some of the broader historical and cultural meanings that resonate outward from code—but I am pursuing this project further and I welcome your thoughts and questions.]

New media critics such as Nick Montfort and Matthew Kirschenbaum have observed that a certain “screen essentialism” pervades new media studies, in which the “digital event on the screen,” as Kirschenbaum puts it (Kirschenbaum 4), becomes the sole object of study at the expense of the underlying computer code, the hardware, the storage devices, and even the non-digital inputs and outputs that make the digital object possible in the first place. There are a number of ways to remedy this essentialism, and the approach that I want to focus on today is the close reading of code.

Micropolis on the Screen

Friedrich Kittler has said that code is the only language that does what it says. But the close reading of code insists that code not only does what it says, it says things it does not do. Like any language, code operates on a literal plane—literal to the machine, that is—but it also operates on an evocative plane, rife with gaps, idiosyncrasies, and suggestive traces of its context. And the more the language emphasizes human legibility (for example, a high-level language like BASIC or Inform 7), the greater the chance that there’s some slippage in the code that is readable by the machine one way and readable by scholars and critics in another.

Today I want to close read some snippets of code from Micropolis, the open-source version of SimCity that was included on the Linux-based XO computers in the One Laptop per Child program.

SimCityBox

Designed by the legendary Will Wright, SimCity was released by Maxis in 1989 on the Commodore 64, and it was the first of many popular Sim games, such as SimAnt and SimFarm, not to mention the enduring SimCity series itself—games that were ported to dozens of platforms, from DOS to the iPad. Electronic Arts owns the rights to the SimCity brand, and in 2008, EA released the source code of the original game into the wild under a GPL License—a General Public License. EA prohibited any resulting branch of the game from using the SimCity name, so the developers, led by Don Hopkins, called it Micropolis, which was in fact Wright’s original name for his city simulation.

Micropolis

From the beginning, SimCity was criticized for presenting a naive vision of urban planning, if not an altogether egregious one. I don’t need to rehearse all those critiques here, but they boil down to what Ian Bogost calls the procedural rhetoric of the game. By procedural rhetoric, Bogost simply means the implicit or explicit argument a computer model makes. Rather than using words like a book, or images like a film, a game “makes a claim about how something works by modeling its processes” (Bogost, “The Proceduralist Style”).

In the case of SimCity, I want to explore a particularly rich site of embedded procedural rhetoric—the procedural rhetoric of crime. I’m hardly the first to think about the way SimCity or Micropolis models crime. Again, these criticisms date back to the nineties. And as recently as 2007, the legendary computer scientist Alan Kay called SimCity a “pernicious…black box,” full of assumptions and “somewhat arbitrary knowledge” that can’t be questioned or changed (Kay).

The Black Box

Kay goes on to illustrate his point using the example of crime in SimCity. SimCity, Kay notes, “gets the players to discover that the way to counter rising crime is to put in more police stations.” Of all the possible options in the real world—increasing funding for education, creating jobs, and so on—it’s the presence of the police that lowers crime in SimCity. That is the procedural rhetoric of the game.

And it doesn’t take long for players to figure it out. In fact, the original manual itself tells the player that “Police Departments lower the crime rate in the surrounding area. This in turn raises property values.”

SimCity Manual on Crime

It’s one thing for the manual to propose a relationship between crime, property values, and law enforcement, but quite another for the player to see that relationship enacted within the simulation. Players have to get a feel for it on their own as they play the game. The goal of the simulation, then, is not so much to win the game as it is to uncover what Lev Manovich calls the “hidden logic” of the game (Manovich 222). A player’s success in a simulation hinges upon discovering the algorithm underlying the game.

But if the manual describes the model to us and players can discover it for themselves through gameplay, then what’s the value of looking at the code of the game? Why bother? What can it tell us that playing the game cannot?

Before I go any further, I want to be clear: I am not a programmer. I couldn’t code my way out of a paper bag. And this leads me to a crucial point I’d like to make today: you don’t have to be a coder to talk about code. Anybody can talk about code. Anybody can close read code. But you do need to develop some degree of what Michael Mateas has called “procedural literacy” (Mateas 1).

Let’s look at a piece of code from Micropolis and practice procedural literacy. This is a snippet from scan.cpp, one of the many sub-programs called by the core Micropolis engine.

scan.cpp

It’s written in C++, one of the most common middle-level programming languages—Firefox is written in C++, for example, as well as Photoshop and nearly every Microsoft product. By paying attention to variable names, even a non-programmer might be able to discern that this code scans the player’s city map and calculates a number of critical statistics: population density, the likelihood of fire, pollution, land value, and the function that originally interested me in Micropolis, a neighborhood’s crime rate.

This specific calculation appears in lines 413-424. We start off with the crime rate variable Z at a baseline of 128, which is not as random as it seems, being exactly half of 256, the number of values an 8-bit byte can hold on the original SimCity platform, the 8-bit Commodore 64.

Z Variable

128 is the baseline and the crime rate either goes up or down from there. The land value variable is subtracted from Z, and then the population density is added to Z:

Z and Population

While the number of police stations lowers Z.

Z and Police Stations

It’s just as the manual said: crime is a function of population density, land value, and police stations, and a strict function at that. But the code makes visible nuances that are absent from the manual’s pithy description of crime rates. For example, land that has no value—land that hasn’t been built upon or utilized in your city—has no crime rate. This shows up in lines 433-434:

Crime Rate Zero
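For readers following along without the slides, here is a minimal paraphrase of that calculation in C++, the language Micropolis itself is written in. This is a sketch of the logic just described, not the actual code from scan.cpp: the function and parameter names are mine, standing in for the engine’s internal map arrays.

```cpp
// Sketch of the crime-rate logic described above; not the actual
// Micropolis scan.cpp. The parameters stand in for the engine's
// internal maps of land value, population density, and police coverage.
int crimeRateFor(int landValue, int populationDensity, int policeCoverage)
{
    if (landValue == 0) {
        return 0;               // unbuilt, valueless land has no crime rate
    }
    int z = 128;                // baseline: half of the 8-bit range
    z -= landValue;             // higher land value lowers crime
    z += populationDensity;     // denser neighborhoods raise crime
    z -= policeCoverage;        // nearby police stations lower crime
    if (z < 0)   z = 0;         // clamp to the 8-bit range
    if (z > 255) z = 255;
    return z;                   // same inputs always yield the same output
}
```

Written out this way, the point of the next paragraph is hard to miss: there is no randomness anywhere in the calculation.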

Also, because of this strict algorithm, there is no chance of a neighborhood existing outside of this model. The algorithm is, in Jeremy Douglass’s words when he saw this code, “absolutely deterministic.” A populous neighborhood with little police presence can never be crime-free. Land value is likewise reduced to a set formula, seen in this equation in lines 264-271:

Land Value in Micropolis

Essentially these lines tell us that land value is a function of the property’s distance from the city center, the type of terrain, the nearby pollution, and the crime rate. Again, though, players will likely discover this for themselves, even if they don’t read the manual, which spells out the formula, explicitly telling us that “the land value of an area is based on terrain, accessibility, pollution, and distance to downtown.”

Land Value in SimCity
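The land value relationship can be paraphrased the same way. Again, this is an illustrative sketch rather than the code in lines 264-271; the names and the simple subtractions are mine, but the inputs are the four factors just described.

```cpp
// Sketch of the land-value relationship described above; not the
// actual Micropolis code. The arithmetic is illustrative only.
int landValueFor(int terrainValue, int distanceToCenter,
                 int pollution, int crimeRate)
{
    int value = terrainValue;      // base value set by the terrain type
    value -= distanceToCenter;     // value falls off away from downtown
    value -= pollution;            // nearby pollution depresses value
    value -= crimeRate;            // so does the neighborhood's crime rate
    return value < 0 ? 0 : value;  // land can bottom out as worthless
}
```

Setting the two sketches side by side also makes the feedback loop explicit: crime depresses land value, and low land value raises crime.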

So there’s an interesting puzzle I’m trying to get at here. How does looking at the code teach us something new? If the manual describes the process, and the game enacts it, what does exploring the code do?

I think back to Sherry Turkle’s now classic work, Life on the Screen, about the relationship between identity formation and what we would now call social media. Turkle spends a great deal of time talking about what she calls, in a Baudrillardian fashion, the “seduction of the simulation.” And by simulations Turkle has in mind exactly what I’m talking about here, the Maxis games like SimCity, SimLife, and SimAnt that were so popular 15 years ago.

Turkle suggests that players can, on the one hand, surrender themselves totally to the simulation, openly accepting whatever processes are modeled within. On the other hand, players can reject the simulation entirely—what Turkle calls “simulation denial.” These are stark opposites, and our reaction to simulations obviously need not be entirely one or the other.

SurrenderReject

There’s a third alternative Turkle proposes: understanding the simulation, exploring its assumptions, both procedural and cultural (Turkle 71-72).

Understand the Simulation

I’d argue that the close reading of code adds a fourth possibility, a fourth response to a simulation. Instead of surrendering to it, or rejecting it, or understanding it, we can deconstruct it. Take it apart. Open up the black box. See all the pieces and how they fit together. Even tweak the code ourselves and recompile it with our own algorithms inside.

Seeing the Parts of the Simulation

When we crack open the code like this, we may well find surprises that neither playing the game nor reading the manual will reveal. Remember, code does what it says, but it also says things it does not do. Let’s consider the code for a file called disasters.cpp. Anyone with a passing familiarity with SimCity might be able to guess what it does: it’s the routine that determines which random disasters will strike your city. The entire 408-line routine is worth looking at, but what I’ll draw your attention to is the section that begins at line 109, where the probabilities of the different possible disasters appear:

disasters.cpp

In the midst of rather generic biblical disasters (you can see here there’s a 22% chance of a fire and a 22% chance of a flood), there is a startling excision of code, the trace of which is visible only in the programmer’s comments. In the original SimCity there was a 1 out of 9 chance that an airplane would crash into the city. After 9/11 this disaster was removed from the code at the request of Electronic Arts.

Disasters Closeup
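For readers without the screenshot, the structure at issue looks roughly like the sketch below. It is not a copy of disasters.cpp; the helper names are placeholders, and only the overall shape matters: a switch over a random roll with nine outcomes, in which the airplane crash survives only as a commented-out branch.

```cpp
// Illustrative sketch of the disaster-selection structure described
// above; not a copy of Micropolis's disasters.cpp. The helper functions
// are placeholders for the engine's real disaster routines.
#include <cstdlib>

static void startFire()  { /* set a random tile ablaze */ }
static void startFlood() { /* flood tiles along the shoreline */ }

void pickRandomDisaster()
{
    switch (std::rand() % 9) {    // one of nine possible outcomes
    case 0:
    case 1:
        startFire();              // 2 of 9: roughly the 22% fire chance
        break;
    case 2:
    case 3:
        startFlood();             // 2 of 9: roughly the 22% flood chance
        break;
    // case 4:
    //     crashAirplane();       // the 1-in-9 airplane crash, removed
    //     break;                 // after 9/11; only the comment remains
    default:
        break;                    // other rolls: other disasters, or none
    }
}
```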

Playing Micropolis, say as one of the children in the OLPC program, we would never notice this erasure. And we’d never notice because the machine doesn’t notice—the erasure stands outside the procedural rhetoric of the game. It’s only visible when we read the code. And then it pops, even for non-programmers. We could raise any number of questions about this decision to elide 9/11. There are questions, for example, about the way the code is commented. None of the other disasters have any kind of contextual, historically-rooted comments, the effect of which is that the other disasters are naturalized—even the human-made disasters, like the Godzilla-like monster that terrorizes an over-polluted city.

There are questions about the relationship between simulation, disaster, and history that call to mind Don DeLillo’s White Noise, where one character tells another, “The more we rehearse disaster, the safer we’ll be from the real thing…. There is no substitute for a planned simulation” (196).

And finally there are questions about corporate influence and censorship—was EA’s request to remove the airplane crash really a request, or more of a condition? How does this relate to EA’s more recent decision in October of 2010 to remove the Taliban from its most recent version of Medal of Honor? If you don’t know, a controversy erupted last fall when word leaked out that Medal of Honor players would be able to assume the role of the Taliban in the multiplayer game. After weeks of holding out, EA ended up changing all references to the Taliban to the unimaginative “Opposing Force.” So at least twice, EA, and by proxy, the videogame industry in general, has erased history, making it more palatable, or as a cynic might see it, more marketable.

I want to close by circling back to Michael Mateas’s idea of procedural literacy. My commitment to critical code studies is ultimately pedagogical as much as it is methodological. I’m interested in how we can teach everyday people, and in particular, nonprogramming undergraduate students, procedural literacy. I think these pieces of code from Micropolis make excellent material for novices, and in fact, I do have my videogame studies students dig around in this source code. Most of them have never programmed, let alone in C++, so I give them some prompts to get them started.

Prompts for Students

And for you today, here in the audience, I have similar questions, about the snippets of code that I showed, but also questions more generally about close reading digital objects. What other approaches are worth taking? What other games, simulations, or applications have the source available for study, and what might you want to look at with those programs? And finally, what are the limits of reading code from a humanist perspective?

Bibliography

Bogost, Ian. “Persuasive Games: The Proceduralist Style.” Gamasutra 21 Jan 2009. Web. 1 Feb 2009. <http://www.gamasutra.com/view/feature/3909/persuasive_games_the_.php?print=1>.

DeLillo, Don. White Noise. Penguin, 1985. Print.

Kay, Alan. “Discussion with Alan Kay about Visual Programming.” Don Hopkins 16 Nov 2007. Web. 30 Dec 2010. <http://www.donhopkins.com/drupal/node/140>.

Kirschenbaum, Matthew. Mechanisms: New Media and the Forensic Imagination. Cambridge Mass.: MIT Press, 2008. Print.

Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001. Print.

Mateas, Michael. “Procedural Literacy: Educating the New Media Practitioner.” Beyond Fun. 2008. 67–83. Print.

Montfort, Nick. “Continuous Paper: The Early Materiality and Workings of Electronic Literature.” Philadelphia, PA, 2004. <http://nickm.com/writing/essays/continuous_paper_mla.html>.

Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995. Print.

Image Credits

(1) Greco, Roberto. SimCity. 2008. Web. 4 Jan 2011. <http://www.flickr.com/photos/robertogreco/2380137251/>.

(2) Wright, Will. SimCity. Maxis, 1989. Manual.

(3) Goldberg, Ken, and Bob Farzin. Dislocation of Intimacy. 1998. San Jose Museum of Art. Web. 14 Jan 2011. <http://goldberg.berkeley.edu/art/doi.html>.

All other images created by the author.

Original Presentation

Twitter is a Happening, to which I am Returning

I quit Twitter.

White Noise and Static

Or, more accurately, I quit twittering. Nearly three weeks ago with no warning to myself or others, I stopped posting on Twitter. I stopped updating Facebook, stopped checking in on Gowalla, stopped being present. I went underground, as far underground as somebody whose whole life is online can go underground.

In three years I had racked up nearly 9,000 tweets. If Twitter were a drug, I’d be diagnosed as a heavy user, posting dozens of times a day. And then I stopped.

Most people probably didn’t notice. A few did. I know that they noticed because my break from social media wasn’t complete. I lurked, intently, in all of these virtual places, most intently on Twitter.

White Noise at 10 Percent

In the weeks I was silent on Twitter I read in my timeline about divorce, disease, death. I read hundreds of tweets about nothing at all. I read tweets about scholarship, about teaching, about grading, about sleeping and not sleeping. Tweets about eating. Tweets about me. Tweets with questions and tweets with answers. And I thought about how I use Twitter, what it means to me, what it means to share my triumphs and my frustrations, my snark and my occasional kindness, my experiments with Twitter itself.

White Noise Static at 20 Percent Opacity

For the longest time the mantra “Blog to reflect, Tweet to connect” was how I thought about Twitter. The origin of that slogan is blogger Barbara Ganley, who was quoted two years ago in a New York Times article on slow blogging. Ganley’s pithy analysis seemed to summarize the difference between blogging and Twitter, and it circulated widely among my friends in the digital humanities. I repeated the slogan myself, even arguing that Twitter was the back channel for the digital humanities, an informal network—the informal network—that connected the graduate students, researchers, teachers, programmers, journalists, librarians, and archivists who work where technology and the humanities meet.

White Noise Static at 30 Percent Opacity

My retreat from Twitter has convinced me, however, that Twitter is not about connections. Saying that you tweet in order to connect is like saying you fly on airplanes in order to get patted down by the TSA. If you’re looking for connections on Twitter, then you’re in the wrong place. And any connections you do happen to form will be random, accidental, haunted by mixed signals and potential humiliations.

I’ve been mulling over a different slogan in my mind. One that captures the multiplicity of Twitter. One that acknowledges the dynamism of Twitter. One that better describes my own antagonistic use of the platform. And it’s this:

Blogging is working through. Twitter is acting out.

White Noise Static at 40 Percent Opacity

Twitter is not about connections. Twitter is about acting out.

I mean “working through” and “acting out” in several ways. There’s the obvious allusion to Freud: working through and acting out roughly correspond to Freud’s distinction between mourning and melancholy. A mourner works through the past, absorbs it, integrates it. A mourner will think about the past, but live into the present. The melancholic, meanwhile, is prone to repetition, revisiting the same traumatic memory, replaying variations of it over and over. The melancholic lashes out, sometimes aggressively, sometimes defensively, often unknowingly.

It’s not difficult to see my use of Twitter as acting out, as rehashing my obsessions and dwelling upon my contentions. Even my break from Twitter is a kind of acting out, a passive-aggressive refusal to play.

But I also mean “acting out” in a more theatrical sense. Acting. Twitter is a performance. On my blog I have readers. But on Twitter I have an audience.

White Noise Static at 50 Percent Opacity

To be sure, it’s a participatory audience. Or at least possibly participatory. And this leads me to another realization about Twitter:

Twitter is a Happening.

I’m using Happening in the sixties New York City art scene sense of the word: an essentially spontaneous artistic event that stands outside—or explodes from within—the formal spaces where creativity is typically safely consumed. Galleries, stages, museums. As Allan Kaprow, one of the founders of the movement, put it in 1961,

[quote]Happenings are events that, put simply, happen. Though the best of them have a decided impact—that is, we feel, “here is something important”—they appear to go nowhere and do not make any particular literary point.[/quote]

Happenings lack any clear divide between the audience and the performers. Happenings are emergent, generated from the flimsiest of intentions. Happenings cannot be measured in terms of success, because even when they go wrong, they have gone right. Chance reigns supreme, and if a Happening can be reproduced, reenacted, it is no longer a happening. And if it’s not a Happening, then nothing happened.

White Noise Static at 60 Percent Opacity

Whether it’s a Twitter-only mock conference, ridiculous fake direct messages, or absurd tips making fun of our professional tendencies, I have insisted time and time again—though without consciously framing it this way—that Twitter ought to be a space for Happenings.

If you’re not involved somehow in a Twitter Happening—if you’re not inching toward participating in some spontaneous communal outburst of analysis or creativity—then you might as well switch to Facebook for making your connections.

Because Twitter is a Happening that thrives on participation, there’s something else I’ve realized about Twitter:

Twitter is better when I’m tweeting.

White Noise Static at 70 Percent Opacity

If you are one of the nearly four hundred people I follow, don’t take this the wrong way, but Twitter is better when I’m around. I don’t mean to say that the rest of you are uninteresting. But until I or a few other like-minded people in my Twitter stream do something unexpected, Twitter feels flat, a polite conversation that may well be informative but is nothing that will leave me wondering at the end of the day, what the hell just happened?

I suppose this sounds arrogant. “Twitter is better when I’m around”?? I mean, who on earth made me judge of all of Twitterdom?? And indeed, this entire blog post likely seems self-indulgent. But I didn’t write it for you. I wrote it for me. I’m working through here. And besides, I’ve been criticized so many times by the people who know me best in real life, criticized for being too modest, too eager to downplay my own voice, that I’ll risk this one time sounding self-important.

There’s one final realization I’ve had about Twitter. For a while I had been wondering whether every word I wrote on Twitter was one less word I would write somewhere else. Was Twitter distracting me from what I really needed to write? Was Twitter making me less prolific? And so here it is, my most coherent articulation of what led me to break suddenly from social media: I quit Twitter because I wished to write deliberately, to type only the essential words of my research, and see if I could not learn what Twitterless life had to teach, and not, when I came up for tenure, discover that I had not written at all.

Or something like that.

It only took a few days before I knew the answer to my question about Twitter and writing. And it’s this: writing is not a zero sum game.

I write more when I tweet.

This is not as self-evident a truth as it sounds. Obviously every tweet means I’ve written everything I’ve ever written in my life, plus that one additional tweet. So yes, by tweeting I have written more. But in fact I write more of everything when I tweet. I have learned in the past few weeks that Twitter is a multiplier. Twitter is generative. Twitter is an engine of words, and when I tweet, all my writing, offline and on, private and public, benefits. There’s more of it, and it’s better.

And so I am returning to Twitter. While I had experimented with tweeterish postcards during my break from Twitter—what you might call slow tweeting—I am back on Twitter, and back for good. Twitter is a Happening. It’s not a space for connections, it’s a space for composition. I invite you to unfollow me if you think differently, for I can promise nothing about what I will or will not tweet and with what frequency these tweets will or will not come. I would also invite you to the Happening on Twitter, but that invitation is not mine to extend. It belongs to no one and to everyone. It belongs to the crowd.

White Noise and Static

Initial Thought on Archiving Social Media

My head is buzzing from the one-day Archiving Social Media workshop organized by the Center for History and New Media at George Mason University and our close neighbor, the University of Mary Washington. The workshop wrapped up only a few hours ago, but I’m already feeling a need to synthesize some thoughts about archives, social media, and the humanities. And I know I won’t have time in the next day or two to do this, so I’m taking a moment to set down a single thought.

And it is this: we need a politics and poetics of the digital archive. We need a politics and poetics of the social media archive.

Much work has been done on the poetics of traditional archives—Carolyn Steedman’s Dust comes to mind—and there’s emerging political work on social media archives. But there is no deliberate attempt by humanists to understand and articulate the poetics of the social media archive.

And this is exactly what humanists should be doing. Matthew Kirschenbaum asked today, incisively, what can humanists bring to discussions about social media and archives. My answer is this: we don’t need to design new tools, create new implementation plans, or debate practical use issues. We need to think through social media archives and think through the poetics of these archives. We need to discern and articulate the social meaning of social archives. That’s what humanists can do.

One Week, One Tool, Many Anthologies

Many of you have already heard about Anthologize, the blog-to-book publishing tool created in one week by a crack team of twelve digital humanists, funded by the NEH’s Office of Digital Humanities, and shepherded by George Mason University’s Center for History and New Media. Until the moment of the tool’s unveiling on Tuesday, August 3, very few people knew what the tool was going to be. That would include me.

So, it was entirely coincidental that the night before Anthologize’s release, I tweeted:

I had no idea that the One Week Team was working on a WordPress plugin that could take our blogs and turn them into formats suitable for e-readers or publishers like Lulu.com (the exportable formats include ePub, PDF, RTF, and TEI…so far). When I got a sneak preview of Anthologize via the outreach team’s press kit, it was only natural that I revisit my previous night’s tweet, with this update:

I’m willing to stand behind this statement—Twitter and Blogs are the first drafts of scholarship. All they need are better binding—and I’m even more willing to argue that Anthologize can provide that binding.

But the genius of Anthologize isn’t that it lets you turn blog posts into PDFs. There are already many ways to do this. The genius of the tool is the way it lets you remix a blog into a bound object. A quick look at the manage project page will show how this works:

All of your blog’s posts are listed in the left column, and you can filter them by tag or category. Then you drag-and-drop specific posts into the “Parts” column on the right side of the page. Think of each Part as a separate section or chapter of your final anthology. You can easily create new parts, and rearrange the parts and posts until you’ve found the order you’re looking for.

Using the “Import Content” tool that’s built into Anthologize, you aren’t even limited to your own blog postings. You can import anything that has an RSS feed, from Twitter updates to feeds from entirely different blogs and blogging platforms (such as Movable Type or Blogger). You can remix from a countless number of sources, and then compile it all together into one slick file. This remixing isn’t simply an afterthought of Anthologize. It defines the plugin and has enormous potential for scholars and teachers alike, ranging from organizing tenure material to building student portfolios.

Something else that’s neat about how Anthologize pulls in content is that draft (i.e. unpublished) posts show up alongside published posts in the left hand column. In other words, drafts can be published in your Anthologize project, even if they were never actually published on your blog. This feature makes it possible to create Anthologize projects without even making the content public first (though why would you want to?).

From Alpha to Beta to You

As excited as I am about the possibilities of Anthologize, don’t be misled into thinking that the tool is a ready-to-go, full-fledged publishing solution. Make no mistake about Anthologize: this is an extremely alpha version of the final plugin. If the Greeks had a letter that came before alpha, Anthologize would be it. There are several major known issues, and there are many features yet to add. But don’t forget: Anthologize was developed in under 200 hours. There were no months-long team meetings, no protracted management decisions, no obscene Gantt charts. The team behind Anthologize came and saw and coded, from brainstorm to repository in one week.
[pullquote align=”left”]The team behind Anthologize came and saw and coded, from brainstorm to repository in one week.[/pullquote] The week is over, and they’re still working, but now it’s your turn too. Try it out, and let the team know what works, what doesn’t, what you might use it for, and what you’d like to see in the next version. There’s an Anthologize Users Group you can join to share with other users and the official outreach team, and there’s also the Anthologize Development Group, where you can share your bugs and issues directly with the development team.

As for me, I’m already working on a wishlist of what I’d like to see in Anthologize. Here are just a few thoughts:

  • More use of metadata. I imagine future releases will allow user-selected metadata to be included in the Anthologized content. For example, it’d be great to have the option of including the original publication date.
  • Cover images. It’s already possible to include custom acknowledgments and dedications in the opening pages of the Anthologized project, but it’ll be crucial to be able to include a custom image as the anthology front cover.
  • Preservation of formatting. Right now quite a bit of formatting is stripped away when posts are anthologized. Block quotes, for example, become indistinguishable from the rest of the text, as do many headers and titles.
  • Fine-grained image control. A major bug prevents many blog post images from showing up in the Anthologize-generated book. Once this is fixed, it’d be wonderful to have even greater control of images (such as image resolution, alignment, and captions).
  • I haven’t experimented with Anthologize on WordPressMU or BuddyPress yet, but it’s a natural fit. Imagine each user being able to cull through tons of posts on a multi-user blog, and publishing a custom-made portfolio, comprised of posts that come from different users and different blogs.

As I play with Anthologize, talk with the developers, and share with other users, I’m sure I’ll come up with more suggestions for features, as well as more ways Anthologize can be used right now, as is. I encourage you to do the same. You’ll join a growing contingent of researchers, teachers, archivists, librarians, and students who are part of an open-source movement, but more importantly, part of a movement to change the very nature of how we construct and share knowledge in the 21st century.

Tactical Collaboration: or, Skilfull in both parts of War, Tactick and Stratagematick

[Note: See also the MLA 2011 version of this post, which I gave at panel discussion on “The Open Professoriat(e)”]

“Skilfull in both parts of War, Tactick and Stratagematick.”

From Herodians of Alexandria: his imperiall history of twenty Roman cæsars & emperours of his time. First writ in Greek, and now converted into an heroick poem by C.B: Stapylton (London: Printed by W. Hunt for the author, 1652)

I’ve always had trouble keeping tactic and strategy straight. And don’t even get me started on tactick and stratagematick, cited by the Oxford English Dictionary as very early forms of the words in English. I knew that one was, roughly speaking, short term, while the other was long range. One was the details, the other the big picture. But I always got confused about which was which. I’m not exactly sure what the root of my confusion was, but the game Stratego makes as good a scapegoat as any. The placement of my flag, the movement of my scouts, that seemed tactical to me, yet the game was called Stratego. It was enough to blow a young game player’s mind.

Even diving into the etymology of the words, which is how I tend to solve these puzzles nowadays, doesn’t help much at first:

  • Tactic, from the ancient Greek τακτóς, meaning arranged or ordered
  • Strategy, from the Greek στρατηγóς, meaning commander or general

A general is supposed to be a big-picture kind of guy, so I guess that makes sense. And I suppose the arrangement of individual elements comes close to the modern day meaning of a military tactic. (Which leads me to dispute the name of Stratego again; the game should more properly be called Tactico. Unless your father breaks in, commandeering your pieces, as shown on the original game box. Then you’re back to the strategematick.)

In any case, I’ve been thinking about tactics lately. More to the point, I’ve been thinking about the tactics of collaboration. And to make an even finer point, I’ve been thinking about tactical collaboration.

This line of inquiry began in May, amidst the one-week creation of the crowdsourced anti-collection, Hacking the Academy, edited—though curated might be the better term—by Dan Cohen and Tom Scheinfeldt at the Center for History and New Media at George Mason University. The idea of crowdsourcing a scholarly book (to be published, it’s worth noting, by Digital Culture Books, an imprint of the University of Michigan Press and University of Michigan Library) generated much excitement, many questions, and some worthwhile skepticism incorporated into the book itself.

It’s one of these critiques of Hacking the Academy that prompted my thoughts about tactical collaboration. Jennifer Howard, a senior reporter for The Chronicle of Higher Education, asked three key questions that “the forces of change” should consider during the course of hacking the academy. It was Howard’s last question that resonated most with me:

Have you looked for friends in the enemy camp lately? Or: Maybe you will find allies where you don’t expect any. As a journalist, I’m no stranger to generalizations. Still, it’s disconcerting to go to different conferences and hear Entire Category X—administrators/university presses/librarians/journal editors/fill in the blank—written off as part of the problem when at least a few daring souls might not mind being part of a solution. It may not be *your* solution. You might have to venture a closer look to find out. I can’t say what you will discover. It may not be at all what you expect. It might be exactly what you expect. Let me know.

[pullquote align=”right”]The enemy of your enemy may be your friend. But your enemy may be your friend as well.[/pullquote] Have you looked for friends in the enemy camp lately? We all know that the enemy of your enemy may be your friend. But your enemy may be your friend as well when you want to be a force for change. I read Howard’s question and immediately began thinking about collaboration in a new way. Instead of a commitment, it’s an expedience. Instead of strategic partners, find immediate allies. Instead of full frontal assaults, infiltrate and disseminate. In academia we have many tactics for collaboration, but very little tactical collaboration:

Tactical Collaboration: fleeting, fugitive collaboration that takes place suddenly, across ideologies, disciplines, pedagogies, and technologies.

I’m reminded of de Certeau’s vision of tactics in The Practice of Everyday Life. Unlike a strategy, which operates from a secure base of its own, a tactic, as the Jesuit scholar writes,

must play on and with a terrain imposed on it and organized by the law of a foreign power. It does not have the means to keep to itself, at a distance, in a position of withdrawal, foresight, and self-collection: it is a maneuver “within the enemy’s field of vision,” as von Bülow puts it, and within enemy territory. It does not, therefore, have the options of planning general strategy…. It operates in isolated actions, blow by blow. It takes advantages of “opportunities” and depends on them, being without any base where it could stockpile its winnings, build up its own position, and plan raids. What it wins it cannot keep…. It must vigilantly make use of the cracks that particular conjunctions open in the surveillance of proprietary powers. It poaches in them. It creates surprises in them. It can be where it is least expected.

Now I understand what a tactic is. Strategies, like institutions, depend upon dominance over space—physical space as well as discursive space. But tactics rely upon momentary victories in and over time, a temporalization of resistance. Because tactics are of the moment, they require agility, nimbleness, feigned retreats as often as real retreats. And they require collaborations that the more strategically-minded might otherwise discount. Recalling some of my recent writings on the state of academia, such as my underconference manifesto and my eulogy for the digital humanities center, I realize that what I have been thinking about all along are tactical collaborations. As I wrote in March,

Don’t hope for or rely upon institutional support or recognition. To survive and thrive, digital humanists must be agile, mobile, insurgent. Decentralized and nonhierarchical.

Stop forming committees and begin creating coalitions. Seek affinities over affiliations, networks over institutes.

I was speaking then specifically about the digital humanities, but I’d argue that my call for mobility over centralization is crucial for any humanist seeking to hack the academy, any scholar seeking to poach from the institutional reserves of knowledge production, any teacher seeking to challenge the ever intensifying bureaucratization and systematization of learning, any contingent faculty seeking to forge success and stability from contingency.

We need tactical collaborations, and we need them now. And now, and now. The strategematick may be the domain of emperors and institutions, but let the tactick be the ruse and the practice of you and me.

[Stratego Family image courtesy of Frederick Bauman, Creative Commons Licensed]

Fight Club Soap, Sold by SD-6

Bethany Nowviskie has aptly summed up the current standoff between the University of California system and the Nature Publishing Group as a case of fight club soap. Bethany explains the metaphor much better than I can (I urge you to read her post), and she boils it down with even more economy on Twitter: “Fight club soap = our own intellectual labor sold back to us as a costly product.” As Bethany elaborated in another Twitter post, it’s an allusion to “overpriced soap [in the movie Fight Club] marketed to rich women, made from [the liposuctioned fat of] their own bodies.” In the case of Nature and other scientific journals with premium subscription models, it means “universities buying back the labor they already paid for.”

As news about the conflict was making the rounds, Tom Scheinfeldt noted that the “Nature Publishing Group is a division of Macmillan, the company that played hardball w/ Amazon over ebook pricing.” Inspired by Bethany’s pop culture metaphor and Tom’s observation about the corporate structure of the NPG, I recalled a scene from the first season of JJ Abrams’ television series Alias. Secret agent Sidney Bristow has begun working as a double-agent for the CIA, trying to take down SD-6, the spy organization Sidney works for and which she thought was a legitimate government entity—but which, it turns out, is a criminal organization. In the second episode, “So It Begins,” Sidney draws a map of SD-6’s structure for her CIA handler, Michael Vaughn:

Sidney naively believes her diagram represents the entirety of SD-6. To Sidney’s dismay, however, Vaughn reveals that her legal pad rendering is a tiny piece of a much larger organization:

This is honestly the only scene I remember from five seasons of Alias. For some reason it stuck with me through the years. Perhaps because I see that larger map as a metaphor for all of JJ Abrams’ work—incomprehensible conspiratorial structures bound to collapse under their own weight.

Why am I digging out the metaphor of SD-6 now?

Because this is how the publishing industry looks (minus the criminal activity, mostly).

If we were to draw a corporate map of the Nature Publishing Group, it would look more like Michael Vaughn’s intricate diagram than Sidney Bristow’s crude—I’d say quaint—sketch.

The Nature Publishing Group is owned, as Tom points out, by Macmillan. But who owns Macmillan? The answer is Verlagsgruppe Georg von Holtzbrinck—the Georg von Holtzbrinck Publishing Group. I won’t list all of what Verlagsgruppe Georg von Holtzbrinck controls here, but it includes many of the biggest names in publishing: Farrar, Straus & Giroux, Palgrave Macmillan, Picador, Tor, Bedford/St. Martin’s, and of course Nature and a host of other academic journals.

Aside from realizing after all these years that Alias was a send-up of corporate America rather than some post-Cold-War spy drama, there is an important conclusion to draw here.

[pullquote align=”left”]We have privatized the distribution of knowledge. We have blackwatered knowledge.[/pullquote]

Thinking about the relationship between Nature and the sprawling multinational corporation that owns it reveals the extent to which we have privatized the distribution of knowledge. We have blackwatered knowledge. Knowledge that should belong to the people and universities that produced it.

The government has increasingly outsourced many services to private outfits like Blackwater, KBR, and Halliburton; in the same way, universities, colleges, and libraries have let go of whatever tenuous grasp they once held over their intellectual property. Public and private institutions of higher learning have ceded control to profit-driven enterprises like the Nature Publishing Group, EBSCO, and Reed Elsevier. And like SD-6, whose tentacles are wide-reaching yet difficult to trace, these publishers ruthlessly dominate their respective markets, leaving students, researchers, librarians, and journalists few alternatives.

Yes, I have just likened the publishing industry to a fictional criminal organization. The real question is, what are you going to do about it?

Haunts: Place, Play, and Trauma

Foursquare and its brethren (Gowalla, Brightkite, Loopt, and so on) are the latest social media darlings, but honestly, are they really all that useful? Sharing your location with your friends is not very compelling when you spend your life in the same four places (home, office, classroom, coffee shop). Are these apps really even fun? Does becoming the Mayor of a Shell filling station or earning the Crunked badge for checking into four different airport terminals on the same night* count as fun? I hope not. In truth, making fun of Foursquare is more fun than actually using Foursquare.

*The Crunked badge is for checking into four separate locations during a single evening. They don’t all have to be airport terminals. That’s just my own quirk.

Aside from the free chips I got for checking into a California Tortilla, the only redeeming value of these geolocation apps is that they offer the slightest glimmer—a glimmer!—of creative and pedagogical use. While some of the benefits of geolocation have been immediately seized upon by museums and historians—think of the partnership between Foursquare and the History Channel—very few people have considered using geolocation in a literary context. Even less attention has been paid to the ways geolocation can foster critical and creative thinking. So I’ve been pondering re-purposing Foursquare and its ilk in ways unintended and unforeseen by their creators.

[pullquote align=”right”]Let’s turn locative media into platforms for renegotiating space and telling stories[/pullquote]Following Rob MacDougall’s call for playful historical thinking, I’ve been imagining what you could call playful geographic thinking. Let’s turn locative media from gimmicky Entertainment coupon books and glorified historical guidebooks into platforms for renegotiating space and telling stories.

Let’s turn them into something that truly resembles play. And here I’ll use Eric Zimmerman and Katie Salen’s concept of play: free movement within a more rigid structure.

In this case, that rigid structure comes from the core mechanics of the different geolocation apps: checking in and tagging specific places with tips or comments. What’s supposed to happen is that users check in to bars or restaurants and then post tips on the best drinks or bargains. But what can happen, given the free movement within this structure, is that users can define their own places and add tips that range from lewd to absurd.

This is exactly what Dean Terry is doing. Along with his colleagues and students at the Emerging Media and Communication program at the University of Texas at Dallas, Dean has been renaming spaces and making his own places. Even better, Dean and his group at the MobileLab at UT Dallas are not only testing the limits of existing geolocation apps, they’re building one of their own.

I’m not designing my own app, but I am playing with the commercial apps. And again, by playing, I mean moving freely within a larger, more constrained structure. For instance, within my dully named campus office building, Robinson A, I’ve created my own space, The Office of Incandescent Light and Industrial Runoff. Which is pretty much how I think of my office. And I’m mayor there, thank you very much.

Likewise, when I’m home, I often check into the Treehouse of Sighs. I have an actual treehouse there, but the Treehouse of Sighs is not that one. The Treehouse of Sighs exists only in my mind. It’s a metaphysical Hotel California. You can check in any time you like, but you can never be there.

Just as evocative as creating your own space is tagging existing spaces with virtual graffiti, which you can use to create a counter-factual history of a place. Anyone who checks into the Starbucks on my campus can see my advice regarding the fireplace there. Also on GMU’s campus, I’ve uncovered Fenwick Library’s dirty little secret. And sometimes I leave surrealist tips in public places, like this epigram in yet another airport terminal:

All of this play has led me to think about using geolocative media with my students. Next spring I’m teaching an undergraduate class called “Textual Media,” a vague title that I’ve taken to describing as post-print fiction. My initial idea for using Foursquare was to have students add new venues to the app’s database, with the stipulation that these new venues be Foucauldian “Other Spaces”—parking decks, overpasses, bus depots, etc.—that stand in sharp contrast to the officially sanctioned places on Foursquare (coffee shops, restaurants, bars, etc.). One of the points I’d like to make is that much of our lives is actually spent in these nether-places that are neither here nor there. Tracking our movements in these unglamorous but not unimportant unplaces could be a revelation to my students. It might actually be one of the best uses of geolocation—to defamiliarize our daily surroundings.

I recently participated in a geolocation session at THATCamp that helped me refine some of these ideas. We had about fifteen historians, librarians, archivists, literary scholars, and other humanists at the session. We broke off into groups, with the mission of hacking existing geolocation apps for teaching or learning. I worked with Christa Willaford and Christina Jenkins, and as befits brainstorming about space, we left the windowless room, left the building entirely, and stood out near a small field (that’s not even on the outdated satellite image of the place) and came up with the idea we called Haunts.

Haunts is about the secret stories of spaces.

Haunts is about locative trauma.

Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.

The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.

[pullquote align=”left”]They’ve stumbled upon a fictional world haunting the real one.[/pullquote]Emerge for whom? For the other teams in the class. But also for random strangers using the apps, who have no idea that they’ve stumbled upon a fictional world augmenting the real one. A fictional world haunting the real one.

There are several twists that make Haunts more than simple place-based creative writing. For starters, most fiction doesn’t require any kind of breadcrumb trail more complicated than sequential page numbers. In Haunts, however, students will need to create clues to act as what Marc Ruppel calls migratory cues—nudging participants from one locale to the next, from one medium to the next. These cues might be suggestive references left in a tip, or perhaps obliquely embedded in a photograph taken at the check-in point. (Most geolocation apps allow photographs to be associated with a place; Foursquare is a holdout in this regard, though third-party services like picplz offer a work-around.)

Another twist subverts the tendency of geolocation apps to reward repeat visits to a single locale. Check in enough times at your coffee shop with Foursquare and you become “mayor” of the place. Haunts disincentivizes multiple visits. Check in too many times at the same place and you become a “ghost.” No longer among the living, you are stuck in a single place, barred from leaving tips anywhere else. Like a ghost, you haunt that space for the rest of the game. It’s a fate players would probably want to avoid, yet players will nonetheless be compelled to revisit destinations, in order to fill in narrative gaps as either writers or readers.
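Since Haunts exists only as an idea at this point, any code is necessarily hypothetical, but the ghost rule is simple enough to sketch. Everything below, including the check-in threshold, is invented purely for illustration; nothing in the design fixes those details.

```cpp
// Purely hypothetical sketch of the Haunts "ghost" rule; the project
// described above was never specified at this level of detail, and
// the check-in threshold is invented for illustration.
#include <map>
#include <string>

struct HauntsPlayer {
    std::map<std::string, int> checkins;  // visits per venue
    std::string hauntedVenue;             // empty while still "alive"

    bool isGhost() const { return !hauntedVenue.empty(); }

    // Returns true if the check-in (and any tip left there) is accepted.
    bool checkIn(const std::string& venue) {
        if (isGhost() && venue != hauntedVenue) {
            return false;                 // ghosts are stuck in one place
        }
        const int kGhostThreshold = 5;    // invented number of repeat visits
        if (++checkins[venue] >= kGhostThreshold) {
            hauntedVenue = venue;         // too many returns: haunt this spot
        }
        return true;
    }
};
```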

[pullquote align=”right”]Imagine the same traumatic kernel, being told again and again, from different points of view.[/pullquote]The final twist is that Haunts does not rely only upon Foursquare. All of the geolocative apps have the same core functionality. This means that one team can use Foursquare, while another team uses Gowalla, and yet another Brightkite. Each team will weave parallel yet diverging stories across the same series of spaces. Each Haunt hosts a number of haunts. The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of view. Different narrative and geographic points of view. Eventually these multiple paths could be aggregated into a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.
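
The master database, in turn, could be as modest as a shared log of tips keyed by place, with each app treated as a separate narrative layer. The sketch below imagines that aggregation; the field names and sample fragments are invented for illustration and come from no actual Foursquare, Gowalla, or Brightkite data.

    # A hypothetical master database for Haunts: every team's fragments,
    # grouped by place so that one location shows all of its layered haunts.
    # Field names and sample tips are invented for illustration.
    from collections import defaultdict

    tips = [
        {"app": "foursquare", "team": "A", "place": "bus depot", "order": 1,
         "tip": "She waited here every night for a bus that never came."},
        {"app": "gowalla", "team": "B", "place": "bus depot", "order": 1,
         "tip": "The bench still carries her initials, scratched in twice."},
        {"app": "brightkite", "team": "C", "place": "parking deck, level 3", "order": 1,
         "tip": "The cameras on this level were the first to fail."},
    ]

    master = defaultdict(list)
    for t in tips:
        master[t["place"]].append((t["order"], t["app"], t["team"], t["tip"]))

    for place, fragments in sorted(master.items()):
        print(place)
        for order, app, team, tip in sorted(fragments):
            print(f"  [{app}, team {team}, fragment {order}] {tip}")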

There is still much to figure out with Haunts. But I find the project compelling, and even necessary. The endeavor turns a consumer-based model of mobile computing into an authorship-based model. It is a uniquely collaborative activity, but also one that invites individual introspection. It imagines trauma as both private and public, deeply personal yet situated within shared semiotic domains. It operates at the intersection between game and story, between reading and writing, between the real and the virtual. And it might finally make geolocation worth paying attention to.

Forget Unconferences, Let’s Think about Underconferences

In a few days the latest iteration of THATCamp will convene on the campus of George Mason University, hosted by the Center for History and New Media. Except “convene” really isn’t the right word. Most of my readers will already know that The Technology and Humanities Camp is an “unconference,” which, as Ethan Watrall explains on ProfHacker, is “a lightly organized conference in which the attendees themselves determine the schedule.” You can’t really convene such a self-emergent event. But 75 or so participants will nonetheless be there on Saturday morning, and we will indeed get started, figuring out the sessions democratically and then sharing ideas and conversation. This format takes the place of “sharing” (by which I mean dully reading) 20-minute papers that, through a bizarre rift in the space-time continuum, take 30 minutes to read, leaving little time for discussion.

[pullquote align=”left”]If you go into a panel knowing exactly what you’re going to say or what you’ve already said, there’s little room for exploration or discovery.[/pullquote]

The unconference obviously stands in contrast to the top-down, largely monologic model of the traditional conference. Most THATCamp attendees rave about the experience, and they find themselves craving similar open-ended panels at the more staid academic conferences in their respective fields. Change is slow to come, of course. What happens for the most part are slight tweaks to the existing model. Instead of four people reading 20-minute papers during a session, four people might share 20-minute papers beforehand, with the session time dedicated to talking about those 20-minute papers. Yet this model still relies on the sharing of prepared material. If you go into a panel knowing exactly what you’re going to say or what you’ve already said, there’s very little room for actual exploration or discovery. It reminds me of Nietzsche’s line that finding “truth” is like someone hiding an object in a bush and later being astonished to find it there. That’s the shape of disingenuous discovery at academic conferences.

So what’s a poor idealistic professor to do?

Let’s forget about unconferences, even as they gain momentum, and start thinking about underconferences.

What’s an underconference?

Before I answer that, let’s run through some other promising alternative conference models:

The Virtual Conference: This is the conference held entirely online, in which the time and space limitations of the real world can be broken at will. The recent Critical Code Studies Working Group, held over six weeks this spring, was a good example, though the conference was, unfortunately, only open to actual participants. The proceedings will be published on Electronic Book Review, however, and at least one research idea seeded at the virtual conference may see the light of day in a more traditional publishing venue. HASTAC (Humanities, Arts, Science, and Technology Advanced Collaboratory) has had success with its virtual conference as well.

The Simulated Conference: Like Baudrillard’s simulacrum, this is the simulation of a conference for which there is no original, the conference for which there is no conference. This sounds impossible, but in fact I hosted an entirely simulated conference one weekend in February 2010. It was a particularly conference-heavy weekend for the digital humanities, and since I couldn’t attend any of them, I created one of my own: MarksDH2010. Spurred on at first by Ian Bogost and Matt Gold, the simulated conference turned into a weekend affair, hosted entirely on Twitter, and catered by Halliburton. Dozens of participants spontaneously joined in the fun, and in the very act of lampooning traditional conferences (e.g. see my notes on the fictional Henri Jenquin’s keynote), I humbly suggest we advanced the humanities by at least a few virtual inches. As I later explained, MarksDH2010 “was a folie à deux and then some.” You can read the complete archives, in chronological order, and decide for yourself about that characterization.

The Unconference: Do I need to say more about the unconference? Read about the idea in theory, or see it in practice by following the upcoming THATCamp Prime on Twitter.

The Underconference: The virtual conference and the simulated conference are both made possible by technology. They take place at a distance, mediated by screens. The final model I wish to consider is the opposite, rooted in physical space, requiring actual—not virtual—bodies. This is not the unconference, but the underconference. The prerequisite of the underconference is the conference. There is the official conference—say, the MLA—and at the very same time there is an entirely parallel conference, running alongside—no, under—the official conference. Think of it as the Trystero of academia. Inspired by the Situationists, Happenings, flash mobs, Bakhtin, ARGs, and the absurdist political theater of the Yippies, the underconference is the carnival in the churchyard. Transgressive play at the very doorstep of institutional order. And like most manifestations of the carnivalesque, the underconference is at its heart very serious business.

[pullquote align=”right”]The participants of the underconference are also participants in the conference. They are not enemies, they are co-conspirators.[/pullquote]

Let me be clear, though. The underconference is not a chaotic free-for-all. Just as carnival reinforces many of the ideas it seems to make fun of, the underconference ultimately supports the goals of the conference itself: sharing ideas, discovering new texts and new approaches, contributing to the production of knowledge, and even that tawdry business of networking. The participants of the underconference are also participants in the conference. They are not enemies, they are co-conspirators. The underconference is not mean-spirited; in fact, it seeks to overcome the petty nitpicking that counts as conversation in the conference rooms.

The Underconference is:

  1. Playful, exploring the boundaries of an existing structure;
  2. Collaborative, rather than antagonistic; and
  3. Eruptive, not disruptive.

What might an underconference actually look like?

  • Whereas the work of the conference takes place in meeting rooms and exhibit halls, the underconference takes place in “the streets” of the conference: the hallways and stairwells, the lobbies and bars.
  • The underconference begins with a few “seed” shadow sessions, planned and coordinated events that occur in the public spaces of the conference venue: an unannounced poetry reading in a lobby, an impromptu Pecha Kucha projected inside an elevator, a panel discussion in the fitness room.
  • As the underconference builds momentum, bystanders who find themselves in the midst of an unevent are encouraged to recruit others and to hold their own improvised sessions.
  • The underconference has much to learn from alternate reality games (ARGs), and should incorporate scavenger hunts, geolocation, environmental puzzles, and even a reward or badge system that parodies the official system of awards and prizes.
  • I have reason to believe that at least a few of the major academic conferences would look the other way if they were to find themselves paired with an underconference, if not openly sanction a parallel conference. Support might eventually take the form of dedicated space, perhaps the academic equivalent of Harry Potter’s Room of Requirement.

Do you get the idea? It’s a bold and ambitious plan, and I don’t expect many to think it’s doable, let alone worthwhile. Which is exactly why I want to do it. My experiences with virtual conferences, simulated conferences, and unconferences have convinced me that good things come from challenging the conventions of academic discourse. For every institutionalized practice we must develop a counter-practice. For every preordained discussion there should be an infusion of unpredictability and surprise. For every conference there should be an underconference.

On the Death of the Digital Humanities Center

Two or three years ago it would have been difficult to imagine a university shuttering an internationally recognized program, one of the leading such programs in the country.

Oh, wait. Never mind.

That happens all the time.

My own experience tells me that the program on the chopping block is usually in a marginalized field, one using new methodologies, producing hard-to-classify work, heavily interdisciplinary, challenging many entrenched institutional forces, and subject to an endless number of brutal personal and professional territorial battles. American Studies, Cultural Studies, Folklore Studies. It’s happened to them all.

Sometimes the programs die a slow death, downsized from a department to a program, then to a center, and finally to a URL. They’re dismantled one esteemed professor at a time, their budgets and their space shrinking ever smaller, their funding for graduate students dwindling to nothing. Sometimes the programs die spectacularly fast but no less ignobly, the executioner’s axe visible only in the instant replay. The recession makes this quick death easy to rationalize from a state legislator’s or university administrator’s perspective. Today’s cutting-edge initiative is tomorrow’s expendable expenditure.

Indeed, financial considerations seem to have driven a provost-appointed task force’s recommendation that the renowned film studies program at the University of Iowa be eliminated. Such drastic cutbacks make me wonder about innovative programs at my own university, where the state is sharply curtailing public funding. (The state has funded up to 70% of George Mason University’s budget in the recent past, but now Virginia only provides 25%, a figure that is certain to fall even lower in the years ahead.) And then I wonder about innovative programs and initiatives at other colleges and universities.

And then I fear for the digital humanities center.

There is no single model for the digital humanities center. Some focus on pedagogy. Others on research. Some build things. Others host things. Some do it all. Regardless, in most cases the digital humanities center is institutionally supported, grant dependent, physically situated, and powered by vision and personnel. A sudden change in any one of these underpinnings can threaten the existence of the entire structure.

Despite the noise at last year’s MLA Convention that the digital humanities were an emerging recession-proof, bubble-proof, bullet-proof field in academia, I fear for this awkward new hybrid. Funding is tight and it’s only going to get tighter. Sustainability is the biggest issue facing digital humanities centers across the country. Of course, digital humanities centers are often separate from standard academic units. I don’t know whether this auxiliary position will help or hurt them. In either case, it’s not unreasonable to assume that some of the digital humanities centers around today will ultimately disappear.

The death of the digital humanities center. It’s not inevitable everywhere, but it will happen somewhere.

Let me be clear: I am a true believer in the value of the digital humanities center, a space where faculty, students, and researchers can collaborate and design across disciplines, across technologies, across communities. I cut my teeth in the nineties working on the American Studies Crossroads Project, one of the few groups at the time seriously looking at how digital tools were transforming research and learning. I’m grateful to have friends in several of the most impressive digital humanities outfits on the East Coast. I have the feeling that the Center for History and New Media will always be around. The Maryland Institute for Technology in the Humanities is not going anywhere. The Scholars’ Lab will continue to be a gem at the University of Virginia.

There will always be some digital humanities center. But not for most of us.

Most of us working in the digital humanities will never have the opportunity to collaborate with a dedicated center or institute. We’ll never have the chance to work with programmers who speak the language of the humanities as well as Perl, Python, or PHP. We’ll never be able to turn to colleagues who routinely navigate grant applications and budget deadlines, who are paid to know about the latest digital tools and trends—but who’d know about them and share their knowledge even if they weren’t paid a dime. We’ll never have an institutional advocate on campus who can speak with a single voice to administrators, to students, to donors, to publishers, to communities about the value of the digital humanities.

There will always be digital humanities centers. But not for us.

Fortunately, even digital humanities centers themselves realize this—as do funders such as the NEH’s Office of Digital Humanities and the Mellon Foundation—and outreach has become a major mission for the digital humanities.

And fortunately too, a digital humanities center is not the digital humanities. The digital humanities—or I should say, digital humanists—are much more diverse, much more dispersed, and stunningly resourceful to boot.

So if you’re interested in the transformative power of technology upon your teaching and research, don’t sit around waiting for a digital humanities center to pop up on your campus or make you a principal investigator on a grant.

Act as if there’s no such thing as a digital humanities center.

Instead, create your own network of possible collaborators. Don’t hope for or rely upon institutional support or recognition. To survive and thrive, digital humanists must be agile, mobile, insurgent. Decentralized and nonhierarchical.

Stop forming committees and begin creating coalitions. Seek affinities over affiliations, networks over institutes.

Centers, no. Camps, yes.

The Archive or the Trace: Cultural Permanence and the Fugitive Text

We in the humanities are in love with the archive.

My readers already know that I am obsessed with archiving otherwise ephemeral social media. I’ve got multiple redundant systems for preserving my Twitter activity. I rely on the Firefox plugins Scrapbook and Zotero to capture any online document that poses even the slightest flight risk. I routinely back up emails that date back to 1996. Even my recent grumbles about the Modern Language Association’s new citation guidelines were born of an almost frantic need to preserve our digital cultural heritage.
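
A redundant system of this sort can be as simple as a script that appends anything new to a dated, append-only file. Here is a minimal sketch of that idea in Python; fetch_new_items is a stand-in for whatever feed, export, or backup one actually pulls from, not a real API call.

    # A bare-bones archiving loop: append new items to a local JSON-lines file.
    # fetch_new_items() is a placeholder for your own export mechanism
    # (a feed, a data download, a backup file), not a real service API.
    import json
    from datetime import date
    from pathlib import Path

    def fetch_new_items():
        # Placeholder data standing in for whatever source you archive from.
        return [{"id": "12345", "text": "an otherwise ephemeral post"}]

    archive = Path(f"twitter-archive-{date.today():%Y-%m}.jsonl")
    seen = set()
    if archive.exists():
        with archive.open() as f:
            seen = {json.loads(line)["id"] for line in f if line.strip()}

    with archive.open("a") as f:
        for item in fetch_new_items():
            if item["id"] not in seen:
                f.write(json.dumps(item) + "\n")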

I don’t think I am alone in this will to archive, what Jacques Derrida called archive fever. Derrida spoke about the “compulsive, repetitive, and nostalgic desire for the archive” way back in 1994, long before the question of digital impermanence became an issue for historians and librarians. And the issue is more pressing than ever.

Consider the case of a Hari Kunzru short story that Paul Benzon described in an MLA presentation last month. As Julie Meloni recently recounted, Kunzru had published “A Story Full of Fail” online. Then, deciding instead to find a print home for his piece, Kunzru removed the story from the web. Julie notes that there’s no Wayback Machine version of it, nor is the document in a Google cache. The story has disappeared from the digital world. It’s gone.

Yet I imagine some Kunzru fans are clamoring for the story, and might actually be upset that the rightful copyright holder (i.e. Kunzru) has removed it from their easy digital grasp. The web has trained us to want everything and to want it now. We have been conditioned to expect that if we can’t possess the legitimate object itself, we’ll be able to torrent it, download it, or stream it through any number of digital channels.

We are archivists, all of us.

But must everything be permanent?

Must we insist that every cultural object be subjected to the archive?

What about the fine art of disappearance? Whether for aesthetic reasons, marketing tactics, or sheer perversity, there’s a long history of producing cultural artifacts that consume themselves, fade into ruin, or simply disappear. It might be a limited-issue LP, the short run of a Fiestaware color, or a collectible Cabbage Patch Kid. And these are just examples from mass culture.

Must everything be permanent?

In the literary world, perhaps the best-known example is William Gibson’s Agrippa (A Book of the Dead), a 300-line poem published on a 3.5″ floppy in 1992 that was supposed to erase itself after one use. Of course, as Matthew Kirschenbaum has masterfully demonstrated, Gibson’s attempt at textual disintegration failed for a number of reasons. (Indeed, Matt’s research has convinced me that Kunzru’s story hasn’t entirely disappeared from the digital world either. It’s somewhere, on some backup tape or hard drive or series of screen shots, and it would take only a few clicks for it to escape back into everyday circulation.)

I have written before about the fugitive as the dominant symbolic figure of the 21st century, precisely because fugitivity has become all but impossible. The same is now true of texts. Fugitive texts, or rather, the fantasy of fugitive texts, will become a dominant trope in literature, film, art, and videogames, precisely because every text is archived permanently somewhere, and usually, in many places.

We already see fantasies of fugitive texts everywhere, both high and low: House of Leaves, The Raw Shark Texts, Cathy’s Book, The Da Vinci Code, and so on. But what we need are not just stories about fugitive texts. We need actual texts that are actual fugitives, fading away before our eyes, slipping away in the dark, texts we apprehend only in glimpses and glances. Texts that remind us what it means to disappear completely forever.

The fugitive text stands in defiant opposition to the archive. The fugitive text exists only as (forgive me as I invoke Derrida once more) a trace, a lingering presence that confirms the absence of a presence. I am reminded of the novelist Bill Gray’s lumbering manuscript in DeLillo’s Mao II. Perpetually under revision, an object sought after by his editor and readers alike, Gray’s unfinished novel is a fugitive text.

Mao II is an extended meditation on textual availability and figurative and literal disappearance, but it’s in DeLillo’s handwritten notes for the novel — found, ironically enough, in the Don DeLillo Papers archive at the University of Texas at Austin — that DeLillo most succinctly expresses what’s at stake:

Reclusive Writer: In the world of glut + bloat, the withheld work of art becomes the only meaningful object. (Spiral Notebook, Don DeLillo Papers, Box 38, Folder 1)

Bill Gray’s ultimate fate suggests that DeLillo himself questions Gray’s strategy of withdrawal and withholding. Yet, DeLillo nonetheless sees value in a work of art that challenges the always-available logic of the marketplace — and of that place where cultural objects go, if not to die, then at least to exist on a kind of extended cultural life support, the archive.

Years ago Bruce Sterling began the Dead Media Project, and I now propose a similar effort, the Fugitive Text Collective. Unlike the Dead Media Project, however, we don’t seek to capture fleeting texts before they disappear. This is not a project of preservation. There shall be no archives allowed. The collective’s members are observers, nothing more, logging sightings of impermanent texts. We record the metadata but not the data. We celebrate the trace, and bid farewell to texts that by accident or design fade, decay, or simply cease to be.
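
To make the metadata-but-not-the-data rule concrete, a sighting might record nothing more than what was glimpsed, where, and when, never a copy of the text itself. The record below is a hypothetical sketch, with the Kunzru story as its example, not a proposed standard.

    # A hypothetical "sighting" record for the Fugitive Text Collective:
    # metadata only, never the text itself.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class Sighting:
        title: str
        author: str
        last_seen_at: str  # where the text was spotted before it fled
        observed_on: str   # when the sighting was logged
        note: str          # a trace, not a transcription

    sighting = Sighting(
        title="A Story Full of Fail",
        author="Hari Kunzru",
        last_seen_at="the open web (URL now dead)",
        observed_on=datetime.now(timezone.utc).isoformat(),
        note="Removed by the author pending print publication.",
    )
    print(json.dumps(asdict(sighting), indent=2))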

Let the archive be loved. But fugitive texts will become legend.

The Modern Language Association Wishes Away Digital Différance

This is the first academic semester in which students have been using the revised 7th edition of the MLA Handbook (you know, that painfully organized book that prescribes the proper citation method for material like “an article in a microform collection of articles”).

From the moment I got my copy of the handbook in May 2009, I have been skeptical of some of the “features” of the new guidelines, and I began voicing my concerns on Twitter.

But not only does the MLA seem unprepared for the new texts we in the humanities study, the association actually took a step backward when it comes to locating, citing, and cataloging digital resources. According to the new rules, URLs are gone, no longer “needed” in citations. How could one not see that these new guidelines were remarkably misguided?

To the many incredulous readers on Twitter who were likewise confused by the MLA’s insistence that URLs no longer matter, I responded, “I guess they think Google is a fine replacement.” Sure, e-journal articles can have cumbersome web addresses, three lines long, but as I argued at the time, “If there’s a persistent URL, cite it.”

Now, after reading a batch of undergraduate final papers that used the MLA’s new citation guidelines, I have to say that I hate them even more than I thought I would. Although “hate” isn’t quite the right word, because that verb implies a subjective reaction. In truth, objectively speaking, the new MLA system fails.

The MLA apparently believes that all texts are the same

In a strange move for a group of people who devote their lives to studying the unique properties of printed words and images, the Modern Language Association apparently believes that all texts are the same. That it doesn’t matter what digital archive or website a specific document came from. All that is necessary is to declare “Web” in the citation, and everyone will know exactly which version of which document you’re talking about, not to mention any relevant paratextual material surrounding the document, such as banner ads, comments, pingbacks, and so on.

The MLA turns out to be extremely shortsighted in its efforts to think “digitally.” The outwardly same document (same title, same author) may in fact be very different depending upon its source. Anyone working with text archives (think back to the days of FAQs on Gopher) knows that there can be multiple variations of the same “document.” (And I won’t even mention old-timey archives like the Short Title Catalogue, where the same 15th-century title may in fact reflect several different versions.)
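
The point is easy to demonstrate. Two copies of the “same” document pulled from different archives, differing only in whitespace or line endings, are not the same object, and a citation that records only “Web” cannot say which one was consulted. A toy comparison in Python, using an arbitrary sentence, might look like this.

    # Two archives' copies of "the same" document, differing only in small
    # whitespace and line-ending conventions, hash to different values.
    # The sample sentence is arbitrary.
    import hashlib

    copy_from_archive_a = "It was the best of times, it was the worst of times.\n"
    copy_from_archive_b = "It was the best of times,  it was the worst of times.\r\n"

    for name, text in [("archive A", copy_from_archive_a),
                       ("archive B", copy_from_archive_b)]:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        print(f"{name}: {digest[:16]}...")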

The MLA’s new guidelines efface these nuances, suggesting that the contexts of an archive are irrelevant. It’s the Ghost of New Criticism, a war of words upon history, “simplification” in the name of historiographic homicide.

Digital Humanities Sessions at the 2009 MLA

Below are all of the upcoming 2009 MLA sessions related to new media and the digital humanities. Am I missing something? Let me know in the comments and I’ll add it to the list. You may also be interested in following the Digital Humanities/MLA list on Twitter. (And if you are on Twitter and going to the MLA, let Bethany Nowviskie know, and she’ll add you to the list.)

MONDAY, DECEMBER 28

116. Play the Movie: Computer Games and the Cinematic Turn

8:30–9:45 a.m., 411–412, Philadelphia Marriott

Presiding: Anna Everett, Univ. of California, Santa Barbara; Homay King, Bryn Mawr Coll.

  1. “The Flaneur and the Space Marine: Temporal Distention in First-Person Shooters,” Jeff Rush, Temple Univ., Philadelphia
  2. “Viral Play: Internet Humor, Viral Marketing, and the Ubiquitous Gaming of The Dark Knight,” Ethan Tussey, Univ. of California, Santa Barbara
  3. “Playing the Cut Scene: Agency and Vision in Shadow of the Colossus,” Mark L. Sample, George Mason Univ.
  4. “Suture and Play: Machinima as Critical Intimacy for Game Studies,” Aubrey Anable, Hamilton Coll.

120. Virtual Worlds and Pedagogy

8:30–9:45 a.m., Liberty Ballroom Salon C, Philadelphia Marriott

Presiding: Gloria B. Clark, Penn State Univ., Harrisburg

  1. “Rhetorical Peaks,” Matt King, Univ. of Texas, Austin
  2. “Virtual Theater History: Teaching with Theatron,” Mark Childs, Warwick Univ.; Katherine A. Rowe, Bryn Mawr Coll.
  3. “Realms of Possibility: Understanding the Role of Multiuser Virtual Environments in Foreign Language Curricula,” Julie M. Sykes, Univ. of New Mexico
  4. “Information versus Content: Second Life in the Literature Classroom,” Bola C. King, Univ. of California, Santa Barbara
  5. “Literature Alive,” Beth Ritter-Guth, Hotchkiss School
  6. “Virtual World Building as Collaborative Knowledge Production: The Online Crystal Palace,” Victoria E. Szabo, Duke Univ.
  7. “Teaching in Virtual Worlds: Re-Creating The House of Seven Gables in Second Life,” Mary McAleer Balkun, Seton Hall Univ.
  8. “3-D Interactive Multimodal Literacy and Avatar Chat in a College Writing Class,” Jerome Bump, Univ. of Texas, Austin

For abstracts and possibly video clips, visit www.fabtimes.net/virtpedagog/.

141. Locating the Literary in Digital Media

8:30–9:45 a.m., Liberty Ballroom Salon A, Philadelphia Marriott

  1. “‘A Breach, [and] an Expansion’: The Humanities and Digital Media,” Dene M. Grigar, Washington State Univ., Vancouver
  2. “Locating the Literary in New Media: From Key Words and Metatags to Network Recognition and Institutional Accreditation,” Joseph Paul Tabbi, Univ. of Illinois, Chicago
  3. “Digital, Banal, Residual, Experimental,” Paul Benzon, Rutgers Univ., New Brunswick
  4. “Genre Discovery: Literature and Shared Data Exploration,” Jeremy Douglass, Univ. of California, San Diego

170. Value Added: The Shape of the E-Journal

10:15–11:30 a.m., Liberty Ballroom Salon C, Philadelphia Marriott

Speakers: Cheryl E. Ball, Kairos, Keith Dorwick, Technoculture, Andrew Fitch and Jon Cotner, Interval(le)s, Kevin Moberly, Technoculture, Julianne Newmark, Xchanges, Eric Dean Rasmussen and Joseph Paul Tabbi, Electronic Book Review

The journals represent a wide range of audiences and technologies. The speakers will display the work that can be done with electronic publications.

For summaries, visit www.ucs.louisiana.edu/~kxd4350/ejournal.

212. Language Theory and New Communications Technologies

12:00 noon–1:15 p.m., Jefferson, Loews

Presiding: David Herman, Ohio State Univ., Columbus

  1. “Learning around Place: Language Acquisition and Location-Based Technologies,” Armanda Lewis, New York Univ.
  2. “Constructing the Digital I: Subjectivity in New Media Composing,” Jill Belli, Graduate Center, City Univ. of New York
  3. “French and Spanish Second-Person Pronoun Use in Computer-Mediated Communication,” Lee B. Abraham, Villanova Univ.; Lawrence Williams, Univ. of North Texas

245. Old Media and Digital Culture

1:45–3:00 p.m., Washington C, Loews

Presiding: Reinaldo Carlos Laddaga, Univ. of Pennsylvania

  1. “Paper: The Twenty-First-Century Novel,” Jessica Pressman, Yale Univ.
  2. “First Publish, Then Write,” Craig Epplin, Reed Coll.
  3. “Digital Literature and the Brazilian Historic Avant-Garde: What Is Old in the New?” Eduardo Ledesma, Harvard Univ.

For abstracts, write to craig.epplin@gmail.com.

254. Web 2.0: What Every Student Knows That You Might Not

1:45–3:00 p.m., Liberty Ballroom Salon C, Philadelphia Marriott

Presiding: Laura C. Mandell, Miami Univ., Oxford

Speakers: Carolyn Guertin, Univ. of Texas, Arlington; Laura C. Mandell; William Aufderheide Thompson, Western Illinois Univ.

For workshop materials, visit www.mla.org/web20.

264. Media Studies and the Digital Scholarly Present

1:45–3:00 p.m., 411–412, Philadelphia Marriott

Presiding: Kathleen Fitzpatrick, Pomona Coll.

  1. “Blogging, Scholarship, and the Networked Public Sphere,” Chuck Tryon, Fayetteville State Univ.
  2. “The Decline of the Author, the Rise of the Janitor,” David Parry, Univ. of Texas, Dallas
  3. “Remixing Dada Poetry in MySpace: An Electronic Edition of Poetry by the Baronness Elsa von Freytag-Loringhoven in N-Dimensional Space,” Tanya Clement, Univ. of Maryland, College Park
  4. “Right Now: Media Studies Scholarship and the Quantitative Turn,” Jeremy Douglass, Univ. of California, San Diego

For abstracts, links, and related material, visit http://mediacommons.futureofthebook.org/mla2009 after 1 Dec.

265. Getting Funded in the Humanities: An NEH Workshop

1:45–3:45 p.m., Liberty Ballroom Salon A, Philadelphia Marriott

Presiding: John David Cox, National Endowment for the Humanities; Jason C. Rhody, National Endowment for the Humanities

This workshop will highlight recent awards and outline current funding opportunities. In addition to emphasizing grant programs that support individual and collaborative research and education, this workshop will include information on new developments such as the NEH’s Office of Digital Humanities. A question-and-answer period will follow.

268. Lives in New Media

3:30–4:45 p.m., 305–306, Philadelphia Marriott

Presiding: William Craig Howes, Univ. of Hawai‘i, Mānoa

  1. “Blogging the Pain: Disease and Grief on the Internet,” Bärbel Höttges, Univ. of Mainz
  2. “New Media and the Creation of Autistic Identities,” Ann Jurecic, Rutgers Univ., New Brunswick
  3. “‘25 Random Things about Me’: Facebook and the Art of the Autobiographical List,” Theresa A. Kulbaga, Miami Univ., Hamilton

322. Looking for Whitman: A Cross-Campus Experiment in Digital Pedagogy

7:15–8:30 p.m., 410, Philadelphia Marriott

Presiding: Matthew K. Gold, New York City Coll. of Tech., City Univ. of New York

Speakers: D. Brady Earnhart, Univ. of Mary Washington; Matthew K. Gold; James Groom, Univ. of Mary Washington; Tyler Brent Hoffman, Rutgers Univ., Camden; Karen Karbiener, New York Univ.; Mara Noelle Scanlon, Univ. of Mary Washington; Carol J. Singley, Rutgers Univ., Camden

Visit the project Web site, http://lookingforwhitman.org.

338. Beyond the Author Principle

7:15–8:30 p.m., Liberty Ballroom Salon C, Philadelphia Marriott

Presiding: Bruce R. Smith, Univ. of Southern California

  1. “English Broadside Ballad Archive: A Digital Home for the Homeless Broadside Ballad,” Patricia Fumerton, Univ. of California, Santa Barbara; Carl Stahmer, Univ. of Maryland, College Park
  2. “The Total (Digital) Archive: Collecting Knowledge in Online Environments,” Katherine D. Harris, San José State Univ.
  3. “Displacing ‘Shakespeare’ in the World Shakespeare Encyclopedia,” Katherine A. Rowe, Bryn Mawr Coll.

TUESDAY, DECEMBER 29

361. Making Research: Limits and Barriers in the Age of Digital Reproduction

8:30–9:45 a.m., 411–412, Philadelphia Marriott

Presiding: Robin G. Schulze, Penn State Univ., University Park

  1. “The History and Limitations of Digitalization,” William Baker, Northern Illinois Univ.
  2. “Moving Past the Hype of Hypertext: Limits of Scholarly Digital Ventures,” Elizabeth Vincelette, Old Dominion Univ.
  3. “Transforming the Study of Australian Literature through a Collaborative eResearch Environment,” Kerry Kilner, Univ. of Queensland
  4. “A Proposed Model for Peer Review of Online Publications,” Jan Pridmore, Boston Univ.

413. Has Comp Moved Away from the Humanities? What’s Lost? What’s Gained?

10:15–11:30 a.m., 411–412, Philadelphia Marriott

Presiding: Krista L. Ratcliffe, Marquette Univ.

  1. “Turning Composition toward Sovereignty,” John L. Schilb, Indiana Univ., Bloomington
  2. “Composition and the Preservation of Rhetorical Traditions in a Global Context,” Arabella Lyon, Univ. at Buffalo, State Univ. of New York
  3. “What Composition Can Learn from the Digital Humanities,” Olin Bjork, Georgia Inst. of Tech.; John Pedro Schwartz, American Univ. of Beirut

For abstracts, visit www.marquette.edu/english/ratcliffe.shtml.

420. Digital Scholarship and African American Traditions

10:15–11:30 a.m., 307, Philadelphia Marriott

Speaker: Anna Everett, Univ. of California, Santa Barbara

For abstracts, visit www.ach.org/mla/mla09/ after 1 Dec.

490. Links and Kinks in the Chain: Collaboration in the Digital Humanities

1:45–3:00 p.m., 410, Philadelphia Marriott

Presiding: Tanya Clement, Univ. of Maryland, College Park

Speakers: Jason B. Jones, Central Connecticut State Univ.; Laura C. Mandell, Miami Univ., Oxford; Bethany Nowviskie, Univ. of Virginia; Timothy B. Powell, Univ. of Pennsylvania; Jason C. Rhody, National Endowment for the Humanities

For abstracts, visit http://lenz.unl.edu/mla09 after 1 Dec.

512. Journal Ranking, Reviewing, and Promotion in the Age of New Media

3:30–4:45 p.m., Liberty Ballroom Salon C, Philadelphia Marriott

Presiding: Meta DuEwa Jones, Univ. of Texas, Austin

Speakers: Daniel Brewer, L’Esprit Créateur; Mária Minich Brewer, L’Esprit Créateur; Martha J. Cutter, MELUS; Mike King, New York Review of Books; Joycelyn K. Moody, African American Review; Bonnie Wheeler, Council of Editors of Learned Journals

560. (Re)Framing Transmedial Narratives

7:15–8:30 p.m., Congress A, Loews

Presiding: Marc Ruppel, Univ. of Maryland, College Park

  1. “From Narrative, Game, and Media Studies to Transmodiology,” Christy Dena, Univ. of Sydney
  2. “To See a Universe in the Spaces In Between: Migratory Cues and New Narrative Ontologies,” Marc Ruppel
  3. “Works as Sites of Struggle: Negotiating Narrative in Cross-Media Artifacts,” Burcu S. Bakioglu, Indiana Univ., Bloomington

For abstracts, visit www.glue.umd.edu/~mruppel/Ruppel_MLA2009_SpecialPanelAbstracts.docx.

575. Gaining a Public Voice: Alternative Genres of Publication for Graduate Students

7:15–8:30 p.m., 405, Philadelphia Marriott

Presiding: Jens Kugele, Georgetown Univ.

  1. “Animating Audiences: Digital Publication Projects and Their Publics,” Jentery Sayers, Univ. of Washington, Seattle
  2. “Blogging Beowulf,” Mary Kate Hurley, Columbia Univ.
  3. “Hope Is Not a Husk but Persists in and as Us: A Proposal for Graduate Collaborative Publication,” Emily Carr, Univ. of Calgary
  4. “The Alternative as Mainstream: Building Bridges,” Katherine Marie Arens, Univ. of Texas, Austin

WEDNESDAY, DECEMBER 30

625. Making Research: Collaboration and Change in the Age of Digital Reproduction

8:30–9:45 a.m., Grand Ballroom Salon L, Philadelphia Marriott

Presiding: Maura Carey Ives, Texas A&M Univ., College Station

  1. “What Is Digital Scholarship? The Example of NINES,” Andrew M. Stauffer, Univ. of Virginia
  2. “Critical Text Mining; or, Reading Differently,” Matthew Wilkens, Rice Univ.
  3. “‘The Apex of Hipster XML GeekDOM’: Using a TEI-Encoded Dylan to Help Understand the Scope of an Evolving Community in Digital Literary Studies,” Lynne Siemens, Univ. of Victoria; Raymond G. Siemens, Univ. of Victoria

632. Quotation, Sampling, and Appropriation in Audiovisual Production

8:30–9:45 a.m., 406, Philadelphia Marriott

Presiding: Nora M. Alter, Univ. of Florida; Paul D. Young, Vanderbilt Univ.

  1. “‘We the People’: Imagining Communities in Dave Chappelle’s Block Party,” Badi Sahar Ahad, Loyola Univ., Chicago
  2. “Pinning Down the Pinup: The Revival of Vintage Sexuality in Film, Television, and New Media,” Mabel Rosenheck, Univ. of Texas, Austin
  3. “Playful Quotations,” Lin Zou, Indiana Univ., Bloomington
  4. “For the Record: The DJ Is a Critic, ‘Constructing a Sort of Argument,’” Mark McCutcheon, Athabasca Univ.

643. New Models of Authorship

8:30–9:45 a.m., Grand Ballroom Salon K, Philadelphia Marriott

Presiding: Carolyn Guertin, Univ. of Texas, Arlington

  1. “Authors for Hire: Branded Entertainment’s Challenges to Legal Doctrine and Literary Theory,” Zahr Said Stauffer, Univ. of Virginia
  2. “The Digital Archive in Motion: Data Mining as Authorship,” Paul Benzon, Temple Univ., Philadelphia
  3. “Scandalous Searches: Rhizomatic Authorship in America’s Online Unintentional Narratives,” Andrew Ferguson, Univ. of Tulsa

For abstracts, visit https://mavspace.uta.edu/guertin/mla-models-of-authorship.html.

655. Today’s Students, Today’s Teachers: Technology

10:15–11:30 a.m., 410, Philadelphia Marriott

Presiding: Christine Henseler, Union Coll., NY

  1. “Ning: Teaching Writing to the Net Generation,” Nathalie Ettzevoglou, Univ. of Connecticut, Storrs; Jessica McBride, Univ. of Connecticut, Storrs
  2. “Online Tutoring from the Ground Up,” William L. Magrino, Jr., Indiana Univ. of Pennsylvania; Peter B. Sorrell, Rutgers Univ., New Brunswick
  3. “Using Facebook for Online Discussion in the Literature Classroom,” Emily Meyers, Univ. of Oregon

676. The Impact of Obama’s Rhetorical Strategies

12:00 noon–1:15 p.m., Grand Ballroom Salon K, Philadelphia Marriott

Presiding: Linda Adler-Kassner, Eastern Michigan Univ.

  1. “Keeping Pace with Obama’s Rhetoric: Digital Ecologies in the Writing Program and the White House,” Shawn Casey, Ohio State Univ., Columbus
  2. “Classroom 2.0 Connecting with the Digital Generation: Pedagogical Applications of Barack Obama’s Rhetorical Use of Twitter,” Jeff Swift, Brigham Young Univ., UT
  3. “Obama Online: Using the White House as an Exemplar for Writing Instruction,” Elizabeth Mathews Losh, Univ. of California, Irvine
  4. “Made Not Only in Words: The Politics and Rhetoric of Barack Obama’s New Media Presidency as a Moment for Uniting Civic Rhetoric and Civic Engagement,” Michael X. Delli Carpini, Univ. of Pennsylvania; Dominic DelliCarpini, York Coll. of Pennsylvania

Respondent: Linda Adler-Kassner

703. Teaching Literature by Integrating Technology

12:00 noon–1:15 p.m., Commonwealth Hall A1, Loews

Presiding: Peter Höyng, Emory Univ.

  1. “Tatort Technology: Teaching German Crime Novels,” Christina Frei, Univ. of Pennsylvania
  2. “Old Meets New: Teaching Fairy Tales by Using Technology,” Angelika N. Kraemer, Michigan State Univ.
  3. “The Role of E-Learning in Excellence Initiatives: Ideal Scenarios and Practical Limitations,” David James Prickett, Humboldt-Universität

Respondent: Caroline Schaumann, Emory Univ.

706. Digital Africana Studies: Creating Community and Bridging the Gap between Africana Studies and Other Disciplines

12:00 noon–1:15 p.m., Adams, Loews

Presiding: Zita Nunes, Univ. of Maryland, College Park

Speakers: Kalia Brooks, Inst. for Doctoral Studies in the Visual Arts; Bryan Carter, Univ. of Central Missouri; Kara Keeling, Univ. of Southern California

For abstracts, visit www.ach.org/mla/mla09/ after 1 Dec.

710. Frontiers in Business Writing Pedagogy: New Media and Literature Strategies

12:00 noon–1:15 p.m., 308, Philadelphia Marriott

Presiding: James K. Archibald, McGill Univ.

  1. “New Media and Business Writing,” Harold Henry Hellwig, Idaho State Univ.
  2. “Bringing Second Life to Business Writing Pedagogy,” R. Dirk Remley, Kent State Univ., Kent
  3. “The Literature of Business: An Approach to Teaching Literature-Based Writing-Intensive Courses,” Scott J. Warnock, Drexel Univ.

Respondent: Mahli Xuan Mechenbier, Kent State Univ., Kent

For abstracts, write to kwills@iupuc.edu.

Unthinking Television Screens

Last spring I participated in an interdisciplinary symposium called Unthinking Television: Visual Culture[s] Beyond the Console. I was an invited guest on a roundtable devoted to the vague idea of “Screen Life.” I wasn’t sure what that phrase meant at the time, and I still don’t know. But I thought I’d go ahead and share what I saw then — and still see now — as four trends in what we might call the infrastructure of screens.

Moving from obvious to less obvious, these four emergent structural changes are:

  1. Proliferating screens
  2. Bigger is better and so is smaller
  3. Screens aren’t just to look at
  4. Our screens are watching us

And a few more details about each development:

1. Proliferating screens

I can watch episodes of The Office on my PDA, my cell phone, my mp3 player, my laptop, and even, on occasion, my television.

2. Bigger is better and so is smaller

We encounter a much greater range of screen sizes on a daily basis. My high-definition video camera has a 2″ screen that I can hook up directly via HDMI cable to my 36″ flat screen, and there are screen sizes everywhere in between and beyond.

3. Screens aren’t just to look at

We now touch our screens. Tactile response will soon be just as important as video resolution.

4. Our screens are watching us

Distribution networks like TiVo, Dish, and Comcast have long had unobtrusive ways to track what we’re watching, or at least what our televisions are tuned to. But now screens can actually look at us. I’m referring to screens that are aware of us, of our movements. The most obvious example is the Wii and its use of IR emitters in its sensor bar to triangulate the position of the Wiimote, and hence the player. GE’s website has been showcasing an interactive hologram that uses a webcam. In both cases, the screen sees us. I think this is potentially the biggest shift in what it means to have a “screen life.” In this trend and the previous one concerning the new haptic nature of screens, we are completing a circuit that runs between human and machine, machine and human.
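
To give a sense of how a screen’s companion sensor “sees” us: the Wiimote’s camera reports the pixel positions of the sensor bar’s two IR clusters, and simple geometry turns their apparent separation into a rough estimate of the player’s distance. The constants in the sketch below (sensor-bar width, camera resolution, field of view) are figures I am assuming for illustration, not official specifications.

    # A back-of-the-envelope distance estimate from two tracked IR points.
    # All constants are rough assumptions for illustration, not official specs.
    import math

    SENSOR_BAR_WIDTH_M = 0.20   # assumed spacing between the two IR clusters
    IMAGE_WIDTH_PX = 1024       # assumed horizontal resolution of the IR camera
    HORIZONTAL_FOV_DEG = 40.0   # assumed horizontal field of view

    def estimate_distance(left_px, right_px):
        """Estimate camera-to-sensor-bar distance from the two blobs' x positions."""
        pixel_separation = abs(right_px - left_px)
        angular_separation = math.radians(
            pixel_separation * HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX)
        # Assume the bar is centered and perpendicular to the line of sight.
        return (SENSOR_BAR_WIDTH_M / 2) / math.tan(angular_separation / 2)

    # Blobs about 200 pixels apart put the player roughly a meter and a half away.
    print(round(estimate_distance(412, 612), 2))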