The Uncanny Valley of Action in Videogames

The agility afforded by blogging means nothing if you sit on your ideas for months or let half-written posts rot in your draft folder. That’s a lesson I learned today when I discovered a host of recent references to Masahiro Mori’s famous graph of the uncanny valley, mostly in reference to zombies (see posts by Nathan Gale, Ian Bogost, and Fabio Cunctator). I’d been toying with some ideas about the uncanny valley since April and I even had a bit written in July, but I set it aside while I went on to other things. And then boom! — the uncanny valley is everywhere.

I had planned on delving into Freud’s lovely essay on the unheimlich, or the uncanny. I’ll save that for a later date, only mentioning that for Freud, the uncanny was that which “ought to have remained secret and hidden but has come to light” — in other words, the return of the repressed. Right now, though, I’ll get straight to my own small contribution to the discussion on the uncanny.

Game designers and players talk about the uncanny valley a lot, and thanks to a wooden CG’ed Tom Hanks in The Polar Express and TV shows like 30 Rock, the concept is becoming familiar even to non-gamers. The distilled version of the theory goes like this: the more images or objects resemble humans, the more familiar and comfortable we are with those images or objects. That is, up to a certain point. There is a moment, just before full human semblance is achieved, when the image or object actually becomes unsettling — this is the valley of the uncanny:

Mori - Uncanny Valley

Mori derived this theory from his observations about the way some robots (the ones that seemed almost human but not quite) would freak people out while others (the cute ones) would not. What I like about Mori’s graph is that he distinguishes between moving and still objects. Moving objects can seem more lifelike, but by the same token, they plunge deeper into the uncanny valley than a still object, a difference shown on the graph by the space between corpse and zombie. This difference between moving and still objects is something that’s left out of the popular conception of the uncanny valley, where it is usually applied to visual representation, i.e. a kind of cinematic or photorealism. But what about realism in movement? Realism in actions?

So back in April I came up with a separate graph (larger image), intended to help us think through the way actions enacted in a videogame can be uncanny.

The Uncanny Valley of Action

On the near side of the graph I used SimCity and The Sims as illustrative games in which the actions of a player bear some resemblance to real-world actions, though they are flattened, or to use a more evaluative term, impoverished (in comparison to planning a city in real life or going on a date). But as the simulations move from representation to enactment, the activities become more lifelike. Bounding across the valley, we come to “playing house” as children might do, a simulated activity to be sure, but one that is more faithful to the real-world domestic household than a videogame. (And why is it more faithful? That’s a discussion for a later day, but I’d argue it has to do with the objects involved, the real-world material things the children play with.)

The question raised by the graph, then, is what kind of simulated actions plunge us into the uncanny valley? Though it’s not a videogame (yet) and it breaks the house metaphor (the German word for uncanny, unheimlich, literally means un-homelike), waterboarding fits the bill as a kind of uncanny simulated activity — so close to real-world drowning but, ah, not quite. Back to games, using your Wii Remote to saw off the head of a deranged murderer in Manhunt 2 would likely qualify as an uncanny action to many people, gamers and non-gamers alike. But that’s an obvious example. What else might go down in this valley? I’m looking for games in which the bodily motion of the gamer engaged in a simulated activity asymptotically approaches the real-world motions of the source activity.


And an even more critical question to ask: should designers make games that take us into that valley? Now that we have Wii Remotes and Wii Balance Boards and the Wii MotionPlus, games no longer have to rely on the haptic density of button-mashing. We can and do play games that involve our whole bodies, games that traverse the left side of the graph. My answer is absolutely yes, games should take us into the uncanny valley of action. Whereas the uncanny valley in visual representation is something designers strive to avoid (unless they’re programming zombies, in which case you want uncanniness, which in my mind accounts for the popularity of zombies in videogames and CG-based movies — zombies are great matches for the current level of technology!), the uncanny valley is something to strive toward when it comes to motion and action. Discomfiting bodily actions required by future games might have a Brechtian effect on the gamer, exposing what Ian Bogost calls in Unit Operations the “simulation gap” — the “gap between the rule-based representation of a source system and a user’s subjectivity” (107). And the revelation of this gap would not simply be an intellectual realization, but a felt bodily experience, flooding through the entire presence of a person. This is something books can’t do, nor movies. Until we have holodecks, uncanny games may be the best way to understand the physical lives of other people.

On Hacking and Unpacking My (Zotero) Library

Many of my readers in the humanities already know about Zotero, the free open-source citation manager that works within Firefox and scares the hell out of Endnote’s makers. If you are a student or professor and haven’t tried Zotero, then you are missing out on an essential tool. I use it daily, both for my research and in my teaching. [Full disclosure: I am not an entirely impartial evangelist for Zotero, as its developers are colleagues at George Mason University, in the incomparable Center for History and New Media.]

The latest version of Zotero allows you to “publish” your library, so that anybody can see your collection of sources (and your notes about those sources, if you choose). In my case, I’ve not only published my library on the zotero.org site, I’ve updated the main sidebar on this very blog with a news feed of my “Recently Zoteroed” books and articles. As I gather and annotate sources for my teaching and research, the newest additions will always appear here, with links back to the full bibliographic information in the online version of my library.

How did I do this?

Why did I do this?

What follows is an attempt to answer these two questions. Before I address the how-to, though, I’ll explain the why-to: why I’m making the sources I use for my teaching and research public in the first place.

Sharing my Library in Theory

Like many scholars in the humanities (I imagine), I initially had qualms about sharing my library online — checking that little box in my Zotero privacy settings that would “make all items in your library viewable by anyone.” Emphasizing the gravity of the decision, zotero.org adds this warning: “Be very sure you want to do this.”

I do want to do this, I do, I do.

But why? We are accustomed, in the humanities, to being very secretive about our research. Oh sure, we go to conferences and share not-yet-published work. But these conference papers, even if they’re finished the morning of the presentation with penciled-in edits, are still addressed to an audience, meant to be shared. But imagine publishing your research notes and only the notes, shorn of context or rhetoric or (especially or) the sense of a conclusion we like to build into our papers. Imagine sharing only your Works Cited. Or, imagine sharing the loosest, most chaotic collection of sources, expanded way beyond the shallows of Works Cited, past the nebulous Works Consulted, deep into the fathomless Works Out There.

A paranoid academic (and most of us are paranoid) might worry that by sharing our pre-publication sources, whether primary or secondary, we are exposing our research before its time. My sense is that we like to keep our collection of sources private as long as possible, holding them close to our chest as if we were gamblers in the great poker game of academia. And in this game, our colleagues are not colleagues, but opponents sitting across the table from us, bluffing perhaps, or maybe holding a royal flush. Proprietary software like Endnote, which by default encloses research libraries within a walled garden, reinforces this notion that the engine of scholarship is competition rather than collaboration.

Or, to switch metaphors, sharing our sources in advance of the final product is like sharing the blueprints to a house we haven’t yet built — a house we may not even have the money to build, and meanwhile you just know there’s somebody out there, more clever or less scrupulous or just damn faster, who can take those blueprints and erect an edifice that should have been ours while we’re still at town hall getting zoning permits. We’ve all had that experience of reading a journal article or — damn it! — a mother effing blog in which the author tackles clearly, succinctly and without pause some deep research concern that we’ve been pondering for years, waiting for it to blossom into a Beautiful Idea in our writing before going public with it. And POOF! somebody else says it first, and says it better.

Keeping our sources private is the talisman against such deadly blows to our research, akin to some superstitious taboo against revealing first names. We academics are true believers in occult knowledge.

To put it in the starkest terms possible: before I published my library I was concerned that someone might take a look at my sources and somehow reverse engineer my research.

Are we in the humanities really that ridiculous and self-important? Let’s face it, I’m an English professor. It’s not as if I’m working on the Manhattan Project. My teaching and research adds only an infinitesimal increment to the storehouse of human knowledge. I don’t mean to belittle what scholars in the humanities do à la Mark Bauerlein. On the contrary, I think that what we do — striving to understand human experience in a chaotic world — is so crucial that we need to share what we learn, every step along the way. Only then do all the lonely hours we spend tracing sources, reading, and writing make sense.

Looked at prosaically, public Zotero libraries may be the equivalent of a give-a-penny, take-a-penny bowl at a local store. This convenience alone would be useful, but the creators of Zotero are much more inspired than that. They know that sharing a library is crowdsourcing a library. The more people who know what we’re researching before we’re done with the research, the better. Better for the researchers, better for the research. Collaboration begins at the source, literally. And as more researchers share their libraries, we’re going to achieve what the visionaries in the Center for History and New Media call the Zotero Commons, a collective, networked repository of shareable, annotatable material that will facilitate collaboration and the discovery of hidden connections across disciplines, fields, genres, and periods.

And that is why I’m sharing my library.

Sharing my Library in Practice

Now, how am I sharing it? I’ve taken what seems to be an unnecessarily complicated route in order to incorporate my library into my blog. There is an easy way to do what I’ve done: Zotero has native RSS feeds for users’ collections, and all you need is to subscribe to that feed using a widget on your blog. In my case I could have used the default WordPress RSS sidebar widget. But I didn’t. I wound up working with both Dapper and Yahoo Pipes, and here’s why.

I didn’t like how the RSS feed built into zotero.org included everything I added, including duplicate citations, snapshots that I later categorized as something else, and PDFs unattached to metadata (even if I retrieved that metadata later). In short, the default RSS stream looked messy in WordPress (but it looks great in Google Reader). [UPDATE: Patrick Murray-John’s awesome Zotero WordPress plugin solves these problems and makes the Pipes solution below unnecessary—though still cool.]

The online mash-up tool Yahoo Pipes is perfect for combining and filtering RSS feeds and that’s what I wanted to use. I can’t program my way out of a paper bag, but Pipes is simple enough that even I can use it. So why did I also use Dapper, another online tool that lets you do fun things with RSS feeds? Because Pipes for some reason would not accept the Zotero RSS feed as valid. I haven’t been able to confirm this, but I’m guessing it has something to do with Zotero’s API using secure HTTPS rather than HTTP. Or maybe it’s because the Zotero feed is Atom-formatted XML rather than standard RSS. Again, I’m not a programmer and I’m just fumbling my way around this hack. In any case I ran my Zotero feed through the Dapp Factory, which did accept it.

Next I dumped the Dapper feed into Yahoo Pipes, using several of Pipes’ operators to filter out the duplicates and attachment file names that were cluttering the RSS feed. Here’s a map of my Pipe.

Using Yahoo Pipes to filter a Zotero library
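For the programmers out there, the gist of what the Pipe does can be sketched in a few lines of Python. This is only an illustration of the filtering logic, not my actual setup (which, again, is all Dapper and Pipes, no code written by me); it assumes the feedparser library and a made-up feed address:

```python
# A minimal sketch of the Pipe's filtering logic, not the Pipe itself.
# Assumes the feedparser library; FEED_URL is a hypothetical placeholder.
import feedparser

FEED_URL = "https://example.org/my-zotero-feed"

def filtered_items(url):
    feed = feedparser.parse(url)
    seen_titles = set()
    items = []
    for entry in feed.entries:
        title = entry.get("title", "").strip()
        # Skip bare attachments (e.g., PDF file names) and untitled items
        if not title or title.lower().endswith(".pdf"):
            continue
        # Skip duplicate citations
        if title in seen_titles:
            continue
        seen_titles.add(title)
        items.append((title, entry.get("link", "")))
    return items

if __name__ == "__main__":
    for title, link in filtered_items(FEED_URL):
        print(title, link)
```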

It’s quite simple, and with some experimentation I may improve my hack (for example, I’m toying with Feedburner as a substitute for Dapper, which may preserve more of the original XML, giving Pipes more raw data to manipulate and mash). But even right now in its kludged form, the result is exactly what I set out to do.

In addition to its simplicity, one of the advantages of Yahoo Pipes is the variety of output formats available. For my blog’s sidebar I have Pipes generate an RSS feed, but I could just as easily create an interactive Flash “badge” with it:

I find the possibilities of a portable, embeddable version of my Zotero library extremely evocative. It’s a kind of artifact from the future that our methodological and pedagogical approaches haven’t caught up with yet. Here is where the theory and practice of a collaborative library have yet to meet — and I want to end my manifesto/guide with a simple appeal: let’s begin thinking about the untapped power of this intersection and what we can do with it, for ourselves, our students, and our scholarship.

Facebook versus Twitter

Facebook is the past, Twitter is the future.

Or phrased less starkly, Facebook reconnects while Twitter connects.

All of my friends on Facebook are exactly that: friends from real life, or at the very least, people whom I actually know. Colleagues, students, family members, former classmates, childhood friends. A significant chunk of those Facebook friends are ghosts from the past, people whom I haven’t seen, spoken to, or even thought of in years, maybe decades. Facebook has reconnected me — albeit in a very superficial sense — to these people. I’d even estimate that friends from my past now outnumber current friends and acquaintances on Facebook. Given the exponential dynamic driving the growth of social networks (like the old Fabergé shampoo commercial: you tell two friends, and they’ll tell two friends, and so on), it’s not surprising that these reconnections to the past began with a single high school friend, last seen at our graduation in 1989. The ripple effect from this single act of “friending” led to dozens of acquaintances from my hometown.

The reciprocal nature of “friendship” on Facebook reinforces the site’s re-networking aspect. You can only befriend people who have befriended you. Facebook’s insistence upon reciprocity appealed to me at first, ensuring that nobody could lurk on my profile without likewise surrendering their own profile to me. Yet this feature, which I found so comforting when I first dipped into social networking, now strikes me as confining, perhaps even the greatest limitation of Facebook. Reciprocity guarantees a closed platform, a fixed loop that cannot expand beyond itself.

This stands in contrast to Twitter, where reciprocity is not required. You can follow someone without them following you. The effect of this asymmetric system is that many of the people I follow I have never met. And I may never. Likewise many of my followers are absolute strangers. Yet many of them share interests with me: pedagogy, literature, digital humanities, even music (at least two of my followers added me after I wrote about the band Shearwater). So this is what I mean when I say Twitter connects.

There is another crucial difference between Facebook and Twitter that associates the former with the past and the latter with the future. Even with its new layout and feed, Facebook does not truly operate in real time. Facebook is still something like a bulletin board. My status updates, therefore, tend to be sly comments or key links that I’ve pondered and that I want to remain “active” for a day or two. Constant status updates would quickly get lost in the clutter of irritating quiz results, meaningless gift hugs, and holiday Peeps that populate the Facebook news feed.

Twitter conversely offers a more stream-of-consciousness aesthetic. If I immediately follow one tweet with another, I’m not so concerned that the first is going to get lost, as my followers are seeing a feed of my tweets in whatever application they’re using. It is also a matter of one or two clicks (depending on your Twitter client) to see my Twitter posts in aggregate, something much more difficult to achieve in Facebook.

So, to be systematic about the differences between Facebook and Twitter, I present this chart:

Facebook | Twitter
past | future
reconnect | connect
static | dynamic
closed | open
pond | stream

The last distinction — pond versus stream — evokes the dominant ecology of each social network. And I for one would rather be in the flowing stream than the stagnant pond.

Television Emulation for the Atari VCS

This is absolutely stunning: Ian Bogost had his computer science students at Georgia Tech modify Stella, the open-source Atari 2600 emulator, to reproduce the same kind of visual artifacts you would’ve seen when you played the VCS on a CRT television (those big boxy TVs with tubes, for those of you who don’t remember). Their CRT Emulator will soon be a configurable option in Stella.

Now we’ll finally be able to recapture the original experience of playing Yars’ Revenge on your parents’ 19″ Magnavox, minus the wood console.


The crisp image in the bottom half is what we see when we play an Atari 2600 game on Stella now. The top image is what we would have seen playing in the late seventies on a television — and what we’ll soon be able to experience with Stella (Click the image for a larger version).
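To give a rough sense of what “CRT artifacts” can mean in code, here is a toy sketch of my own, emphatically not the Georgia Tech students’ emulator, which models the hardware far more carefully. It assumes the Pillow imaging library and a made-up screenshot filename, and it fakes just two artifacts: composite-signal color bleed and scanlines.

```python
# A toy illustration only (not the actual Stella CRT emulator): fake two
# CRT-ish artifacts -- horizontal color bleed and visible scanlines.
# Assumes the Pillow library; "yars.png" is a hypothetical filename.
from PIL import Image, ImageFilter

def crt_ish(path):
    img = Image.open(path).convert("RGB")
    # A slight blur approximates the color bleed of a composite video signal
    bled = img.filter(ImageFilter.GaussianBlur(radius=1))
    # Darken every other row of pixels to suggest scanlines
    pixels = bled.load()
    width, height = bled.size
    for y in range(0, height, 2):
        for x in range(width):
            r, g, b = pixels[x, y]
            pixels[x, y] = (int(r * 0.7), int(g * 0.7), int(b * 0.7))
    return bled

if __name__ == "__main__":
    crt_ish("yars.png").save("yars_crt.png")
```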

And here’s a question I’ll be asking my videogame students today: why would degrading the graphics on a game actually be a good thing?

Electronic Literature Course Description

A few of my English department colleagues and I are preparing to propose a new Electronic Literature course, to replace a more vaguely named “Textual Media” class in the university course catalog. Here is a very rough first draft of the course description, building in part on language from the Electronic Literature Organization’s own description of electronic literature:

Electronic Literature (3 credits) Electronic literature refers to expressive texts that are born digital and can only be read, interacted with, or otherwise experienced in a digital environment. Contemporary writers, artists, and designers are producing a wide range of electronic literature, including hypertext fiction, kinetic poetry, interactive fiction, computer-generated poetry and stories, digital mapping, and online collaborative writing projects via SMS, emails, and blogs. In all of these cases, electronic literature takes advantage of the capabilities and contexts of stand-alone or networked computers. Such literary texts often demand new reading and interpretative practices, which this class will develop in students.

I’m eager to hear any feedback about this purposefully generic description.

Southern States Web Expo and Exchange

While it seems like Web 2.0 outfits are dying left and right and venture capital for dot coms has all but dried up, I’ve noticed that there is still a market in Web 2.0 events: demos, expos, workshops, summits, conferences and so on, with tickets running $200/head. There may be no money in perpetually beta products, but there’s plenty of money in events about these beta products.

So I think I am going into the event organizing business. And I even see a niche that needs to be filled: the tech industry of the southern states. So many conferences and workshops are either West Coast or East Coast-based, but what about the south? Surely there are entrepreneurs and start-ups in the south, brewing important and innovative Web 2.0 products?

I therefore propose a Southern States Web Expo and Exchange (SSWExEx, pronounced “Swequex”). There’ll be plenty of swag, live blogging, and backchanneling. It will be fun. Techcrunch, Gizmodo, and Xeni Jardin will be there. Your name tags will be wacky colors.

If you can’t tell whether I’m joking or not, that’s okay. I can’t either.

Seriously, is somebody interested in getting this off the ground with me?

What happens on Facebook when we die?

Anybody who follows Facebook has probably heard about the user who found it impossible to delete his account; even after he deactivated his profile, it showed up in searches and various Facebook news feeds.

If you can’t get out of Facebook when you’re alive, what happens when you die?

What happens to your Facebook profile when you die?

Sadly, this is not a rhetorical question. Two of my Facebook friends have passed away: first a colleague (and true friend), and then, just last month, a former student. Yet their Facebook pages persist, digital ghosts with mini-feeds still growing, updated with the usual nonsense and noise (“Mike joined the group Free David Hasselhoff” and “Barrald Terrence and Will Navidson are now friends”) that fill anybody’s Facebook feeds.

In fact, in the second case, I only found out about my student’s death from a terse, surreal update to her profile by a family member, which then showed up in my Facebook news feed. Her Facebook profile has since become a kind of memorial, with dozens of friends writing their goodbyes on her “wall.”

In the first case, nobody has written on my friend’s wall in the six months since he died, though he was loved and respected by hundreds of people across the country. I suspect the difference between my student’s and my friend’s post-mortem Facebook activity is generational; digital mourning, at least in a consumer-oriented space like Facebook, is considered insensitive or insincere by anyone over the age of 30. And so my friend’s profile is eerily silent, his feed simply stating with no irony that he “has no recent activity.”

I imagine that eventually Web 2.0 will catch up with real life and incorporate grieving into its ecological landscape. Maybe this will be the beta version of Web 3.0.

I don’t know which is creepier: a Facebook engine that doesn’t know when we die and carries on as if we hadn’t; or a Facebook engine that somehow taps into public records and newspaper obituaries, detecting when we die, and initiates a sort of prescribed last will and testament profile update, a more tactful 404 error message.

What I hate about books about videogames

There’s been a burst of scholarly books about videogames in the past two years, and I’ve been going through as many as I can get my hands on. While individual books have astonishingly bright spots, overall they have been repeatedly disappointing. I’ve begun noticing recurring patterns in what I hate about academic books about videogames. Here are just a few of the problems I see:

  1. The books adopt an overly defensive stance, spending far too much time justifying their object of study, instead of, well, studying it. Countless books about videogames begin by quoting industry-wide sales figures. The books invariably draw some comparison to the film industry (as in, videogames will soon overtake, or have already overtaken, the film industry in revenue). My problem with this defensive posture is, who cares? Would any self-respecting Joyce scholar begin an academic study by citing sales figures for Finnegans Wake? It doesn’t matter how big or small a part of our culture videogames are; the mere fact that they exist justifies their study.
  2. Once the books convince themselves that they’re worth taking seriously, they all begin the same way, by talking about games and play. I have read countless rehashes of Huizinga and Caillois, and not once has a book said something new or added something original to the discussion about play. And very few books seem aware of the latest anthropological models of play.
  3. The books strive to do too much, theoretically speaking, and they miss their mark. There seems to be a deep urge to force literary and philosophical theoretical models upon videogames. This is not entirely bad, and I agree that critical theory has much to teach us about gaming. But many books are relentless in their pursuit of theory: Aristotle, Plato, Socrates, Spinoza, Hegel, Marx, Heidegger, Deleuze, Foucault, Derrida, Baudrillard, Butler, Zizek, Badiou–and these might all be in the same book! The more ambitious books don’t just name-drop, they also attempt to formulate an all-encompassing, master theoretical model (often composed à la carte from bits and pieces of different–and sometimes opposing–theoretical traditions).
  4. The cost of all this theory is that the books don’t do what we arguably need most: deep, close readings of individual games. And I don’t just mean “reading” in a literary studies sense, analyzing plot, themes, subtext, etc.; I also mean in a “ludic” sense, that is, attentiveness to the game-like elements of the work (structure, rules, interface, etc.). Many of the books are so hung up on proposing theoretical models that they don’t end up saying anything about videogames. If they do finally get around to examining games themselves, they do it in a breezy manner, saying a few words about GTA III and then moving immediately on to a few sentences about another game. Sustained, coherent, and innovative close readings are hard to come by. (To be fair, this criticism applies to other fields in the humanities, like literary and film studies.)

Of course, I admit that I’m generalizing here. And also (upon rereading what I’ve written) I believe I might sound a bit cranky.

As I’ve said, I have encountered a few eureka moments in these books. But my overall impression leaves me despairing. The field as a whole is spinning its wheels. Maybe I’m expecting too much too quickly? The field is young, after all. Or maybe I just haven’t read the right book yet?

The Amazing Amazon Mechanical Turk

Clive over at Collision Detection reports on the new Amazon service called Amazon Mechanical Turk, which allows companies to hire (via Amazon) “Turks” who, in their spare time, do seemingly mindless tasks online, for example, tagging photographs of shoes according to color. The tasks are mindless–but only for humans, who have minds. For computers, the tasks are monumental. AI and visual pattern recognition just haven’t reached this stage yet. Anyone can sign up as a “Turk” and, whenever they have a spare moment at their cubicle, click away and earn as much as $30 a day.

What intrigues me most about this service is the name: Amazon Mechanical Turk. This is a nod to a famous 18th-century hoax. As Amazon explains:

In 1769, Hungarian nobleman Wolfgang von Kempelen astonished Europe by building a mechanical chess-playing automaton that defeated nearly every opponent it faced. A life-sized wooden mannequin, adorned with a fur-trimmed robe and a turban, Kempelen’s “Turk” was seated behind a cabinet and toured Europe confounding such brilliant challengers as Benjamin Franklin and Napoleon Bonaparte. To persuade skeptical audiences, Kempelen would slide open the cabinet’s doors to reveal the intricate set of gears, cogs and springs that powered his invention. He convinced them that he had built a machine that made decisions using artificial intelligence. What they did not know was the secret behind the Mechanical Turk: a human chess master cleverly concealed inside.

I had heard of this story before…from the German critic Walter Benjamin. In his “Theses on the Philosophy of History,” Benjamin writes:

The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove. A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created the illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet’s hand by means of strings.

Benjamin goes on to compare this “automaton” to a certain view of history, which fails to see through the illusions that veil the real mechanisms of power.

I’m no conspiracy theorist and I see no conspiracy here. But I can’t help but gleefully wonder if some coder at Amazon was familiar with this Benjamin passage, and that the name of Amazon’s version of artificial artificial intelligence was inspired by a vision of a Turkish puppet smoking a hookah.

D.C. Area Humanities Forum on Video Games

Taking Games Seriously: The Impact of Gaming Technology in the Humanities
Monday, May 15th from 4-6pm
Location: Car Barn 316, 3520 Prospect St. NW, near Georgetown University

Overview and Participants:
Please join Michelle Lucey-Roper (Federation of American Scientists) and Jason Rhody (National Endowment for the Humanities) for a discussion moderated by Mark Sample (George Mason University) on gaming and the humanities. Discussion will center on gaming and its implications for education; thinking about ways to exploit aspects of video game technology to create innovative learning spaces; and games as a possible conduit to online archives or museum collections.

Panelist: Michelle Lucey-Roper is the Learning Technologies Project Manager for the Discover Babylon Project and the Digital Promise Project at the Federation of American Scientists (FAS) in Washington, DC. She has created and managed several technology projects and research initiatives that helped to improve public access to primary source materials. While working towards her doctorate on the interaction of word and image, Lucey-Roper researched and designed curricula for a wide range of subject areas and created new information resources. Before joining FAS, she worked as a librarian, teacher and most recently at the Library of Congress as a research associate. She earned her B.A. at Trinity College, Hartford, CT; her M.A at King’s College, London; and received a doctorate from Oxford University.

Panelist: Jason Rhody, a Ph.D. candidate in the Department of English at the University of Maryland, is currently writing his dissertation, entitled Game Fiction. He has taught courses and given conference presentations on new media, electronic literature, and narrative. He currently works on a web-based education initiative, EDSITEment, for the National Endowment for the Humanities. He previously worked for the Maryland Institute for Technology in the Humanities, an institute dedicated to using technology to enable humanities research and teaching. Jason writes about games and literature on his blog, Miscellany is the Largest Category.

Moderator: Mark Sample teaches and researches both contemporary American literature and New Media/Digital Culture, and he is always exploring how literary texts interact with, critique, and rework visual and media texts. His current research projects include a book manuscript on the early fiction of Don DeLillo and Toni Morrison, exploring their engagement with consumer culture, particularly how they use what Walter Benjamin calls “dialectical images” to reveal the latent violence of everyday things. Another project concerns the interplay between video games, the War on Terror, and the production of knowledge. Professor Sample received an M.A. in Communication, Culture, and Technology from Georgetown University (1998) and his Ph.D. from the University of Pennsylvania (2004).

RSVP for dinner:
There will be an informal dinner after the forum, at a cost of $10 per person.
You must RSVP for dinner by May 8th.

Directions:

  1. Directions to campus
  2. Parking options adjacent to the Car Barn: Street parking around campus is severely limited and strictly enforced by the DC police (MPD) and the DC Department of Public Works (DPW). Most streets require a Zone 2 residential permit issued by the District of Columbia for parking for longer than two hours. A limited number of metered spaces are available on Reservoir Road, 37th Street and Prospect Street. For those up for a short walk, the Southwest Garage is accessible from Canal Road or Prospect St.
  3. Map to the Car Barn.
  4. The nearest metro station is Rosslyn, across Key Bridge.

About the Forum:
Co-sponsored by the Center for New Designs in Learning & Scholarship (CNDLS– http://cndls.georgetown.edu) at Georgetown University and George Mason University’s Center for History and New Media, the DC Area Technology & Humanities Forum explores important issues in humanities computing and provides an opportunity for DC area scholars interested in the uses of new technology in the humanities to meet and get acquainted.

For more information, contact Susannah McGowan, CNDLS, sm256@georgetown.edu

The Best Spam. Ever.

Yesterday I received this bizarre spam, from someone “named” Solly Brit. The subject heading was “time card celibacy” and this nonsense phrase only hints at the random strings of English in the message, which reads like some methed-up computer-generated poetry slam:

zest, detect pronoun imperfection and lens radically, in as historian disposable of rest home the
four milk chocolate. with insure to tact gatecrasher rainbow tower that collectible misunderstand hither a recollection, learned nude, to an teem
committed shirt extracurricular,… progress. stale, a smelly, as wounded

jovial minority the an healer,
big deal rebel financing stepbrother as!!! adversary lethally a the and tinderbox bounce supply and demand Jun. the fishing rod, putt
left-wing unzip platter. council

rape welter player a public school esoteric ventriloquist that inadvisable but pant gazelle as chinos, stockholder of capitalization, at undressed and
pacifist as precautionary of baptismal black market as
kiln gallon, nomination, not of an G-string, to as geriatric the
meningitis the O! footpath tawny, of bluegrass, wrestle,. to bash.

Jackson Mac Low, watch out!

Professors, students, and emails

A recent article in the New York Times details some of the changes that email has wrought upon professor-student relationships in higher ed:

At colleges and universities nationwide, e-mail has made professors much more approachable. But many say it has made them too accessible, erasing boundaries that traditionally kept students at a healthy distance.

I agree with the first statement: email can create virtual open office hours, and there is no doubt that I hear from (and respond to) students who would never–often for very practical reasons–be able to make my real world office hours.

But I have problems with the second statement: that students should somehow be kept at a healthy distance, as if they carried a transmittable disease that I, in my pure, uncontaminated Ivory Tower, must be protected from.

Yes, it can be annoying when I receive emails like some of the ones mentioned in the article: naive students asking what kind of binder to buy for class, drunken students offering excuses for absences from class, and angry students writing about a grade. But these kinds of messages are extremely rare. And when I do get one, I don’t feel as if the hallowed walls of academia are under assault by a new generation of disrespectful hooligans.

But perhaps what bothered me most about the Times article is how it ends:

Meg Worley, an assistant professor of English at Pomona College in California, said she told students that they must say thank you after receiving a professor’s response to an e-mail message.

“One of the rules that I teach my students is, the less powerful person always has to write back,” Professor Worley said.

This directive–that “the less powerful person always has to write back”–I find especially troubling. There’s the simple practical matter that the fewer trivial email messages I receive, the better. If I received a “thank you” message every time I emailed a student, I’d be wading in a flood of insignificant, gnat-like emails. But my real concern is that this directive encapsulates a Miss Manners type of social hierarchy full of scraping and bowing. Yes, in a way, I am more powerful than my students, since I have a Ph.D. and I am evaluating them. But, in another way, I couldn’t care less that I’m more powerful than my students, and foregrounding that kind of power relation short-circuits my pedagogical approach to the classroom. Add to this the fact that, truth be told, most students themselves couldn’t care less that I’m more powerful than they are–since it’s only symbolic capital that I wield–and you are left with a directive that seems to be more about stroking professors’ egos than about conveying respect.

I’d rather drop the farce, treat my students as adults, and put up with the occasional annoying (but usually hilarious) email.

Career Killing Blogs

Slate has a new article on academics who blog, “Attack of the Career-Killing Blogs: When academics post online, do they risk their jobs?” by Robert S. Boynton. The article mentions the infamous (among a very small circle of academic bloggers) Chronicle article, “Bloggers Need Not Apply,” which essentially argues that academics who have their own blogs ultimately damage their careers.

Boynton’s take is much more nuanced, recognizing both how the academic publishing industry is rapidly changing (downsizing is more like it) and what underlies the tension between blogging and universities (which academic blogger John Holbo calls in Boynton’s article the last vestige of the “medieval guild system”).

As for me, I don’t expect my blog–this blog–to affect my career one way or another. It’s not like I’m spreading gossip, sharing dark fantasies, or posting my neuroses.

Many of my posts are simply observations–the kind I would talk about with a group of friends, if I still had the time. But I’m too busy teaching and writing to sit around anymore and talk about these kinds of things. So I steal a few random minutes, spit them out on my blog, and then, I forget about them.

The posts that aren’t simply observations are usually ideas in incubation that will eventually surface (peer-reviewed, documented, cited, leeched of personality) in a conference paper, journal article, or someday a book. The posts are placeholders, in a sense, for the real intellectual work that lies ahead.

What my colleagues make of all this, I have no idea. I suppose the real problem with academics who blog is that they leave evidence that they’re not at that precise moment engaged in research or teaching. A blog is an index to one’s daily “unproductive” activity. If all of our other unproductive time (eating, commuting, watching television, basic personal hygiene) were likewise plotted and mapped for the world to see, then everyone would realize that everyone else is also making space for things other than “work.”

WordCount Poetry

My students and I have been playing with WordCount, Jonathan Harris’s slick database of the 86,000 or so most commonly used words in the English language, ranked according to frequency.

As Harris points out (playfully calling it a “conspiracy”), there are many sequences of adjacent words in the ranked list of 86,800 words that are either eerily prescient, beautiful poetry, or both.

For example, sequence 1941-1945 reads “faith establish facts requires membership” — which does in fact seem to say something about the notion of faith in today’s America.

What other found poetry awaits in the list of words?

Here are a few I discovered:

love means upon areas effect likely (words 384-389)
hate ease shadow inevitably loose (3107-3111)
langley channelled haemorrhage (14867-14869)
unfortunately noise revolution index rare (2172-2176)

And actually, come to think of it, this compilation of lines seems to have a dark undercurrent of meaning flowing through it, too.
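If you’d rather mine the list systematically than browse it by hand, a short script can surface candidate runs. Here is a rough sketch; it assumes the ranked words have been saved to a plain text file, one word per line in frequency order (the filename and the seed-word filter are made up for illustration):

```python
# A rough sketch for mining "found poetry" from a ranked word list.
# Assumes a plain text file with one word per line, ordered by frequency;
# "wordcount_ranks.txt" and the seed words are hypothetical.

def windows(words, size=5):
    """Yield every run of `size` adjacent words along with its starting rank."""
    for i in range(len(words) - size + 1):
        yield i + 1, words[i:i + size]  # ranks are 1-based

if __name__ == "__main__":
    with open("wordcount_ranks.txt", encoding="utf-8") as f:
        ranked = [line.strip() for line in f if line.strip()]
    for start, run in windows(ranked):
        # Crude filter: flag runs that contain an evocative seed word
        if any(word in run for word in ("love", "hate", "faith", "shadow")):
            print(f"{start}-{start + len(run) - 1}:", " ".join(run))
```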

Using Understanding Comics to Understand New Media

A few weeks ago I posted some thoughts about the rhetoric of the hyperlink, which I was working on with my Textual Media course. I’ve complicated my students’ thinking (I hope) by suggesting that we can use Understanding Comics, Scott McCloud’s wonderfully insightful dissection of comics (itself in comics form), to understand new media.

Among the many useful keywords and concepts McCloud provides is a rubric of panel-to-panel transitions, in other words, techniques for tying together two distinct panel frames on a page. Inspired, I think, partly by an awareness of how cuts work in film, McCloud gives us these six categories:

  1. moment-to-moment (showing the passing of time)
  2. action-to-action (showing cause and effect)
  3. subject-to-subject (in film, an example would be a cut to a close-up or a wide shot)
  4. scene-to-scene (shifting the action across significant time and/or space)
  5. aspect-to-aspect (what McCloud calls a “wandering eye”; these transitions are rarely used in Western comics, but they appear much more frequently in Japanese comics, usually to evoke a mood or atmosphere)
  6. non-sequitur (with “no logical relationship” between panels)

Now, I wonder — and I’ll be asking my students this soon — what are the new media analogs of these transitions? How, say, can simply using text and hypertext evoke these different transitions? Some are easier to imagine than others. Hypertext on the World Wide Web makes it incredibly easy to create non-sequitur links. But what would an aspect-to-aspect link look like?