Unthinking Television
Four Trends with our Screens

Digging through some old files, I came across notes from a roundtable discussion I contributed to in 2009. The occasion was an “Unthinking Television” symposium held at my then-institution, George Mason University. If I remember correctly, the symposium was organized by Mason’s Cultural Studies and Art and Visual Technology programs. Amazingly, the site for the symposium is still around.

The roundtable was called “Screen Life”—all about the changing nature of screens in our lives. I’m sharing my old notes here, if only for the historical perspective they provide. What was I, as a new media scholar, thinking about screens in 2009, which was like two epochs ago in Internet time? YouTube was less than five years old. The iPhone was two years old. Avatar was the year’s highest-grossing film. Maybe that was even three epochs ago.

Do my “four trends” still hold up? What would you add to this list, or take away? And how embarrassing are my dated references?

Four Trends of Screen Life

Coming from a literary studies perspective, I suppose everyone expects me to talk about the way screens are changing the stories we tell or the way we imagine ourselves. But I’m actually more interested in what we might call the infrastructure of screens. I see four trends with our screens:

(1) A proliferation of screens
I can watch episodes of “The Office” on my PDA, my cell phone, my mp3 player, my laptop, and even on occasion, my television.

(2) Bigger is better and so is smaller
We encounter a much greater range of screen sizes on a daily basis. My new high-definition video camera has a 2” screen, which I can hook up directly via HDMI cable to my 36” flat screen, and there are screen sizes everywhere in between and beyond.

(3) Screens aren’t just to look at
We now touch our screens. Tactile response is just as important as video resolution.

(4) Our screens now look at us
Distribution networks like TiVo and Dish and Comcast have long had unobtrusive ways to track what we’re watching, or at least what our televisions were tuned to. But now screens can actually look at us. I’m referring to screens that are aware of us, of our movements. The most obvious example is the Wii, which uses the IR emitters in its sensor bar to triangulate the position of the Wiimote, and hence, the player. GE’s website has been showcasing an interactive “hologram” that uses a webcam. In both cases, the screen sees us. This is potentially the biggest shift in what it means to have a “screen life.” In both this case and my previous trend concerning the new haptic nature of screens, we are completing a circuit that runs between human and machine, machine and human.

Electronic Literature Think Alouds
2015 ELO Conference, Bergen

ELO 2015 Poster

I’m at the Electronic Literature Organization’s annual conference in Bergen, Norway, where I hope to capture some “think aloud” readings of electronic literature (e-lit) by artists, writers, and scholars. I’ve mentioned this little project elsewhere, but it bears more explanation.

The think aloud protocol is an important pedagogical tool, famously used by Sam Wineburg to uncover the differences in interpretative strategies between novice historians and professional historians reading historical documents (see Historical Thinking and Other Unnatural Acts, Temple University Press, 2001).

The essence of a think aloud is this: the reader articulates (“thinks aloud”) every stray, tangential, and possibly central thought that goes through their head as they encounter a new text for the first time. The idea is to capture the complicated thinking that goes on when we interpret an unfamiliar cultural artifact—to make visible (or audible) the usually invisible processes of interpretation and analysis.

Once the think aloud is recorded, it can itself be analyzed, so that others can see the interpretive moves people make as they negotiate understanding (or misunderstanding). The real pedagogical treasure of the think aloud is not any individual reading of a new text, but rather the recurring meaning-making strategies that become apparent across all of the think alouds.

By capturing these think alouds at the ELO conference, I’m building a set of models for engaging in electronic literature. This will be invaluable to undergraduate students, whose first reaction to experimental literature is most frequently befuddlement.

If you are attending ELO 2015 and wish to participate, please contact me (samplereality at gmail, @samplereality on Twitter, or just grab me at the conference). We’ll duck into a quiet space, and I’ll video you reading an unfamiliar piece of e-lit, maybe from either volume one or volume two of the Electronic Literature Collection, or possibly an iPad work of e-lit. It won’t take long: 5-7 minutes tops. I’ll be around through Saturday, and I hope to capture a half dozen or so of these think alouds. The more, the better.

The Poetics of Non-Consumptive Reading

Ted Underwood's topic model of the PMLA, from the Journal of Digital Humanities, Vol. 2, No. 1 (Winter 2012)

“Non-consumptive research” is the term digital humanities scholars use to describe the large-scale analysis of texts—say, topic modeling millions of books or data-mining tens of thousands of court cases. In non-consumptive research, a text is not read by a scholar so much as it is processed by a machine. The phrase frequently appears in the context of the long-running legal debate between various book digitization efforts (e.g. Google Books and HathiTrust) and publishers and copyright holders (e.g. the Authors Guild). For example, in one of the preliminary Google Books settlements, non-consumptive research is defined as “computational analysis” of one or more books “but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within.” Non-consumptive reading is not reading in any traditional way, and it certainly isn’t close reading. Examples of non-consumptive research that appear in the legal proceedings (the implications of which are explored by John Unsworth) include image analysis, text extraction, concordance development, citation extraction, linguistic analysis, automated translation, and indexing.
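
To make the contrast with reading concrete, here is a minimal sketch of what one of the techniques named above, topic modeling, might look like in code. It is purely illustrative: the toy documents, the two-topic setting, and the scikit-learn calls are my own assumptions, not anything specified in the settlements or drawn from any particular project.

```python
# A toy illustration of non-consumptive analysis: the "reader" here is a model
# that only ever sees word counts, never a narrative. Corpus and settings are
# placeholders chosen for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the court held that the defendant breached the contract",
    "the plaintiff appealed the ruling on copyright and fair use",
    "the novel follows a family across three generations of loss",
    "her poems return again and again to rivers, rain, and grief",
]

# Convert each document into a bag-of-words vector (text extraction/indexing).
vectorizer = CountVectorizer(stop_words="english")
word_counts = vectorizer.fit_transform(documents)

# Fit a two-topic model over those counts.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(word_counts)

# Print the top words per topic, the usual summary of a topic model.
vocabulary = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_words = [vocabulary[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_id}: {', '.join(top_words)}")
```

The model only ever touches word counts; at no point does anyone read or display a substantial portion of any text.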

Electronic Literature after Flash (MLA14 Proposal)

I recently proposed a sequence of lightning talks for the next Modern Language Association convention in Chicago (January 2014). The participants are tackling a literary issue that is not at all theoretical: the future of electronic literature. I’ve also built in a substantial amount of time for an open discussion between the audience and my participants—who are all key figures in the world of new media studies. And I’m thrilled that two of them—Dene Grigar and Stuart Moulthrop—just received an NEH grant dedicated to a related question: documenting the experience of early electronic literature.

Electronic literature can be broadly conceived as literary works created for digital media that in some way take advantage of the unique affordances of those technological forms. Hallmarks of electronic literature (e-lit) include interactivity, immersiveness, fluidly kinetic text and images, and a reliance on the procedural and algorithmic capabilities of computers. Unlike the avant-garde art and experimental poetry that are its direct forebears, e-lit has been dominated for much of its existence by a single, proprietary technology: Adobe’s Flash. For fifteen years, many e-lit authors have relied on Flash—and the related Macromedia technology Shockwave—to develop their multimedia works. And for fifteen years, readers of e-lit have relied on Flash running in their web browsers to engage with these works.

Flash is dying, though. Apple does not allow Flash on its wildly popular iPhones and iPads. Android no longer supports Flash on its smartphones and tablets. Even Adobe itself has stopped throwing its weight behind Flash. Flash is dying. And with it goes, potentially, an entire generation of e-lit work that cannot be accessed without Flash. The slow death of Flash also leaves a host of authors who can no longer create in their chosen medium. It’s as if a novelist were told that she could no longer use a word processor—indeed, no longer even use words.

From a Murmur to a Drone

Not so long ago a video of a flock of starlings swooping and swirling as one body in the sky went viral. Only two minutes long, the video shows thousands of birds over the River Shannon in Ireland, pouring themselves across the clouds, each bird following the one next to it. The birds flew not so much in formation as in the biological equivalent of a phase transition. This phenomenon of synchronized bird flight is called a murmuration. What makes the murmuration hypnotic is the starlings’ seemingly uncoordinated coordination, a thousand birds in flight, like fluid flowing across the skies. But there’s something else as well, something about the murmuration that appeals to us at this particular moment and helps to explain the video’s virality.

The murmuration defies our modern understanding of crowds. From the crazed seagulls of Hitchcock’s The Birds to the shambling hordes of zombies that seem to have infected every strain of popular culture to the thousands upon thousands of protestors of the Arab Spring, we are used to chaotic, disorganized crowds, what Elias Canetti calls the “open” crowd (Canetti 1984). The open crowd is dense and bound to grow denser, a crowd that itself attracts more crowds. Open crowds cannot be contained. They erupt.

CFP: Electronic Literature after Flash (MLA 2014, Chicago)

Attention artists, creators, theorists, teachers, curators, and archivists of electronic literature!

I’m putting together an e-lit roundtable for the Modern Language Association Convention in Chicago next January. The panel will be “Electronic Literature after Flash” and I’m hoping to have a wide range of voices represented. See the full CFP for more details. Abstracts due March 15, 2013.

Strange Rain and the Poetics of Motion and Touch

Here (finally) is the talk I gave at the 2012 MLA Convention in Seattle. I was on Lori Emerson’s Reading Writing Interfaces: E-Literature’s Past and Present panel, along with Dene Grigar, Stephanie Strickland, and Marjorie Luesebrink. Lori’s talk on e-lit’s stand against the interface-free aesthetic worked particularly well with my own talk, which focused on Erik Loyer’s Strange Rain. I don’t offer a reading of Strange Rain so much as I use the piece as an entry point to think about interfaces—and my larger goal of reframing our concept of interfaces.
Title Slide: Strange Rain and the Poetics of Touch and Motion

Today I want to talk about Strange Rain, an experiment in digital storytelling by the new media artist Erik Loyer.

The Menu to Strange Rain

Strange Rain came out in 2010 and runs on Apple iOS devices—the iPhone, iPod Touch, and iPad. As Loyer describes the work, Strange Rain turns your iPad into a “skylight on a rainy day.” You can play Strange Rain in four different modes. In the wordless mode, dark storm clouds shroud the screen, and the player can touch and tap its surface, causing columns of rain to pitter patter down upon the player’s first-person perspective. The raindrops appear to splatter on the screen, streaking it for a moment, and then slowly fade away. Each tap also plays a note or two of a bell-like celesta.

The Wordless Mode of Strange Rain

The other modes build upon this core mechanic. In the “whispers” mode, each tap causes words as well as raindrops to fall from the sky.

The Whisper Mode of Strange Rain

The “story” mode is the heart of Strange Rain. Here the player triggers the thoughts of Alphonse, a man standing in the rain, pondering a family tragedy.

The Story Mode of Strange Rain

And finally, with the most recent update of the app, there’s a fourth mode, the “feeds” mode. This allows players to replace the text of the story with tweets from a Twitter search, say the #MLA12 hashtag.

The Feeds Mode of Strange Rain

Note that any authorial information—Twitter user name, time or date—is stripped from the tweet when it appears, as if the tweet were the player’s own thoughts, making the feeds mode more intimate than you might expect.

Another View of the Feeds Mode of Strange Rain

As with many of the best works of electronic literature, there are a number of ways to talk about Strange Rain, a number of ways to frame it. Especially in the wordless mode, Strange Rain fits alongside the growing genre of meditation apps for mobile devices, apps meant to calm the mind and soothe the spirit—like Pocket Pond:

The Meditation App Pocket Pond

In Pocket Pond, every touch of the screen creates a rippling effect.

A Miniature Zen Garden

The digital equivalent of a miniature zen garden, these apps allow us to contemplate minimalistic nature scenes on devices built by women workers in a Foxconn factory in Chengdu, China.

Foxconn Factory Explosion

It’s appropriate that it’s the “wordless mode” that provides the seemingly most unmediated or direct experience of Strange Rain, when those workers who built the device upon which it runs are all but silent or silenced.

The “whispers” mode, meanwhile, with its words falling from the sky, recalls the new media trope of falling letters—words that descend down the screen, even in large-scale multimedia installation pieces such as Camille Utterback and Romy Achituv’s Text Rain (1999).

Text Rain

Alison Clifford's The Sweet Old Etcetera

And of course, the story mode even more directly situates Strange Rain as a work of electronic literature, allowing the reader to tap through “Convertible,” a short story by Loyer, which, not coincidentally I think, involves a car crash, another long-standing trope of electronic literature.

Michael Joyce's Afternoon

As early as 1994, in fact, Stuart Moulthrop asked the question, “Why are there so many car wrecks in hypertext fiction?” (Moulthrop, “Crash” 5). Moulthrop speculated that it’s because hypertext and car crashes share the same kind of “hyperkinetic hurtle” and “disintegrating sensory whirl” (8). Perhaps Moulthrop’s characterization of hypertext held up in 1994…

Injured driver & badly damaged vehicle from Kraftwagen Depot München, June 1915

…(though I’m not sure it did), but certainly today there are many more metaphors one can use to describe electronic literature than a car crash. And in fact I’d suggest that Strange Rain is intentionally playing with the car crash metaphor and even overturning it with its slow, meditative pace.

Alongside this reflective component of Strange Rain, there are elements that make the work very much a game, featuring what any player of modern console or PC games would find familiar: achievements, unlocked by triggering particular moments in the story. Strange Rain even shows up in iOS’s “Game Center.”

iOS Game Center

The way users can tap through Alphonse’s thoughts in Strange Rain recalls one of Moulthrop’s own works, the post-9/11 Pax, which Moulthrop calls, using a term from John Cayley, a “textual instrument”—as if the piece were a musical instrument that produces text rather than music.

Pax

We could think of Strange Rain as a textual instrument, then, or to use Noah Wardrip-Fruin’s reformulation of Cayley’s idea, as “playable media.” Wardrip-Fruin suggests that thinking of electronic literature in terms of playable media replaces a rather uninteresting question—“Is this a game?”—with a more productive inquiry, “How is this played?”

There’s plenty to say about all of these framing elements of Strange Rain—as an artwork, a story, a game, an instrument—but I want to follow Wardrip-Fruin’s advice and think about the question, how is Strange Rain played? More specifically, what is its interface? What happens when we think about Strange Rain in terms of the poetics of motion and touch?

Let me show you a quick video of Erik Loyer demonstrating the interface of Strange Rain, because there are a few characteristics of the piece that are lost in my description of it.

A key element that I hope you can see from this video is that the dominant visual element of Strange Rain—the background photograph—is never entirely visible on the screen. The photograph was taken during a tornado watch in Paulding County in northwest Ohio in 2007 and posted as a Creative Commons image on the photo-sharing site Flickr. But we never see the entire image at once on the iPad or iPhone screen. The boundaries of the photograph exceed the dimensions of the screen, and Strange Rain uses the hardware accelerometer to detect your motion, your movements, so that when you tilt the iPad even slightly, the image tilts slightly in the opposite direction. It’s as if there’s a larger world inside the screen, or rather, behind the screen. And this world is broader and deeper than what’s seen on the surface. Loyer described it to me this way: it’s “like augmented reality, but without the annoying distraction of trying to actually see the real world through the display” (Loyer 1).
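
As a rough sketch of that opposite-direction mapping (written in plain Python for illustration, not the iOS code Strange Rain actually runs on), imagine the visible window sliding across an oversized photograph in proportion to a normalized tilt reading. The dimensions, the clamping, and the function name below are all hypothetical.

```python
# Illustrative sketch (not Loyer's code): map a tilt reading to a view offset
# over an image that is larger than the screen.

IMAGE_WIDTH, SCREEN_WIDTH = 1400, 1024         # hypothetical pixel sizes
MAX_OFFSET = (IMAGE_WIDTH - SCREEN_WIDTH) / 2  # how far the view can slide

def view_offset(tilt: float) -> float:
    """tilt is a normalized accelerometer reading in [-1.0, 1.0].

    The visible window moves against the tilt, so the photograph appears to be
    a stable world behind the screen rather than a picture stuck to its surface.
    """
    tilt = max(-1.0, min(1.0, tilt))  # clamp noisy sensor values
    return -tilt * MAX_OFFSET

# Tilting one way (positive reading) slides the view the other way, and vice versa.
for reading in (-1.0, -0.25, 0.0, 0.25, 1.0):
    print(f"tilt {reading:+.2f} -> offset {view_offset(reading):+.1f}px")
```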

This kinetic screen is one of the most compelling features of Strange Rain. As soon as you pick up the iPad or iPhone with Strange Rain running, it reacts to you. The work interacts with you before you even realize you’re interacting with it. Strange Rain taps into a kind of “camcorder subjectivity”—the entirely naturalized practice we now have of viewing the world through devices that have cameras on one end and screens on the other. Think about older video cameras, which you held up to your eye so that you saw the world straight through the camera. Then think of Flip cams or smartphone cameras, which we hold out in front of us. We looked through older video cameras as we filmed. We look at smartphone cameras as we film.

Flip cam face

So when we pick up Strange Rain we have already been trained to accept this camcorder model, but we’re momentarily taken aback, I think, to discover that it doesn’t work quite the way we think it should. That is, it’s as if we are pointing a handheld camcorder at a scene we cannot really control.

This aspect of the interface plays out in interesting ways. Loyer has an illustrative story about the first public test of Strange Rain. As people began to play the piece, many of them held it up over their heads so that “it looked like the rain was falling on them from above—many people thought that was the intended way to play the piece” (Loyer 1).

That is, people wanted it to work like a camcorder, and when it didn’t, they themselves tried to match their exterior actions to the interior environment of the piece.

There’s more to say about the poetics of motion in Strange Rain, but I want to move on to the idea of touch. We’ve seen how touch propels the narrative of Strange Rain. Originally Loyer had planned on having each tap generate a single word, though he found that to be too tedious, requiring too many taps to telegraph a single thought (Loyer 1). It was, oddly enough in a work of playable media that was meant to be intimate and contemplative, too slow. Or rather, it required too much action—too much tapping—on the part of the reader. So much tapping destroyed the slow, recursive feeling of the piece. It became frantic instead of serene.

Loyer tweaked the mechanic then, making each tap produce a distinct thought. Nonetheless, from my own experience and from watching other readers, I know that there’s an urge to tap quickly. In the story mode of Strange Rain you sometimes get caught in narrative loops—which again is Loyer playing with the idea of recursivity found in early hypertext fiction rather than merely reproducing it. Given the repetitive nature of Strange Rain, I’ve seen people want to fight against the system and tap fast. You see the same thought five times in a row, and you start tapping faster, even drumming with multiple fingers. And the piece paradoxically encourages this, as the only way to bring about a conclusion is to provoke an intense moment of anxiety for Alphonse, which you do by tapping more frantically.

I’m fascinated by this tension between slow tapping and fast tapping—what I call haptic density—because it reveals the outer edges of the interface of the system. Quite literally.

Move from three fingers to four—easy to do when you want to bring Alphonse to a crisis moment—and the iPad interprets your gestures differently. Four fingers tells the iPad you want to swipe to another application, the iPad’s equivalent of ALT-TAB on Windows. The multi-touch interface of the iPad trumps the touch interface of Strange Rain. There’s a slipperiness to the screen. The text is precipitously and perilously fragile, inadvertently escapable. The immersiveness that Janet Murray years ago highlighted as an essential element of new media turns out to be entirely an illusion.

I want to conclude, then, by asking a question: what happens when we begin to think differently about interfaces? We usually think of an interface as a shared contact point between two distinct objects. The focus is on what is common. But what if we begin thinking—and I think Strange Rain encourages this—about interfaces in terms of difference? Instead of interfaces, what about thresholds, liminal spaces between two distinct elements? How does Strange Rain, or any piece of digital expressive culture, have both an interface and a threshold, or thresholds? What are the edges of the work? And what do we discover when we transgress them?

Works Cited

Dramatic Clouds over the Fields