Four Points about the Infrastructures of Professional Development

On Thursday, January 5, I participated in a roundtable at the 2023 MLA convention, organized by the MLA itself. The panel was called “Infrastructures of Professional Development.” Here’s the panel description:

This roundtable includes leaders who have developed technical, pedagogical, administrative, and organizational structures with potential to serve as sites for professional development. Brief comments will be followed by an open forum on how the MLA can learn from and collaborate with these leaders and others to grow and enhance professional development offerings in service to members across the career arc.

I was joined by Kathleen Fitzpatrick and Sonja Rae Fritzsche, both from Michigan State University. We each delivered short remarks and then proceeded to have a wide-ranging discussion. Here are my prepared comments, such as they were.

➰➰➰

I appreciate the work that Jason Rhody and Janine Utell at the MLA have done to bring this group of panelists together. My opening comments are going to be brief, because I know the best part of these roundtables is the discussion that follows. There are four points I want to make today regarding “Infrastructures of Professional Development.” And I want to preface them by saying that I’m zooming out and offering broad generalizations here, rather than nuts-and-bolts details about professional development initiatives that I’ve been a part of or helped to build—though I’m happy to talk about those in the discussion. The reason I’m zooming out is because my own context—I’m at a relatively well-resourced small liberal arts college—may not be your context, and infrastructure, as well as what counts as professional development, is highly contextual.

I.

I want to call your attention to the word infrastructures in our panel title. It’s an odd word in this context. You hear infrastructure and you think roads, bridges, sewer lines, power substations. The underlying structures that make everything else possible. As Susan Leigh Star and Martha Lampland put it, infrastructure is “the thing other things ‘run on’” (Star and Lampland 17). And the funny thing about infrastructures—and I’m far from the first person to point this out—is that when they are working, you don’t think about them. They’re all but invisible. It’s when infrastructure breaks down that it becomes visible, or as Heidegger would put it (and forgive me for quoting Heidegger), they become “present-at-hand.” They are no longer transparent. They’re in your face. You don’t notice the road until there’s a pothole. You don’t pay attention to a bridge until it’s closed and you have to detour the long way around. You don’t think about the power until the lights don’t come on.

II.

Let’s think about two phase states of infrastructure when it comes to professional development. The first phase state: infrastructure when it works and is invisible. The second phase state: infrastructure when it’s not working and very much visible. This presents a bit of a dilemma. If infrastructures for professional development are working well, you don’t even see them. You take them for granted. That invisibility makes professional development hard to talk about, hard to share ideas about with others, hard to build on what’s working at other institutions or organizations. That sharing is one of the things I hope we get to do today.

The flip side occurs when the processes for professional development aren’t working—and I’m sure we all have war stories to trade. The infrastructure becomes visible, because it’s broken. But what’s needed to fix or repair or replace that infrastructure—we might call this speculative infrastructures—those possibilities remain out of sight. And in fact, discussion about speculative infrastructures is displaced by something else. I’m thinking of a dynamic that Sara Ahmed describes frequently in her work. When someone points out a problem, they become the problem, not the problem itself. Working in institutions, as many of us do, you’ve seen this. As Ahmed puts it in her “Feminist Killjoys” essay, when you are the one to point out a problem, it means “you have created a problem. You become the problem you create” (“Feminist Killjoys (And Other Willful Subjects)”). It’s almost as if the problem didn’t exist—or at least some people wanted to pretend it didn’t exist—until somebody pointed out the problem. So I think our challenge here, in addition to sharing infrastructures for professional development that are promising, is to diagnose infrastructures for professional development in a way in which our complaints don’t supersede the underlying problem. Ahmed’s latest book, Complaint!, is instructive here, especially since it’s centered on institutions. Ahmed observes that we often think about complaints as formal allegations—I lodged a complaint—but she shows how complaints are “an expression of grief, pain, or dissatisfaction, something that is a cause of a protest or outcry, a bodily ailment” (Complaint! 4). There is an affective and embodied dimension to complaints. So as we talk this afternoon, and if some complaints about institutions and organizations come up, let’s hear the complaints for what they really are, testimonies about our lived experiences.

III.

As a consequence of infrastructure often being invisible, the people who design, implement, and maintain those infrastructures remain invisible as well. This is true whether the infrastructure is a bridge or an online collective. If we think about, say, the work the MLA does to support professional development, most members of the MLA do not know who is actually doing that work. Who are the faces? What are their names? We have Jason and Janine here, but most MLA members would be hard-pressed to name the people, beyond Paula, who work to make the convention happen. And to be clear, the convention, whatever else it is, is an infrastructure for professional development. If this were any other infrastructure, that invisibility would be something you’d want. If you don’t know the people making something work, if they can fade into the background while the thing functions seamlessly, that usually means things are working. But in my own subfields of digital humanities, media studies, and science and technology studies, there’s been growing attention paid to the labor of the people who make things, who make things run, and who fix things when they’re broken. And this is something I think we in our respective institutions and organizations should consider when it comes to infrastructures for professional development. Not just, as I’m doing here, recognizing the work that everyone is putting in to provide opportunities for professional development, but actually putting forward the stories and aspirations of those of you, of us, who work on infrastructures that support professional development. In other words, step out from behind the curtain and introduce ourselves to our constituents. Tell them our stories—your stories. What are our hopes and dreams? What do we get out of supporting you? Show how supporting professional development isn’t simply a transaction but a relationship. Make it clear that whatever infrastructure you are providing is like Soylent Green: it’s made out of people.

IV.

This brings me to my fourth and final point. People. One of the lodestars for how I think about labor in the academy is Miriam Posner, at UCLA. Years ago Miriam wrote a blog post that I still think about all the time. The post is called “Commit to DH People, Not DH Projects.” Miriam is talking here specifically about the digital humanities, and critiquing the tendency to frame work in DH around projects. What if, she wonders, we put the emphasis on people, not projects? Let me quote her here: “What if,” Miriam writes, “we viewed digital methods as a contribution to the long arc of a scholar’s intellectual development, rather than tools we pick up in the service of an immediately tangible product? Perhaps we’d come up with better ways of investing in people’s long-term potential as scholars” (Posner). If we blur out the particulars of digital humanities scholarship here, and think more broadly about Miriam’s underlying point, it applies in so many ways to supporting professional development across the board, whether that development is focused on scholarly, pedagogical, creative, or even administrative pursuits. The infrastructures for professional development need to support people, not projects, not stages of their careers. People, not one-off workshops, not a conference here or there, not week-long institutes, not webinars. People, and people over a long period of time, people who evolve and grow over time. Professional development, in the end, is about people supporting people, people supporting each other.

Works Cited

Ahmed, Sara. Complaint! Duke University Press, 2021.

—. “Feminist Killjoys (And Other Willful Subjects).” The Scholar and Feminist Online, vol. 8, no. 3, Summer 2010, http://sfonline.barnard.edu/polyphonic/print_ahmed.htm.

Posner, Miriam. “Commit to DH People, Not DH Projects.” Miriam Posner’s Blog: Digital Humanities, Data, Labor, and Information, 18 Mar. 2014, https://miriamposner.com/blog/commit-to-dh-people-not-dh-projects/.

Star, Susan Leigh, and Martha Lampland, editors. Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life. Cornell University Press, 2009.

Babyface Dev Diary #2 – Choices

My last development diary entry looked at the origins of Babyface, my submission to the 2020 Interactive Fiction Competition (IFComp). This dev diary entry looks at one of the first things reviewers say about Babyface: that it’s mostly linear. Usually this comes as a simple description of the game’s format rather than a criticism, though I did expect people to criticize the game for having such a linear narrative. Babyface lacks some of the hallmarks of interactive fiction. There are no puzzles you can solve. And there are no choices that change the outcome of the game. This lack of choices is absolutely deliberate. So now I want to talk about my approach to the question of agency and choice (or lack thereof) in Babyface.

Babyface is categorized as “choice-based” on the IFComp list of entries. But I only selected “choice-based” as the category because Babyface clearly doesn’t fit the other available category, parser-based. Both categories carry with them a set of associations:

  • Parser-based games generally revolve around puzzles and occasionally riddles.
  • Choice-based games generally involve, well, choices, and ideally choices that have some sort of meaningful impact on the game.

If there had been a “hypertext” category, that’s what I would have selected for Babyface. I like the hypertext designation because it retains some of the expectations of the choice-based formats (you’re clicking on links rather than typing commands into a parser) but it shifts the focus away from the actual choices. “Hypertext” opens the door for thinking about links as something other than choices. In Babyface you can probably see this most clearly in the late Thursday night / early Friday morning sequence, when you find yourself back at the Babyface House. You find the same link multiple times in a row, and each time you select it the link remains the same, while the text slowly expands:

An animated GIF showing repeated clicks on the phrase "I find myself" in the game Babyface.

You could think of this as a kind of stretchtext. But rather than expanding the narrative by filling in gaps (say, like the stretchtext in Pry), it conveys a paradoxical sense of standing still while nonetheless moving forward.

If I were to drag in narrative theory (yes, I’m going to drag in narrative theory), I think about how Marie-Laure Ryan divides interactivity in digital games along two distinct dimensions. There’s interactivity that corresponds to the player’s perspective: are they embedded in the game as a participant in the story (internal interactivity), or are they looking down from an omniscient, godlike perch (external interactivity)? And there’s interactivity that corresponds to the kind of actions available to the player, what Ryan calls exploratory versus ontological interactivity. Does the player probe an existing world or set of choices (exploratory interactivity), or does the player have the power to change the game world itself, as in Minecraft (ontological interactivity)? I picture these kinds of interactivity like this, along two axes:


            Exploratory
                 |
                 |
                 |
                 |
Internal --------+-------- External
                 |
                 |
                 |
                 |
            Ontological

A game like The Sims or Civ fits in the external-ontological quadrant. You might consider a first-person shooter to be internal-ontological, depending on how much you think killing NPCs changes the game world. Babyface clearly fits within the internal-exploratory quadrant. You play as a character, about whom you can glean some details and personal history as the game progresses. That’s internal interactivity. And you can only move about in a world I have strictly delineated. That’s exploratory interactivity. There’s nothing you can do in the game world to change it.

The exploratory nature of the game is heightened by the occasional loop with in-game documents, like the old Polaroid photographs. When the narrator’s father hands her a set of four photos, you can click each multiple times, and each time reveals a different description. In this way I’m trying to convey a sense of discovery about the history of Babyface, one piece of evidence at a time. I programmed that sequence so that it always moves on to the next narrative beat before the narrator’s had a chance to see every description of the photographs. No matter what order you click the photos, there will always be one description you don’t get a chance to see. My hope is that that last piece of evidence becomes an Easter Egg that draws players with a completionist mindset to go through the game again.
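To make that logic concrete, here’s a minimal sketch of the mechanic, written in Python rather than in the Twine macros the game actually uses, with placeholder descriptions standing in for the real text. The point is simply that the scene advances after one click fewer than the total number of descriptions, so exactly one description always goes unseen.

import random

# A toy model of the Polaroid sequence: four photos, each with placeholder
# descriptions. The scene advances after (total descriptions - 1) clicks,
# so exactly one description always goes unseen.
photos = {
    "photo_1": ["placeholder description A", "placeholder description B"],
    "photo_2": ["placeholder description C", "placeholder description D"],
    "photo_3": ["placeholder description E", "placeholder description F"],
    "photo_4": ["placeholder description G", "placeholder description H"],
}

seen = {name: 0 for name in photos}  # how many descriptions of each photo have been revealed
max_clicks = sum(len(d) for d in photos.values()) - 1

def click(name):
    """Reveal the next unseen description of a photo."""
    description = photos[name][seen[name]]
    seen[name] += 1
    return description

# Simulate a player clicking the photos in a random order.
for n in range(max_clicks):
    remaining = [p for p in photos if seen[p] < len(photos[p])]
    name = random.choice(remaining)
    print(f"click {n + 1}: {name} -> {click(name)}")

unseen = [photos[p][seen[p]:] for p in photos if seen[p] < len(photos[p])]
print("never revealed:", unseen)  # always exactly one description left over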

So internal-exploratory interactivity. But there’s another kind of interaction with Babyface (ideally, though this won’t be true for all players) that the four quadrants of interactivity don’t capture. I call this epistemological interactivity. Epistemological, that’s a mouthful. What I mean by epistemological is the nature of knowledge and knowing in the game. I picture epistemology as a Z axis that juts forward and backward from my graph above, intersecting with the other two axes.

So there’s internal-ontological-epistemological interactivity: solving puzzles in the game that would have an impact on the game world. Babyface has none of those. Then there are internal-exploratory-epistemological (IEE) interactions, which are mysteries in the game that you can try to figure out, but which won’t impact the game world itself. You could argue that learning how to work the interface is an internal-exploratory-epistemological puzzle. For example, figuring out that at certain points the narrative will only proceed after you click on the photographs several times each. Piecing together details of the narrator’s life is another IEE mystery, as is the story of her mother and Babyface.

But, while there are plenty of clues about the nature of the relationship between the narrator’s mother and Babyface, there’s nothing in the game to definitively explain it. At the heart of Babyface are several kernels of sheer irrationality. To reach any kind of narrative closure about the game, you have to reach beyond the game. That’s external-exploratory-epistemological interactivity. A little bit of research, a little bit of Googling, and some of the odd pieces of Babyface hopefully begin to—well, if not make sense, then at least cohere. One of my early taglines for the game hinted at this nondiegetic epistemological interactivity: “A Southern Gothic horror story, where the only puzzles are metaphysical.” What I meant was, the puzzles the game poses spill beyond the borders of the game.

Another way I conceived of Babyface early on was as a Southern Gothic creepypasta story. I stopped referring to the game as creepypasta, though, because that word (like parser-based or choice-based) carries associations I didn’t necessarily want attached to Babyface. Nevertheless, the game does share a family resemblance to creepypasta. As I see it, two key features of creepypasta (as a genre of fiction, not as a website or Internet phenomenon) are (1) irrationality disrupts the everyday world; and (2) there’s a blurring between the inside and outside of the story, raising the specter that the story really happened. Both features operate within the realm of epistemology: What really happened? How do we know? Could it have really happened? Could it happen again?

Those epistemological questions are what I hope the close reader of Babyface walks away with, rattling in their heads. It’s an engagement with the story—interactivity—that happens outside the story itself. For the story I wanted to tell—which is ultimately a story about the year 2020 and the decades leading up to it—this kind of epistemological agency mattered more to me than player agency.

Babyface Dev Diary – Origins

Cover art of the game Babyface, featuring the title and a large gasmask
Babyface coverart

So, Babyface is a thing I made. It’s a creepypasta-style Southern Gothic horror story. I’ve entered the game into the 26th annual Interactive Fiction Competition (IFComp for short). You can play Babyface right now! I’ve followed IFComp for years—since at least 2007—but this is the first time I’ve made anything for the competition. Not that I haven’t wanted to, but this year everything finally lined up: an idea, the time to write it, the skills to do it, and a deliberate shift in my professional life from conventional scholarship to creative coding.

IFComp authors often share a “developer’s diary” that details their creative and coding process. I don’t really consider myself a developer. I’m more of an “I make weird things for the internet” person. But still, I thought I’d give this dev diary thing a try, if nothing else to debrief myself about the design process. I’ve blurred the text of any spoilers—just hover over or tap on the blurred text to read it.

Babyface wasn’t supposed to be my game for IFComp. I was working on another game, a much larger game, a counterfactual history of eugenics in America. The game basically asks: what if CRISPR-like gene-editing technology had been invented in the 1920s, at the height of the eugenics movement? The game is heavily researched and includes meaningful choices (unlike Babyface, which is more or less on rails). But! But! But—I ended up talking about the game in conference talks and symposiums and showing it to enough people that it felt like it would be disqualified from IFComp, which has a strict rule that the competition must be the public debut of the game. So I released that game (or rather, the first “chapter” of it) back in May as You Gen #9. Play it, please!

Anyway, I was left without a game, which was fine. But then I had a horrific nightmare in May, and I couldn’t get one image out of my head. It literally haunted me. And then in July Stacey Mason announced a fortnightly interactive fiction game jam on Twitter. So I started playing around with my nightmare, trying to give it context and a narrative frame. Pretty quickly I realized the game was going to be too ambitious (LOL, it’s really quite a modest game, but it felt ambitious to me) to finish in two weeks for a game jam. So I continued working on the game all through August and September. On the one hand, three months to put together a polished game is not a lot of time. On the other hand, I had been working in Twine almost every day for the past year, and the story is modest (my best estimate is around 16,000 words, though it’s tough to measure word counts in a game with dynamic text). Plus there’s not a lot of state logic to keep track of. No complicated inventory systems, no clever NPCs. Just the narrator, a few interactions, and her memories.

I’ll talk more about specific design choices in a future post, but for now I wanted to say a few words about the setting. Like most Gothic fiction, the setting itself is a character in the game.

I was working with a concrete geography in Babyface. The old brick house is based on a real house in my small North Carolina college town. I could walk there right now in about 20 minutes, all on pleasant neighborhood streets. Less than five minutes by car. A recluse lived there, and the house, as in the game, is down the street from the local elementary school. The recluse died a few years ago, and it was some time before anybody even knew. Somebody eventually bought the property, tore down the old brick house, and put up a gaudy McMansion.

One detail I had wanted to include in the game but decided against, because it would have seemed too unbelievable: between the old house and the elementary school there’s a cemetery. I had considered incorporating the cemetery into the story as the narrator runs away from the house, but it just seemed too forced. One of those instances where real life out-narrativizes fiction, and in order to make the fiction more palatable, you have to dial back the realism.

The Southern backdrop is understated, though I dropped in enough clues that some readers might realize the centrality of the South in the game. More about those later…

An End of Tarred Twine, a Monstrous Moby-Dick Hypertext

In my previous post I listed all the digital creative/critical works I’ve released in the past 12 months. (Whew, it was a lot, in part because I had the privilege to be on sabbatical from teaching in the fall, my first sabbatical since 2006. I made the most of it.)

Now, I want to provide a long overdue introduction to each of my newest works, one post at a time. Let’s start with An End of Tarred Twine, a procedurally-generated hypertext version of Moby-Dick. I made An End of Tarred Twine for NaNoGenMo 2019 (National Novel Generation Month), in which the goal is to write code that writes a 50,000 word novel. Conceived by Darius Kazemi in 2013, NaNoGenMo runs every November, parallel to National Novel Writing Month. I’ve always wanted to participate in NaNoGenMo, but the timing was never good. It falls right during the crunch period of the fall semester. But, hey, I wasn’t teaching last fall, so I could hunker down and finally try something.

An End of Tarred Twine is what I came up with. The title is a line from Moby-Dick, where Captain Bildad, one of the Quaker owners of the Pequod, is fastidiously preparing the ship for its departure from Nantucket. As sailmakers mend a top-sail, Bildad gathers up small swatches of sailcloth and bits of tarred twine on the deck that “otherwise might have been wasted.” That Captain Bildad saves even the smallest scrap of waste speaks to his austere—one might say cheap—nature. The line is also one of the few references to twine in the novel. This was important to me because An End of Tarred Twine is made in Twine, an open source platform for writing interactive, nonlinear hypertext narratives.

An End of Tarred Twine is like the white whale itself—at once monstrous and elusive. And that’s because all the links and paths are randomly generated. You start off on the well-known first paragraph of Moby-Dick—Call me Ishmael & etc.—but random links in that passage lead to random passages, which lead to other random passages. Very quickly, you’re lost, reading Moby-Dick one passage at a time, out of order, with no map to guide you. Or as Ishmael says about the birthplace of Queequeg, the location “is not down in any map; true places never are.”

A Monstrous Hypertext

Here, this GIF shows you what I mean. It begins with the opening of Moby-Dick but quickly jumps into uncharted waters.

An End of Tarred Twine
Clicking through the opening sequence of An End of Tarred Twine

This traversal starts off in chapter 1, jumps to chapter 24, then on to chapter 105, and so on. One paragraph at a time, in random order, with no logic behind the links that move from passage to passage. As a reading experience, it’s more conceptual than practical, akin to the Modernist-inflected hypertext novels of the 1980s. As a technical experiment, I personally think there’s some interesting stuff going on.

Look at these stats. An End of Tarred Twine has:

  • 250,051 words (the same as Moby-Dick, minus the Etymology and Extracts that precede the body of the novel)
  • 2,463 passages (or what old school hypertext theorists would call lexias)
  • 6,476 links between the passages
  • an average of 2.63 links per passage

Another visual might help you appreciate the complexity of the work. One of the cool features of the official Twine app (i.e. where you write and code your interactive narrative) is that Twine maps each passage on a blueprint-like grid. For the typical Twine project, this narrative map offers a quick overview of the narrative structure of your story. For example, here’s what Masks, one of my other recent projects, looks like on the backend in Twine:

A map of the game Masks in Twine
A map of Masks in Twine

Each black line and arrow represents a link from one passage to the next. Now look at what An End of Tarred Twine looks like on the backend in Twine:

An End of Tarred Twine in Twine

The first passage (labeled 0) is the title screen, with the word “Loomings” linking to the second passage (1). You can see that passage then has outbound links as well as some inbound links. Here’s another view, deeper into the hypertext:

Lost in the map of An End of Tarred Twine in Twine

There are so many links between passages by this point that the link lines become a dense forest of scribbles. You can almost imagine those lines as a detail taken from Rockwell Kent’s stunning illustration of Moby Dick breaching the ocean in his 1930 edition of Moby-Dick.

A whale breaching the ocean, illustration by Rockwell Kent

Workflow

Now, how did I create this unnavigable monstrosity? The point of NaNoGenMo is that you write the code that writes the novel. That’s really the only criterion. The novel itself doesn’t have to be good (it won’t be) or even readable (it won’t be).

Here’s how I made a several-thousand passage Twine with many more thousands of random connections between those passages:

  1. First, I downloaded a public domain plain text version of Moby-Dick from Project Gutenberg. I chopped off all the boilerplate info and also deleted the Etymology and Extracts at the beginning of the novel, because I wanted readers to dive right in with the famous opening line.
  2. Now, the Twine app itself isn’t built for editing huge texts like Moby-Dick. And it doesn’t allow programmatic intervention—say, selecting random words, turning them into links, and routing them to random passages. But Twine is really just a graphical interface and compiler for a markup language called Twee. The fundamental elements of Twee are simple. Surround a word with double brackets, and the word turns into a link. For example, in Twee, [[this phrase]] would turn into a link, leading to a passage called “this phrase.” Or [[this phrase->new passage]] will have the text “this phrase” link to a new passage, clumsily called “new passage.” There are other compilers for Twee aside from the official Twine application. I use one called Tweego by Thomas Michael Edwards. With Tweego, you can write your Twee code in any text editor, and Tweego will convert it to a playable HTML file. This means that you can take any text file, have a computer program algorithmically alter it with Twee markup, and generate a finished Twine project. So that’s what I did.
  3. I wrote this Python program. It does a number of things, which I describe below; a simplified sketch of the whole pipeline also appears after this list.
  4. First, it breaks Melville’s 1851 masterpiece into 2,463 individual Twine passages—basically every paragraph became its own standalone passage.
  5. The program also gives each passage a title using the simplest method I could think of: the first passage is 0, the next is titled 1, the third is 2, and so on. That’s why there are numbers in each passage block in the screenshots above.
  6. Next, the program uses the spaCy natural language processing library to identify several named entities (i.e. proper nouns) and verbs in each passage.
  7. Finally, the program links those nouns and verbs to other passages by surrounding them with double brackets. This technique makes it a simple matter to direct each link to a random passage: you just have Python pick a random number between 1 and 2,462 and point the link there. Note that I excluded 0 (the title passage) from the random number generation, because linking back to it would have created an endless loop. The title passage only appears once, at the start.
  8. After the Python script has done all the work, I use Tweego on the command line to compile the actual Twine HTML file.
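To give a sense of how those steps fit together, here’s a compressed sketch of the pipeline in Python. It is not my actual program (that one is on GitHub and handles more edge cases): the file names are placeholders, the Tweego command at the end is just illustrative, and it assumes spaCy is installed along with its small English model.

import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes spaCy and its small English model are installed

# 1. Read the cleaned Project Gutenberg text (placeholder file name).
with open("moby-dick.txt", encoding="utf-8") as f:
    text = f.read()

# 2. Split the novel into passages, one per paragraph.
passages = [p.strip() for p in text.split("\n\n") if p.strip()]

def add_links(passage, passage_count):
    """Wrap named entities and verbs in Twee link markup pointing at random passages."""
    doc = nlp(passage)
    candidates = {ent.text for ent in doc.ents}
    candidates |= {token.text for token in doc if token.pos_ == "VERB"}
    for word in candidates:
        target = random.randint(1, passage_count)  # never 0, so the title screen appears only once
        passage = passage.replace(word, f"[[{word}->{target}]]", 1)
    return passage

# 3. Write out Twee source: a ":: N" header followed by the linked-up paragraph.
with open("tarred-twine.twee", "w", encoding="utf-8") as out:
    out.write(":: 0\n[[Loomings->1]]\n\n")  # a bare-bones title screen
    for i, passage in enumerate(passages, start=1):
        out.write(f":: {i}\n{add_links(passage, len(passages))}\n\n")

# 4. Compile the Twee source into a playable HTML file with Tweego, e.g.:
#    tweego -o tarred-twine.html tarred-twine.twee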

Sample Twee

You can check out the Python program that does the heavy lifting on GitHub. But I thought people might also want to see what the Twee code looks like. It’s so simple. Here’s the first main passage. The double colons signify the passage title. So this passage is “1.” Then whenever you see double brackets, that’s a link to a different passage, which is also a number. For example, the name “Ishmael” becomes a link to passage #1626.

:: 1
Call me [[Ishmael->1626]]. [[Some years ago->2297]]--never mind how long precisely--having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen and regulating the circulation. Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly [[November->526]] in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people's hats off--then, I account it high time to get to sea as soon as I can. This is my substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship. There is nothing surprising in this. If they but knew it, almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me.

The links in this sample Twee code are different from the version of An End of Tarred Twine that I posted for NaNoGenMo and published on Itch, because every time I run the Python script it creates an entirely new hypertext, with new links and paths through it. This is what tickles me most about the project: anyone can take the source text and my Python program and generate their own version of An End of Tarred Twine. It reminds me of Aaron Reed’s recent novel Subcutanean, in which every printed version is different, algorithmically altered so that words, phrases, even entire scenes vary from one copy to the next—yet each version tells fundamentally the same story. In her review of Subcutanean, Emily Short suggests that the multitudinous machined variations fit the theme of the novel, of “the unknowable proliferation of motives and outcomes.”

Similarly, with An End of Tarred Twine we could have thousands of versions of the story, none alike. Just fork my code and make your own. A thousand different paths through Moby-Dick, none of them really Moby-Dick, but all of them monstrously “nameless, inscrutable, unearthly”—like the vengeful malice that drives Ahab himself to his ruin, dragging his beleaguered crew down with him.

Play This Stuff I Made

An endless list of dreams crushed by the coronavirus
The Infinite Catalog of Crushed Dreams (April 2020)

When you’re a college professor, you follow a different calendar from the rest of the grown-up world. There’s school and there’s summer, and that’s how you plot your time. Of course, a global pandemic wreaks havoc on this calendar. But usually, somewhere about now I stop thinking about the previous academic year and start looking ahead to the next one. My New Year begins on July 1, not January 1.

Since I’m closing the books on the 2019-2020 school year, I wanted to remind myself of all the projects I put out into the world during this time. Here in one place are all the critical-creative digital works I released in the past 12 months. I’ll write more about many of these projects later, so right now a blurb for each will have to suffice. Hopefully that’s enough to pique your interest…

  • Ring™ Log (October 2019) – imagines what a Ring “smart” doorbell cam might see on a Halloween night
  • An End of Tarred Twine (November 2019) – a randomly generated hypertext version of Moby-Dick in Twine, with 2,463 pages and 6,476 links, and utterly impossible to make sense of
  • Masks (December 2019) – a short hypertext narrative inspired by the Hong Kong protests
  • @BioDiversityPix (February 2020) – A bot that tweets random illustrations from the Biodiversity Heritage Library
  • The Infinite Catalog of Crushed Dreams (April 2020) – An infinite list of hopes, dreams, and aspirations crushed by the coronavirus
  • Ring Pandemic Log (April 2020) – Using the same concept as Ring™ Log, this version imagines what a Ring camera might see during an early day of the coronavirus quarantine
  • You Gen #9 (May 2020) – the first chapter of a longer counterfactual interactive narrative about eugenics and gene-editing technology, set in the 1920s
  • Content Moderator Sim (June 2020) – A workplace horror game that puts you in the role of a subcontractor whose job is to keep your social media platform safe and respectable.

In general I was working in one of two modes for each project: procedural generation or interactive fiction. The former hopes to surprise readers with serendipitous juxtapositions and combinations; the latter hopes to entice readers with narrative impact. Whether I succeed at either is a question I’ll leave to others.

Fuck You, Silicon Valley

It’s not that I want to be angry, or despairing, but when I see this email in my inbox, on top of the daily, hourly walloping I get from the news and from friends and family on the front lines of the coronavirus pandemic, I can’t help but be angry, and despair:

GSV Virtual Summit Email Header

Hey! A virtual summit! You don’t call something a summit unless it’s important! And it’s virtual, so it must be doubly important! And is that lens flare in the logo? And concentric circles? Lens flare and concentric circles? Shit just got real.

But who’s this GSV, I wonder? Quick search!


Global Silicon Valley?

Huh.

I don’t get it. Silicon Valley is a place. A very specific place on the West Coast of the United States. You know, the headquarters of Google, Facebook, Apple, Twitter, etc. So what’s Global Silicon Valley?

Oh, wait, I do get it. Global refers to the ideology of Silicon Valley, not its geography. And Silicon Valley sees itself as exporting that ideology to the rest of the world. Or maybe colonizing is a better word. But do we really want the rest of the world to look like Silicon Valley? Here’s all you need to know about the ideology of Silicon Valley: they’ve got startup guys working on an app that gives you badges for multiple-day meditation streaks while outside nearly 30,000 homeless people scrape by.

Silicon Valley has the third largest homeless population in the United States (behind NYC and LA), so you can see why the tagline for this virtual summit declares “geography no longer matters.” It’s a kind of wishful thinking. You can fucking ignore what’s happening right outside your door. Because geography no longer matters.

“Geography no longer matters.” Isn’t that the most Silicon Valley thing you’ve ever heard? It’s like saying bodies no longer matter. When what you really mean is, only the right bodies matter. The same way some bodies get to shelter in place safely during the coronavirus lockdown, while other bodies risk their lives.

But maybe I’m being too harsh. I shouldn’t judge this summit solely based on its name and tagline, as off-putting as those may be. I should judge it based on its speakers. Who’s at this summit?

(Here, dear reader, I face a quandary. For if I just paste in the list of speakers there’s a good chance your eyes may catch fire and you’ll never be able to read again. Oh well.)

Eric Yuan
Founder & CEO, Zoom

Arne Duncan
Former U.S. Secretary of Education

Sal Khan
Founder & CEO, Khan Academy

Ted Mitchell
Former U.S. Undersecretary of Education

Joy Chen
U.S. Chief Investment Officer, TAL Education Group

Jeff Maggioncalda
CEO, Coursera

Sam Chaudhary
Co-Founder & CEO, ClassDojo

Michael Horn
Co-Founder & Distinguished Fellow, Christensen Institute

Marni Baker Stein
Provost & Chief Academic Officer, Western Governors University

Luis von Ahn
Co-Founder & CEO, Duolingo

Bridget Burns
Executive Director, University Innovation Alliance

Paul LeBlanc
President, Southern New Hampshire University

Josh Scott
President, Guild Education

Michael Moe
Co-Founder, GSV

? Hmmmm.

So none of the speakers for “The Dawn of the Age of Digital Learning” are…experts on digital learning?

I know what you’re saying! Sal Khan, you’re saying, he’s an expert on digital learning.

No, Sal Khan is an expert on content delivery.

But what about Luis von Ahn, the Duolingo guy? The reCAPTCHA guy? No, Luis von Ahn is an expert on turning unpaid human labor into machine learning training sets.

But what about Arne Duncan, you ask? (I joke. Nobody asked that.)

I’ll say this once: you can’t be an expert on “digital learning” if you’re not an expert on learning.

Fuck, I’ll say it again: you can’t be an expert on “digital learning” if you’re not an expert on learning.

The best we can say about these guest speakers is that many of them have sought to optimize the efficiency with which content can be put in front of the eyes of consumers.

You want an expert on digital learning? Get Audrey Watters on board. (LOL, good luck with that, Audrey scares these people shitless.) Get Tressie McMillan Cottom on the panel. Tressie has a thing or two to say about profiteering from learners.

You want an expert on digital learning? Get my student who sat through a 3-hour seminar on Zoom that fried her brain, and then you’ll start to understand why Zoom includes a feature to detect whether participants are in a window other than Zoom, because that’s the only way to survive a 3-hour seminar on Zoom.

You want an expert on digital learning? Tell the CEOs to shut the fuck up and pay attention to every professor who ends their 50-minute Zoom class feeling like it was the worst class in their life, even worse than the previous worst class and can I just crawl in a hole and die now?

And it wasn’t the worst class because the professors don’t know how to teach. Or because students don’t know how to learn. It was the worst class because the technology sucks, the world sucks, we’re all burned out and tired and wondering if we’ll ever be in the same room with each other again. And meanwhile the shitty Global Silicon Valley folks have this to say in their announcement about their summit:

Being Digital has been a Megatrend for 30 years, and online learning has gone from a concept to a $100 billion industry. The fundamentals of the Knowledge Economy and Digital Infrastructure have been in place to see a massive market evolve—with COVID-19 clearly a catalyst for the market exploding right now.

There are people losing their jobs, people dying right now. A million crushed dreams and aspirations, my own seniors devastated that they’ll have no commencement in May. And Silicon Valley leaders want to talk about the massive market opportunities they see? This goes beyond poor taste. It’s predatory.

The email announcement for the summit concludes on a utopian note characteristic of Silicon Valley:

We had the World before Coronavirus. And we will have a New World after this challenge subsides. While we are all going through a turbulent storm right now, over the horizon is the Dawn of a New Age with great promise. The future is here.

New World. Horizon. Dawn. New Age. Are we talking about pedagogy or writing a crappy Ayn Rand ripoff? (Obviously, no, they’re not talking about pedagogy. They know shit about pedagogy.)

The future is here, and Silicon Valley circles overhead.

AI Dungeon and Creativity

AI Dungeon Logo

In early January I joined a group of AI researchers from Microsoft and my fellow humanist Kathleen Fitzpatrick to talk at the Modern Language Association convention about the implications of artificial intelligence. Our panel was called Being Human, Seeming Human. Each participant came to this question of “seeming human” from a different angle. My own focus was on creativity. Here’s the text of my prepared remarks.

Today I want to talk briefly about artificial intelligence and creativity. And not just creativity as it pertains to AI but human creativity as well. So, has anyone heard of or played AI Dungeon yet?

AI Dungeon was released just a few weeks ago and it has gone absolutely viral. It’s an online text adventure you play in your browser or run as an app on your phone. Now, text adventure, that was a popular kind of game in the 1980s. A lot of people know Zork. In these games the player is offered textual descriptions of a house, a cave, a spaceship, a dungeon, whatever, and the player types short sentences like go east, get lamp, or kill troll in order to solve puzzles, collect treasure, and win the game. There’s a parser that understands these simple commands and responds with canned interactions prewritten by the game developers. Text adventures are also known as interactive fiction, and there’s a rabid fan base online that’s part geek nostalgia, part genuine fondness for these text-based games.
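If you’ve never played one of these games, here’s a toy illustration in Python of the kind of two-word parser I’m describing. It’s my own simplified example, not code from Zork or any actual game.

# A toy verb-noun parser in the spirit of 1980s text adventures: the game
# only "understands" commands it can match to canned, prewritten responses.
responses = {
    ("go", "east"): "You walk east into a damp cave.",
    ("get", "lamp"): "You pick up the brass lamp.",
    ("kill", "troll"): "The troll dodges your clumsy swing.",
}

def parse(command):
    words = command.lower().split()
    if len(words) != 2:
        return "I only understand two-word commands."
    return responses.get(tuple(words), "You can't do that here.")

print(parse("get lamp"))    # -> You pick up the brass lamp.
print(parse("eat dragon"))  # -> You can't do that here.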

Interactive fiction often revolves around choice, where players have multiple ways to traverse the world and solve the puzzles. Following this generic convention, AI Dungeon opens with a major choice, literally which genre of text adventure you want: fantasy, mystery, apocalyptic, and so on.

Selecting the genre for AI Dungeon

So here I picked fantasy and immediately I’m thrust into a procedurally generated story: a fantasy world entirely written by a natural language processing program.

Generating a static dungeon on the fly is one thing. But what’s amazing about AI Dungeon is that it’s not a scripted world so much as an improv stage. You can literally type anything, and AI Dungeon will roll with it, generating an on-the-fly response.

Eating a dragon in AI Dungeon

So here, we have a stock feature of fantasy text adventures, a dragon. And I eat it. The game doesn’t bat an eye. It runs with it and lets me eat the dragon, responding with a fairly sophisticated sentence that, aside from its subject matter, sounds like something you’d read in a classic text adventure. “You quickly grab the dragon’s corpse and tear of a piece of its flesh.”

Let me be clear. No human wrote that sentence. No human preconceived a scenario where the player might eat the dragon. The AI generated this. Semantically and grammatically, the AI nails language. It’s not as good at ontology. It lets me fly the dragon corpse to Seattle. The AI is a sponge that accepts all interactions. As you can imagine, people go crazy with this. The amount of AI Dungeon erotica out there is staggering—and disturbing.

Later I run into some people and I ask them about the MLA convention.

Asking about the MLA convention in AI Dungeon

A man responds to my question about the MLA, “It’s a convention where all wizards use the same language. It’ll make things easier.”

Oh, that answer is both so right and so wrong.

So how does this all work? I obviously don’t have time to go into all the details. But it’s roughly this: AI Dungeon relies on GPT-2, an AI-powered natural language generator. The full GPT-2 model has 1.5 billion parameters and was trained on over 40 gigabytes of text scraped from the Internet. The training of GPT-2 took months on super-powered computers. It was developed by OpenAI, a not-for-profit research company funded by a mix of private donors like Elon Musk and Microsoft, which donated $1 billion to OpenAI in July.

One innovation of GPT-2 is that you can take the base language model and fine-tune it on more specific genres or discourse. For a while OpenAI stalled on releasing the full GPT-2 model because of concerns it could be abused, say by extremist groups generating massive quantities of AI-written propaganda. In the more benign case of AI Dungeon, the AI is fine-tuned using text adventures scraped from chooseyourstory.com.

Summoning a Giraffe in AI Dungeon
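For anyone curious what driving GPT-2 looks like in code, here’s a minimal generation sketch using the open source Hugging Face transformers library. To be clear, this is not AI Dungeon’s actual code, and it loads the small public “gpt2” checkpoint rather than a model fine-tuned on text adventures.

# A minimal GPT-2 generation sketch with the Hugging Face transformers library.
# Not AI Dungeon's code: it uses the small public "gpt2" checkpoint rather than
# a model fine-tuned on chooseyourstory.com adventures.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "> eat the dragon\n"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) is what gives the improv-like variety.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))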

There’s much more to be said about AI Dungeon, but I’ll leave you with just a few provocations.

  1. Games are often defined by their rules. So is AI Dungeon a game if you can do anything?
  2. Stories are often defined by their storytellers. Is AI Dungeon a story if no one is telling it?
  3. And finally, a mantra I repeat often to my students when it comes to technology: everything comes from somewhere else. Everything comes from somewhere else. GPT-2 didn’t emerge whole-cloth out of nothing. It’s trained on the Internet, specifically on sources linked to from Reddit. There’s money involved, lots of it. Follow the money. Likewise, AI Dungeon itself comes from somewhere else. On the one hand, its creator is a Brigham Young University undergraduate student, Nick Walton. On the other hand, the vision behind AI Dungeon—computers telling stories—goes back decades, a history Noah Wardrip-Fruin explores in Expressive Processing. The genre fiction invoked by AI Dungeon has an even longer history.

All this adds up to the fact that AI Dungeon turns out to be a perfect object of study for so many disciplines in the humanities. Whether you think it’s a silly gimmick, an abomination of the creative spirit, the precursor to a new age of storytelling, whatever, I think humanists ignore AI storytelling at our own peril.

Speculative Surveillance with Ring™ Log

Over the weekend I launched Ring™ Log, which is simultaneously a critique of surveillance culture and a parody of machine vision in suburbia. In the interactive artist statement I call Ring™ Log an experiment in speculative surveillance.

Animated GIF of Ring Log in Action

“Speculative” in this context means what if?

What if Amazon’s Ring™ doorbell cams began integrating AI-powered object detection in order to identify, catalog, and report what the cameras “see” as they passively wait for friends, neighbors, and strangers alike to visit your home? This is the question Ring™ Log asks. And, given the season (I write this on October 29, 2019), what would the cameras see and report on Halloween, when many of the figures that appear on your front stoop defy categorization?

I dive into the technical details and my inspirations in the artist statement, so no need to repeat myself here. I will add that I was very much inspired by an old Twilight Zone episode, even including several Easter Eggs to that effect. I was also inspired by the ridiculous posts I see on NextDoor, where paranoid neighbors routinely share Ring™ videos of “suspicious” visitors to their houses. Finally, I’m in debt to Everest Pipkin, whose work “What if Jupiter had turned into a Star” provided some of the underlying JavaScript effects for Ring™ Log. Everest’s work, like my own, appears with a permissive copyright license that allows for the reuse and modification of the code. Wouldn’t it be awesome if creative coders borrowed from Jupiter and Ring™ Log and made their own adaptations of these works, similar to what happened with Nick Montfort’s Taroko Gorge?

(Yeah, that’s a hint about what my students will be doing in my Electronic Literature course next semester!)

Things Are Broken More Than Once and Won’t Be Fixed

I don’t want to get into everything that’s broken with Twitter and has been for a long time. I don’t even especially want to get into that small slice of Twitter that was once important to me and is broken, which is its creative bot-making potential. I’ve written about bots already once or twice, back when I was more hopeful than I am these days.

I used to make bots for Twitter. At the peak I had around 50 bots running at once, some poetry, some prose, some political, and all strictly following Twitter’s terms of service. I was one of the bot good guys.

When I say I made bots “for Twitter” I mean that two ways. One, I made bots designed to post to Twitter, the way a tailor cuts a suit for a specific customer. I made bespoke bots. Artisanal bots, if you will.

But two, I made bots for Twitter, as in I provided free content for Twitter, as in I literally worked, for free, for Twitter. You could say it was mutual exploitation, but in the end, Twitter got the better deal. They got more avenues to serve ads and extract data, and I’m left with dozens of silly programs in Python and Node.js that no longer work and are basically worthless. I’m like the nerdy teen in some eighties John Hughes movie who went to the dance with the date of his dreams, and she leaves him listless on the gymnasium wall while she goes off dancing with just about everyone else, including the sadistic P.E. teacher.

But, hey, this isn’t a pity party! I said I wasn’t going to go into the way Twitter made it really difficult to make creative bots! But trust me, they did.

Instead, I thought it’d be fun to talk about all the other things that are broken, besides Twitter! And I’m going to use one of my old Twitter bots as an example. But, this is not about Twitter!

So this is @shark_girls:

Screenshot of the Twitter @shark_girls account, with the tweet reading "Under the light / the tangled thread falls slack, / The mysteries remain"

I’ve written before about how @shark_girls works. There are these great white sharks tagged with tracking devices. A few of these sharks became social media celebrities, though of course not really: it was just some humans tweeting updates about the sharks. I thought, wouldn’t it be cool to give these sharks personalities and generate creative tweets that seemed to come directly from the sharks? So that’s what I did. I narrativized the raw data from these two great white sharks, Mary Lee and Katharine. Mary Lee tweets poetry, and shows where she was in the ocean when she “wrote” it. Katharine tweets prose, as if from a travel journal, and likewise includes a time, date, and location stamp:

Katharine: The bliss of the moment. he shared it, however, in a silence even greater than her (28-Dec-2017)

To be clear: Mary Lee and Katharine are real sharks. They really are tagged with trackers that report their location whenever they surface longer than 90 seconds (the time needed to ping three satellites and triangulate their latitude and longitude). The locations and dates @shark_girls uses are lifted from the sharks’ tracking devices. You can see this data on the OCEARCH tracker, though my bot scrapes an undocumented backend server to get at it.

I’ve posted the code for the Mary Lee version of the bot. A whole lot of magic has to happen for the bot to work (a stripped-down sketch of the whole pipeline appears after this list):

  1. The sharks’ trackers have to be working
  2. The sharks have to surface for longer than 90 seconds
  3. The backdoor to the data has to stay open (OCEARCH could conceivably close it at any time, though they seem to have appreciated my creative use of their data)
  4. The program queries the Google Maps API to get a satellite image of the pinged location
  5. The program generates the poetic or prose passage that accompanies the tweet
  6. The bot has to be properly authorized by Twitter
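Here’s a stripped-down sketch of that pipeline in Python. It is not the bot code I posted: the OCEARCH endpoint and its JSON fields are placeholders (the real endpoint is undocumented), the poem generator is stubbed out, and it assumes the requests and tweepy libraries plus Google Maps and Twitter credentials stored in environment variables.

# A stripped-down sketch of the @shark_girls pipeline, not the posted bot code.
# The OCEARCH endpoint and JSON fields are placeholders; the poem generator is
# stubbed out; credentials are assumed to live in environment variables.
import os
import random
import requests
import tweepy

OCEARCH_URL = "https://example.com/ocearch/pings"  # placeholder for the undocumented endpoint
SHARK_ID = "mary-lee"                              # placeholder identifier

def latest_ping():
    """Fetch the shark's most recent surfacing: latitude, longitude, timestamp."""
    data = requests.get(OCEARCH_URL, params={"shark": SHARK_ID}, timeout=30).json()
    ping = data[0]
    return ping["latitude"], ping["longitude"], ping["datetime"]

def satellite_image(lat, lng, path="ping.png"):
    """Grab a satellite image of the ping from the Google Static Maps API (requires an API key)."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/staticmap",
        params={
            "center": f"{lat},{lng}",
            "zoom": 7,
            "size": "600x400",
            "maptype": "satellite",
            "key": os.environ["GOOGLE_MAPS_KEY"],
        },
        timeout=30,
    )
    with open(path, "wb") as f:
        f.write(resp.content)
    return path

def make_poem():
    """Stand-in for the H.D. mashup; the real bot remixes lines of her poetry."""
    return random.choice([
        "Under the light / the tangled thread falls slack",
        "The mysteries remain",
    ])

def post(status, image_path):
    """Post the poem and satellite image to Twitter (tweepy v3-style authentication)."""
    auth = tweepy.OAuthHandler(os.environ["TW_KEY"], os.environ["TW_SECRET"])
    auth.set_access_token(os.environ["TW_TOKEN"], os.environ["TW_TOKEN_SECRET"])
    tweepy.API(auth).update_with_media(image_path, status=status)

lat, lng, when = latest_ping()
post(f"{make_poem()} ({when})", satellite_image(lat, lng))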

The @shark_girls bot hasn’t posted since August 20, 2018. That’s because it’s broken. To be specific: items 2, 4, 5, and 6 above no longer function. The bot is broken in so many ways that I’ll likely never fix it.

Let’s take it in reverse order.

The bot has to be properly authorized by Twitter

If I had just one or two Twitter bots, I could deal with fixing this. I need to associate a cellphone number with the bot. That’s supposed to ensure that it’s not a malicious bot, because for sure a Russian bot farm would never be able to register burner phone numbers with Twitter, no way, no how. But I’ve only got one phone number, and I already bounce it around the three or so bots that I have continued, in an uphill battle, to keep running. If I keep bouncing the phone number around, there’s a good chance Twitter could ban any bot associated with that number forever. The dynamic reminds me a bit of the days in the early 2000s when the RIAA started suing what should have been its most valuable customers.

The program generates the poetic or prose passage that accompanies the tweet

Yeah, I could fix this easily too. The Mary Lee personality tweets a mashup of H.D.’s poetry. That system still works fine. The Katharine personality tweets from a remixed version of Virginia Woolf’s novel Night and Day. The bot reached the end of my remix. Katharine has no more passages to “write” right now. I could re-remix Night and Day, or select another novel and remix that. But I haven’t, partly because of everything else that’s broken, partly because remixing a novel is a separate generative text problem, a rabbit hole I haven’t had time to go down lately. When I made the bot in 2015, it was Shark Week. Like is that a real holiday? I don’t know, but the air was filled with shark energy. I was also living in a beach town on the southern Atlantic coast of Spain that summer. Spending hours making a bot about sharks just felt right. So I poured a lot of energy into the remix and into making the bot. It was a confluence of circumstances that created a drive that I no longer feel.
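For what it’s worth, if I ever did re-remix Katharine, one quick-and-dirty approach (not how I built the original Night and Day remix) would be a simple Markov chain over the novel’s text, for instance with the markovify library:

# A quick-and-dirty remix sketch using markovify, not the original remix method.
# Assumes a plain text copy of Night and Day saved as night-and-day.txt.
import markovify

with open("night-and-day.txt", encoding="utf-8") as f:
    text = f.read()

model = markovify.Text(text)

# Generate a handful of tweet-sized remixed sentences for Katharine to "write."
for _ in range(5):
    sentence = model.make_short_sentence(240)
    if sentence:
        print(sentence)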

The program queries the Google Maps API to get an image of the pinged location

Nope, that’s not happening anymore. Google changed the terms of its Maps API, so that regular users like me can’t access it without handing over a credit card number. (API! That means Application Programming Interface. It’s essentially a portal that lets one program talk to another program, in this case letting my bot talk to Google Maps and get some data out of it.) Google broke a gazillion creative, educational, and not-for-profit uses of its Maps API when it started charging for access. Of course, what’s really crazy is that Google already charges us to use its services, though the invoice comes in the form of the mountains of data it extracts from us every day. There are open source alternatives to the Google Maps API that I technically could use for @shark_girls. But by this point, momentum is pushing me in the opposite direction. To just do…nothing.

The sharks have to surface for longer than 90 seconds

This is the least technical obstacle and totally out of my control. In a way, it’s a relief not to be able to do anything about this. The real Mary Lee and Katharine sharks have gone radio silent. Mary Lee last surfaced and pinged the satellites over two years ago, though the OCEARCH team seems to believe she’s still out there.

Likely she’s surfacing for less than the 90 seconds required to contact the satellites. Possibly something has gone wrong with the tracker (which would hit item #1 in the above list of what could go wrong). There’s always a chance that Mary Lee could be dead, though I hate to even consider that possibility. But eventually, that will happen.

When to Stop Caring about What’s Broken

Earlier I said this post isn’t about Twitter. It’s not really about Google either, even though the advertising giant deserves to be on my shit list too. This isn’t about any single broken thing made by humans. If anything, it’s about the things the humans didn’t make: two great white sharks, swimming alone in a vast ocean. Humans didn’t make the oceans, but we sure are trying to break them.

When do you stop caring about the things that are broken? I could spend hours trying to fix the bot, and I could pretty much succeed. Even the lack of new data from the sharks isn’t a problem, as I could continue using historical data of their locations, which is still accessible.

I could fix the bot, but what would that accomplish?

Twitter, Google, every other Internet giant will still do their thing, which is to run roughshod over their users. Meanwhile, real sharks are a vulnerable species, thanks to hunting for shark fins, trophy hunting, bycatch from industrial fishing, and of course, climate change and the acidification of the oceans.

Caring for this bot, its continual upkeep and maintenance, accommodating the constantly shifting goal posts of the platforms that powered it, it’s all a distraction. I’ve made a deliberate decision not to care about this broken bot so that I can care about other things.

It’s broken in so many ways. Knowing when to stop caring is itself an act of caring. Because there are things out there you can fix, broken things you can repair. Care for them while you still can.

(Yikes. I think I just set myself up for another post, which is about what I am working on lately. Way to go Mark, creating more work for yourself.)

How a Student Project on Conspiracy Theories Became a Conspiracy Theory

Great Awakening Conspiracy Map courtesy of Champ Pirinya

Maybe this post is only of local interest, but I wanted to share some insight into a disturbing rumor that went viral at Davidson College after credible evidence emerged about neo-Nazi activity among a few Davidson students.

The rumors were scary. The gist was that plans for a school shooting were discovered on a whiteboard in the college library. As Carol Quillen, Davidson’s president, noted in a faculty forum last week, the whiteboard incident was investigated at the time (which was several weeks ago) and thought to be related to a course project. Nevertheless, students and faculty alike have been understandably concerned about campus safety—especially in light of the reports of neo-Nazi students, including one who had apparently attended the white supremacist Charlottesville rallies last year.

It’s difficult to convey to folks not on campus just how frightened students, staff, and faculty have been. Many students, especially Jewish students, students of color, and LGBTQ students, feel entirely unsafe. Even when assured that the whiteboard school shooting rumor was just that, a rumor. (Of course, they aren’t safe. Nobody in the U.S. is safe, thanks to a minority of Americans’ rabid obsession with firearms and rejection of sensible gun regulations.)

Yesterday some of my students connected the dots and realized that it was indeed a group project that caused the rumors. And not just any group project. It was their own group project. It took a while to reach this conclusion, because the rumors had so distorted reality that the students themselves didn’t recognize their own work as the basis for the rumors.

Bear with me as I explain.

The students are in DIG 101: Introduction to Digital Studies. In DIG 101 we spend several weeks learning about the spread and impact of internet conspiracy theories, including how online conspiracy theories can lead to ideological radicalization. As you can imagine, each new day provides fodder for class discussion.

The whiteboard in question contained a flowchart for a group project about conspiracy theories, specifically those surrounding the tragic Parkland school shooting, which some internet conspiracy theorists claim never happened. The flowchart connected a variety of conspiracy elements (biased media, false flags, crisis actors, etc.) that sprang up in the aftermath of the Parkland shooting. The flowchart contained no inflammatory statements or threats. It was diagnosing a problem.

After brainstorming on the whiteboard and doing other work, the group presented their project to DIG 101 in the form of a case study on October 26. In class, students considered school shooting conspiracy theories from various perspectives, including that of a parent who had lost a child in the shooting and those of social media executives whose platforms have helped spread conspiracy theories.

The students in this group designed the case study with incredible empathy toward the victims of school shootings and with enormous skepticism toward adherents of conspiracy theories. They are horrified that their own project about the dangers of internet conspiracies itself became the basis of a disturbing rumor. They never imagined their class project would contribute to a climate of fear on campus.

As I said, this project took place several weeks ago, well before the Tree of Life synagogue shooting in Pittsburgh. It simply was not on the students’ minds last week, which is why they didn’t realize at first that it was their group project at the heart of these rumors. Quite literally, one of the students in the group—in a class discussion about the whiteboard and the possibility that it was trolling or part of a class project—said in all earnestness to the rest of the class, “Who would be stupid enough to draw up plans for a school shooting as part of a class project?” It bears repeating: the rumors had so distorted the contents of the whiteboard that even students in the group did not recognize their work as the basis for the rumors.

It wasn’t until two days ago that one of my students made the connection, purely by coincidence. That student just happened to be in another class that just happened to have a faculty member sitting in for the day who just happened to have an accurate description of the whiteboard from the campus police report. The faculty member shared that description with the class. Once the student heard that the whiteboard contained two diagrams, with the words “a school shooting,” “4Chan,” “reporting it,” etc., and appeared to reference how information about school shootings traveled online, everything clicked into place for the student. The student then contacted the campus chief of police.

As my fellow faculty members and college administrators have readily acknowledged, my students did absolutely nothing wrong (except perhaps forgetting to wipe their whiteboard, a lesson that will forever be burned into their souls). This was a legitimate course project, tackling a real world problem. Their case study and ensuing class discussion were excellent. The way their project about conspiracy theories yielded its own toxic stream of misinformation ironically highlights the need for critical media literacy.

Davidson College still faces many difficulties in the days and weeks to come, but at least we can now consider one terrible revelation from the past week from a more contemplative perspective. My students and I are grateful for this community and its vision for a better world.

Header image: Great Awakening Conspiracy Map courtesy of Champ Pirinya

A Link Blog, Finally

For years—like ever since I started blogging in 2003 or so—I’ve wanted to include a link blog on this site. You know, one of those sidebars that just has cool links. Back in the day, Andy Baio’s link blog was my jam, something I often paid more attention to than his main blog. It looks like Andy shut down his link blog (though you can see what it looked like circa 2006 via the Wayback Machine). As usual though, I’m behind the times by a few years, so I still want a link blog, even if they may be passé.

The main reason I want the link blog, honestly, is not to share the links, but to help me dig up links later on for teaching or research. And, like Andy’s original link blog, I wanted to provide brief annotations of the links—basically to remind myself why I saved the links in the first place. Now, I already save links with Pinboard, and if you look at my Pinboard feed, it is essentially a link blog. You can even use Pinboard’s “Description” field to add annotations to your bookmarks. But there are at least three problems with Pinboard as a link blog:

  1. It’s not very pretty.
  2. It’s not integrated into my existing blog.
  3. It shows everything I save on Pinboard, but not every link I save is worth annotating or sharing.

What finally spurred me to make a true link blog was a recent post by Tim Owens, who describes how he annotates articles in his RSS reader (Tiny Tiny RSS) and posts them on a separate blog. Tim’s method got me thinking. It’s a great setup, but one drawback is that the annotations happen in Tiny Tiny RSS, while I want the ability to annotate links from multiple places, not just what happens to show up in my RSS reader. For example, I’m just as likely to want to add a note to and share a link I see on Twitter as I am a link that’s among my RSS feeds.

The solution was simple: continue using Pinboard, but automate the posting of bookmarked links to my blog. But not every link, just the ones I want to share. Pinboard makes this stupid easy, because (1) you can tag your saved bookmarks with keywords, and (2) Pinboard generates a separate RSS feed for every tag. In other words, Pinboard can generate an RSS feed of the links I want to share, and I can use a WordPress plugin to monitor that RSS feed and grab its posts.

Here’s the step-by-step process:

  1. Add a link to Pinboard. However I add a bookmark—via browser bookmarklet, the Pinner app on my phone, even via email—I have the option to add a description. This becomes my annotation.
  2. Then, if I want the link to appear on my link blog, I tag it “links.”
  3. Pinboard creates an RSS feed for bookmarks tagged with links.
  4. Next, the FeedWordPress plugin on links.samplereality.com grabs the feed (sketched below) and posts it.
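To make step 3 concrete: Pinboard publishes a per-tag RSS feed, and that feed is all FeedWordPress needs. Here’s a rough sketch in Python of what the plugin is consuming. The username is a placeholder, and the feed URL pattern is my best understanding of Pinboard’s format, so double-check it against Pinboard’s own documentation.

```python
import feedparser

# Pinboard's per-tag RSS feed -- roughly what FeedWordPress polls.
# USERNAME is a placeholder; verify the URL pattern against Pinboard's docs.
FEED = "https://feeds.pinboard.in/rss/u:USERNAME/t:links/"

for entry in feedparser.parse(FEED).entries:
    print(entry.title)        # bookmark title -> becomes the post title
    print(entry.link)         # bookmarked URL -> what the post title links to
    print(entry.description)  # Pinboard "Description" field -> the annotation
```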

A few notes:

  • I configured FeedWordPress so that the title of each new RSS feed item links back to the original article. The downside to this is that each new link/note is not a separate post; the upside is that links to the original source are right there, easy to find and click.
  • My link blog is technically a separate blog from my main blog (what you’re reading now). There were a few reasons for this. One, I didn’t want every new annotated bookmark crowding out my regular posts, or worse, clogging up the inboxes of people who subscribe to my posts via email. Two, I wanted the link blog to have a theme of its own. Three, when I search my link blog, I can be sure it’s only searching my bookmarks and not my blog posts.

So that’s it: my new link blog.

Bonus Content! I also set up Zapier to post my annotated bookmarks to Twitter as they come in. Basically, the free version of Zapier (which is similar to If This Then That) checks my Pinboard links feed every 15 minutes, and when something new appears, it posts the link, title, and description to Twitter.
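If you’d rather not rely on Zapier, a small script can do roughly the same job. This is a hedged, do-it-yourself sketch, not what Zapier actually runs: it polls the same Pinboard feed and tweets new items. The credentials and username are placeholders, and a real version would persist the list of already-posted links between runs.

```python
import time
import feedparser
import tweepy

# DIY stand-in for the Zapier zap: poll the Pinboard "links" feed every
# 15 minutes and tweet anything new. Credentials and USERNAME are placeholders.
FEED = "https://feeds.pinboard.in/rss/u:USERNAME/t:links/"
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

seen = set()  # a real version would save this to disk between runs
while True:
    for entry in feedparser.parse(FEED).entries:
        if entry.link not in seen:
            seen.add(entry.link)
            tweet = f"{entry.title} {entry.link} {entry.description}"
            api.update_status(tweet[:280])  # crude truncation to tweet length
    time.sleep(15 * 60)  # check every 15 minutes, like the free Zapier tier
```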

I once read that NPR uses a digital strategy they call COPE. Which means Create Once, Publish Everywhere.

I like to think of my Pinboard > Blog > Twitter system as DOPE. Draft Once, Post Everywhere.

WRI 101: Monsters

The mob of angry townspeople in My Favorite Thing is Monsters

Every so often I have an opportunity to teach a section of Davidson College’s first year writing course, WRI 101. It’s the only required class that all Davidson students take, but each section is shaped around a different topic. In Fall 2018 topics will range from “Writing about Modern Physics and Technology” (Section A) to “Monsters” (Section Y). In between are classes devoted to democracy, medicine, Africa, and much more. In the past I’ve taught a WRI 101 course focused on graphic novels and another on toys and games. But this fall, I’m the guy behind Section Y, i.e. Monsters.

Why monsters? Because horror is the literary genre best-suited for our scary times. And to that end, I’ve decided to teach only 21st century works. This means I could leave behind the old standards like Frankenstein and Dracula that appear on almost every monster syllabus. I also decided that each of my works would somehow be reworking the genre. Here’s the list of major texts (which will be supplemented with key theoretical readings as well as short stories, games, and films like Get Out):

  • Tananarive Due’s The Good House (2003) reworks the haunted house;
  • Colson Whitehead’s Zone One (2011) reworks the zombie apocalypse;
  • Stephen Graham Jones’ Mongrels (2016) reworks werewolves;
  • Emil Ferris’s My Favorite Thing is Monsters (2017) reworks, wow, everything. This graphic novel is a powerful metatext about the role of monsters in social life, drawn from the point of view of a young girl who sees herself as a monster on the margins of society. The mob of angry townspeople in the drawing above appears early in the graphic novel.

You can see from the list that I also leave behind the usual suspects synonymous with horror. The Stephen Kings and the like. Now more than ever it is critical to read, watch, and play horror created from perspectives other than those of cis white men. The powerful race and gender implications of monsters come into sharp focus with this approach. I’ll share the syllabus when it’s finalized, but for now, here’s the course description:

WRI 101: Monsters

Ghosts. Zombies. Vampires and werewolves. What is it about monsters? Why do they both terrify and delight us? Whether it’s the haunted house in Tananarive Due’s The Good House (2003), Kanye’s monster persona in My Beautiful Dark Twisted Fantasy (2010), the walking dead in Colson Whitehead’s Zone One (2011), Native American werewolves in Stephen Graham Jones’ Mongrels (2016), or even white suburbia in Get Out (2017), monsters are always about more than just spine-tingling horror. This writing class explores monstrosity in the 21st century, paying particular attention to intersections with race and gender. Through a sequence of writing projects we will explore a central question: what do monsters mean? Our first project asks students to reflect on the home as a space of monstrosity. Our second and third projects address the idea of the monstrous other. Our final project uses contemporary literary and media theory to understand how monsters expose the limits of what counts as human. Along the way, we’ll experiment with our own little Frankenstein-like compositional monsters.

What about Blogging Keeps Me from Blogging

Yesterday in Facebook Killed the Feed I highlighted the way Facebook and Twitter have contributed to the decline of scholarly blogging. In truth though, those specific platforms can’t take all the blame. There are other reasons why academic bloggers have stopped blogging. There are systemic problems, like lack of time in our ever more harried and bureaucratically-burdened jobs, or online trolling, doxxing, and harassment that make having a social media presence absolutely miserable, if not life-threatening.

There are also problems with blogging itself as it exists in 2018. I want to focus on those issues briefly now. This post is deeply subjective, based purely on an inventory of my own half-articulated concerns. What about blogging keeps me from blogging?

  1. Images. Instagram, Facebook, and the social media gurus have convinced us that every post needs to have an image to “engage” your audience. No image, no engagement. You don’t want to be that sad sack blogger writing with only words. Think of your SEO! So, we feel pressure to include images in our posts. But nothing squelches the mood to write more than hunting down an image. Images are a time suck. Honestly, just the thought of finding an appropriate image to match a post is enough to make me avoid writing altogether.
  2. Length. I have fallen into the length trap. Maybe you have too. You know what I’m talking about. You think every post needs to be a smart 2,000-word missive. Miniature scholarly essays, like the post I wrote the other week about mazes in interactive fiction. What happened to my more playful writing, where I was essentially spitballing random ideas, like my plagiarism allegations against Neil Gaiman? And what about throwaway posts like my posts on suburbia or concerts? To become an active blogger again, forget about length.
  3. Timing. Not the time you have or don’t have to write posts, but the time in between posts. Years ago, Dan Cohen wrote about “the tyranny of the calendar” with blogging, and it’s still true. The more time that passes in between posts, the harder it is to start up again. You feel an obligation for your comeback blog post to have been worth the wait. What pressure! So you end up waiting even longer to write. Or worse, you write and write, leaving dozens of mostly-done posts in your draft folder that you never publish. Like some indie band that feels the weight of the world with its sophomore effort and ends up spending years in the studio. The solution is to be less like Daft Punk and more like Ryan Adams.
  4. WordPress. Writing with WordPress sucks the joy out of writing. If you blog with WordPress you know what I’m talking about. WordPress’s browser composition box is a visual nightmare. Even in full screen mode it’s a bundle of distractions. WordPress’s desktop client has promise, but mine at least frequently has problems connecting to my server. I guess I’d be prepared to accept that’s just how writing online has to be, but my experience on Medium has opened my eyes. I just want to write and see my words—and only my words—on the screen. Whatever else Medium fails at, it has a damn fine editor.

Individually, there are solutions to each of these problems. But taken together—plus other sticking points I know I’m forgetting—there’s enough accumulated friction to make blogging very much a non-trivial endeavor.

It doesn’t have to be. What are your sticking points when it comes to blogging? How have you tried to overcome them?

And if you say “markdown” you’re dead to me.

Facebook Killed the Feed

There’s a movement to reclaim blogging as a vibrant, vital space in academia. Dan Cohen, Kathleen Fitzpatrick, and Alan Jacobs have written about their renewed efforts to have smart exchanges of ideas take place on blogs of their own. Rather than on, say, Twitter, where well-intentioned discussions are easily derailed by trolls, bots, or a careless ¯\_(ツ)_/¯. Or on Facebook, where Good Conversations Go to Die™.

Kathleen recently put it more diplomatically:

An author might still blog, but (thanks to the post-Google-Reader decline in RSS use) ensuring that readers knew that she’d posted something required publicizing it on Twitter, and responses were far more likely to come as tweets. Even worse, readers might be inspired to share her blog post with their friends via Facebook, but any ensuing conversation about that post was entirely captured there, never reconnecting with the original post or its author. And without those connections and discussions and the energy and attention they inspired, blogs… became isolated. Slowed. Often stopped entirely.

You can’t overstate this point about the isolation of blogs. I’ve installed FreshRSS on one of my domains (thanks to Reclaim Hosting’s quick work), and it’s the first RSS reader I’ve felt good about in years—since Google killed Google Reader. I had Tiny Tiny RSS running, but the interface was so painful that I actively avoided it. With FreshRSS on my domain, I imported a list of the blogs I used to follow, pruned them (way too many have linkrotted away, proving Kathleen’s point), and added a precious few new blogs. FreshRSS is a pleasure to check a couple of times a day.

Now, if only more blog posts showed up there. Because what people used to blog about, they now post on Facebook. I detest Facebook for a number of reasons and have gone as far as you can go without deleting your account entirely (I unfriended everyone, stayed that way for six months, and then slowly built up a new friend network that is a fraction of what it used to be…but they’re all friends, family, or colleagues I don’t mind seeing a pic of my kids).

Anyway, what I want to say is, yes, Google killed off Google Reader, the most widely adopted RSS reader and the reason so many people kept up with blogs. But Facebook killed the feed.

The kind of conversations between academics that used to take place on blogs still take place, but on Facebook, where the conversations are often locked down, hard to find, and written in a distractedsocialmediamultitaskingway instead of thoughtful and deliberative. It’s the freaking worst thing ever.

You could say, Well, hey, Facebook democratized social media! Now more people than ever are posting! Setting aside the problems with Facebook that have become obvious since November 2016, I counter this with:

No. Effing. Way.

Facebook killed the feed. The feed was a metaphorical thing. I’m not talking about RSS feeds, the way blog posts could be detected and read by offsite readers. I’m talking about sustenance. What nourished critical minds. The feed. The food that fed our minds. There’s a “feed” on Facebook, but it doesn’t offer sustenance. It’s empty calories. Junk food. Junk feeds.

To prove my point I offer the following prediction. This post, which I admit is not exactly the smartest piece of writing out there about blogging, will be read by a few people who still use RSS. The one person who subscribes to my posts by email (Hi Mom!) might read it. Maybe a dozen or so people will like the tweet where I announce this post—though who knows if they actually read it. And then, when I drop a link to this post on Facebook, crickets. If I’m lucky, maybe someone sticks an emoji on it before liking the latest Instant Pot recipe that shows up next in their “feed.”

That’s it. Junk food.

The Maze and the Other in Interactive Fiction
On Labyrinths, the Infinite, and the Compass

Albayzin from Alhambra

I’m spending July in Cádiz, Spain, with my family and a bunch of students from Davidson College. The other weekend we visited Granada, home of the Alhambra. Built by the last Arab dynasty on the Iberian Peninsula in the 13th century, the Alhambra is a stunning palace overlooking the city below. The city of Granada itself—like several other cities in Spain—is a palimpsest of Islamic, Jewish, and Christian art, culture, and architecture.

Take the streets of Granada. In the Albayzín neighborhood the cobblestone streets are winding, narrow alleys, branching off from each other at odd angles. Even though I’ve wandered Granada several times over the past decade, it’s easy to get lost in these serpentine streets. The photograph above (Flickr source) of the Albayzín, shot from the Alhambra, barely conveys the maze that these medieval Muslim streets form. The Albayzín is a marked contrast to the layout of historically Christian cities in Spain. Influenced by Roman design, a typical Spanish city features a central square—the Plaza Mayor—from which streets extend out at right angles toward the cardinal points of the compass. Whereas the Muslim streets are winding and organic, the Christian streets are neat and angular. It’s the difference between a labyrinth and a grid.

It just so happened that on our long bus ride to Granada I finished playing Anchorhead, Michael Gentry’s monumental work of interactive fiction (IF) from 1998. Even if you’ve never played IF, you likely recognize it when you see it, thanks to the ongoing hybridization of geek culture with pop culture. Entirely text-based, these story-games present puzzles and narrative situations that you traverse through typed commands, like GO NORTH, GET LAMP, OPEN JEWELED BOX, etc. As for Anchorhead, it’s a Lovecraftian horror story with cosmic entities, incestuous families, and the requisite insane asylum. Anchorhead also includes a mainstay of early interactive fiction: a maze.

Two of them in fact.

It’s difficult to overstate the role of mazes in interactive fiction. Will Crowther and Don Woods’ Adventure (or Colossal Cave) was the first work of IF in the mid-seventies. It also had the first maze, a “maze of twisty little passages, all alike.” Later on Zork would have a maze, and so would many other games, including Anchorhead. Mazes are so emblematic of interactive fiction that the first scholarly book on the subject references Adventure’s maze in its title: Nick Montfort’s Twisty Little Passages: An Approach to Interactive Fiction (MIT Press, 2003). Mazes are also singled out in the manual for Inform 7, a high-level programming language used to create many contemporary works of interactive fiction. As the official Inform 7 “recipe book” puts it, “Many old-school IF puzzles involve journeys through the map which are confused, randomised or otherwise frustrated.” Mazes are now considered passé in contemporary IF, but only because they were overused for years to convey a sense of disorientation and anxiety.

And so, there I was in Granada having just played one of the most acclaimed works of interactive fiction ever. It occurred to me then, among the twisty little passages of Granada, that a relationship exists between the labyrinthine alleys of the Albayzín and the way interactive fiction has used mazes.

See, the usual way of navigating interactive fiction is to use cardinal directions. GO WEST. SOUTHEAST. OPEN THE NORTH DOOR. Navigating by the eight points of the compass rose is an IF convention that, like mazes, goes all the way back to Colossal Cave. The Inform 7 manual briefly acknowledges this convention in its section on rooms:

In real life, people are seldom conscious of their compass bearing when walking around buildings, but it makes a concise and unconfusing way for the player to say where to go next, so is generally accepted as a convention of the genre.

Let’s dig into this convention a bit. Occasionally, it’s been challenged (Aaron Reed’s Blue Lacuna comes to mind), but for the most part, navigating interactive fiction with cardinal directions is simply what you expect to do. It’s essentially a grid system that helps players mentally map the game’s narrative spaces. Witness my own map of Anchorhead, literally drawn on graph paper as I played the game (okay, I drew it on OneNote on an iPad, but you get the idea):

My partial map of Anchorhead, drawn by hand

And when IF wants to confuse, frustrate, or disorient players, along comes the maze. Labyrinths, the kind evoked by the streets of the Albayzín, defy the grid system of Western logic. Mazes in interactive fiction are defined by the very breakdown of the compass. Directions don’t work anymore. The maze evokes otherness by defying rationality.
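To make the contrast concrete, here’s a toy sketch in Python (rather than Inform 7) of the two spatial logics: a conventional IF world model wires rooms together by compass direction, while an old-school maze deliberately breaks that mapping. This is purely illustrative; the rooms and behavior are invented, not drawn from Anchorhead or any actual game.

```python
import random

# Conventional IF spatial model: rooms form a grid-like graph whose exits
# are keyed to compass directions, so the player can map the space.
rooms = {
    "plaza":  {"north": "church", "east": "market"},
    "church": {"south": "plaza"},
    "market": {"west": "plaza"},
}

def go(room, direction):
    return rooms[room].get(direction, room)  # no exit? stay where you are

# Old-school maze: every location looks alike and the exits no longer map
# reliably onto the compass, so the player's mental map breaks down.
maze = ["You are in a maze of twisty little passages, all alike."] * 5

def go_maze(location, direction):
    return random.randrange(len(maze))  # the direction is effectively meaningless
```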

When the grid/maze dichotomy of interactive fiction is mapped onto actual history—say, the city of Granada—something interesting happens. You start to see the narrative trope of the maze as an essentially Orientalist move. I’m using “Orientalist” here in the way Edward Said uses it, a name for discourse about the Middle East that mysticizes yet disempowers the culture and its people. As Said describes it, Orientalism is part of a larger project of dominating that culture and its people. Orientalist tropes of the Middle East include ahistorical images that present an exotic, irrational counterpart to the supposed logic of European modernity. In an article in the European Journal of Cultural Studies about the representation of Arabs in videogames, Vít Šisler provides a quick list of such tropes. They include “motifs such as headscarves, turbans, scimitars, tiles and camels, character concepts such as caliphs, Bedouins, djinns, belly dancers and Oriental topoi such as deserts, minarets, bazaars and harems.” In nearly every case, for white American and European audiences these tropes provide a shorthand for an alien other.

My argument is this:

  1. Interactive fiction relies on a Christian-influenced, Western European-centric sense of space. Grid-like, organized, navigable. Mappable. In a word, knowable.
  2. Occasionally, to evoke the irrational, the unmappable, the unknowable, interactive fiction employs mazes. The connection of these textual mazes to the labyrinthine Middle Eastern bazaar that appears in, say, Raiders of the Lost Ark, is unacknowledged and usually unintentional.
  3. We cannot truly understand the role that mazes play vis-à-vis the usual Cartesian grid in interactive fiction unless we also understand the interplay between these dissimilar ways of organizing spaces in real life, which are bound up in social, cultural, and historical conflict. In particular, the West has valorized the rigid grid while looking with disdain upon organic irregularity.

Notwithstanding exceptions like Lisa Nakamura and Zeynep Tufekci, scholars of digital media in the U.S. and Europe have done a poor job of looking beyond their own doorsteps to understand digital culture. Case in point: the “Maze” chapter of 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (MIT Press, 2012), where my co-authors and I address the significance of mazes, both in and outside of computing, with nary a mention of non-Western or non-Christian labyrinths. In hindsight, I see the Western-centric perspective of this chapter (and others) as a real flaw of the book.

I don’t know why I didn’t know at the time about Laura Marks’ Enfoldment and Infinity: An Islamic Genealogy of New Media Art (MIT Press, 2010). Marks doesn’t talk about mazes per se, but you can imagine the labyrinths of Albayzín or the endless maze design generated by the 10 PRINT program as living enactments of what Marks calls “enfoldment.” Marks sees enfoldment as a dominant feature of Islamic art and describes it as the way image, information, and the infinite “enfold each other and unfold from one another.” Essentially, image gives way to information which in turn is an index (an impossible one though) to infinity itself. Marks argues that this dynamic of enfoldment is alive and well in algorithmic digital art.

With Marks, Granada, and interactive fiction on my mind, I have a series of questions. What happens when we shift our understanding of mazes from non-Cartesian spaces meant to confound players to transcendental expressions of infinity? What happens when we break the convention in interactive fiction by which grids are privileged over mazes? What happens when we recognize that even with something as non-essential to political power as a text-based game, the underlying procedural system reinscribes a model that values one valid way of seeing the world over another, equally valid way of seeing the world?

Header Image: Anh Dinh, “Albayzin from Alhambra” on Flickr (August 10, 2013). Creative Commons BY-NC license.