“Deep” Textual Hacks
A computational and pedagogical workshop

I put “deep” in scare quotes but really, all three words should have quotes around them—“deep” “textual” “hacks”—because all three are contested, unstable terms. The workshop is hands-on, but I imagine we’ll have a chance to talk about the more theoretical concerns of hacking texts. The workshop is inspired by an assignment from my Hacking, Remixing, and Design class at Davidson, where I challenge students to create works of literary deformance that are complex, intense, connected, and shareable. (Hey, look, more contested terms! Or at the very least, ambiguous terms.)

We don’t have much time for this workshop. In this kind of constrained setting I’ve found it helps to begin with templates, rather than creating something from scratch. I’ve also decided we’ll steer clear of Python—what I’ve been using recently for my own literary deformances—and work instead in the browser. That means JavaScript. Say what you want about JavaScript, but you can’t deny that Daniel Howe’s RiTa library is a powerful tool for algorithmic literary play. But we don’t even need RiTa for our first “hack”:

What’s great about “Taroko Gorge” is how easy it is to hack. Dozens have done it, including me. All you need is a browser and a text editor. Nick Montfort never explicitly released the code of “Taroko Gorge” under a free software license, but it’s readily available to anyone who views the HTML source of the poem’s web page. Lean and elegantly coded, with self-evident algorithms and a clearly demarcated word list, the endless poem lends itself to reappropriation. Simply altering the word list (the paradigmatic axis) creates an entirely different randomly generated poem, while the underlying sentence structure (the syntagmatic axis) remains the same.
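
To see why swapping out the word list is enough, here is a schematic sketch of the same idea in plain JavaScript. The arrays and the line template below are hypothetical stand-ins, not Montfort’s actual code, but they share its basic shape: word lists feeding a fixed line-building routine.

```javascript
// Schematic "Taroko Gorge"-style generator (a stand-in, not Montfort's code).
// The arrays are the paradigmatic axis: change their contents, get a new poem.
// The template inside line() is the syntagmatic axis: it never changes.

const nouns = ["mist", "crag", "stream", "shadow", "stone"];
const verbs = ["sweeps", "dwells", "dreams", "frames", "roams"];
const adjectives = ["sinuous", "hazy", "cold", "veiled", "dim"];

// Pick a random element from a word list.
function choose(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// One line of the endless poem: fixed sentence shape, random words in the slots.
function line() {
  return `The ${choose(adjectives)} ${choose(nouns)} ${choose(verbs)}.`;
}

// Print a short stanza; in the browser you would append lines to the page
// on a timer to get the endless scroll.
for (let i = 0; i < 4; i++) {
  console.log(line());
}
```

Replace the arrays with your own vocabulary and you have a different poem; the line-building routine, and with it the syntax, stays exactly the same.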

The next textual hack template we’ll work with is my own:

This little generator is essentially half of @_LostBuoy_. It generates Markov chains from a source text, in this case, Moby-Dick. What’s a Markov chain? It’s a probabilistic chain of n-grams, that is, sequences of words. The algorithm examines a source text and figures out which word or words are likely to follow a given word or words. The “n” in n-gram refers to the number of words the algorithm considers at a time. For example, a bigram Markov chain calculates which word is likely to follow a given word; a trigram chain looks at the previous pair of words, and so on. Using this technique, you can string together sentences, paragraphs, or entire novels. The higher the “n,” the more likely the output is to resemble the source material—and by extension, sensible English. Whereas “Taroko Gorge” plays with the paradigmatic axis (substitution), Markov chains play along the syntagmatic axis (sentence structure). There are various ways to calculate Markov chains; creating a Markov chain generator is even a common Intro to Computer Science assignment. I didn’t build my own generator, and you don’t have to either. I use RiTa, a JavaScript (and Processing) library that works in the browser and comes with helpful documentation.
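
Just to make the mechanism concrete, here is a minimal bigram sketch in plain JavaScript. It is an illustration of the idea only; in the workshop we will lean on RiTa’s Markov generator rather than writing our own.

```javascript
// Minimal bigram Markov chain sketch (illustration only; the workshop uses
// RiTa's Markov generator instead of hand-rolled code like this).

// 1. Build a table mapping each word to the words that follow it in the source.
function buildChain(text) {
  const words = text.split(/\s+/).filter(Boolean);
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    const current = words[i];
    const next = words[i + 1];
    (chain[current] = chain[current] || []).push(next);
  }
  return chain;
}

// 2. Walk the table: start somewhere, then repeatedly pick a random
//    continuation from the words that actually followed the current word.
function generate(chain, start, length = 20) {
  let word = start;
  const output = [word];
  for (let i = 0; i < length; i++) {
    const nextOptions = chain[word];
    if (!nextOptions || nextOptions.length === 0) break;
    word = nextOptions[Math.floor(Math.random() * nextOptions.length)];
    output.push(word);
  }
  return output.join(" ");
}

// Tiny demo with a stand-in source text (swap in Moby-Dick for real output).
const source = "Call me Ishmael. Some years ago never mind how long precisely";
const chain = buildChain(source);
console.log(generate(chain, "Call"));
```

Raising the “n” simply means keying that table on pairs or triples of words instead of single words, which is why higher-order chains sound more like the source.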

The final deformance is a web-based version of the popular @JustToSayBot:

And I have a challenge here: thanks to the 140-character limit of Twitter, the bot version of this poem is missing the middle verse. The web has no such limit, of course, so nothing is stopping workshop participants from adding the missing verse. Such a restorative act of hacking would be, in a sense, a de-deformance, that is, making my original deformance less deformative, more like the original.
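
For anyone who wants to attempt the restoration, here is one hypothetical way it could look. This is not the bot’s actual code; the templates, slot names, and substitution lists below are my own stand-ins, assuming a Mad Libs-style generator that fills slots in each stanza.

```javascript
// Hypothetical sketch only: one way a web version might store the poem so the
// missing stanza can be restored. Each stanza is a template; {slots} get
// filled with random substitutions.

const stanzas = [
  "I have eaten\nthe {nouns}\nthat were in\nthe {places}",
  // The stanza the Twitter bot drops to fit 140 characters; on the web,
  // the de-deformance is as simple as splicing it back into the array:
  "and which\nyou were probably\nsaving\nfor {events}",
  "Forgive me\nthey were {tastes}\nso {qualities}\nand so {qualities}",
];

const substitutions = {
  nouns: ["plums", "bicycles", "secrets"],
  places: ["icebox", "garage", "cloud"],
  events: ["breakfast", "the meeting", "later"],
  tastes: ["delicious", "heavy", "forbidden"],
  qualities: ["sweet", "cold", "quiet"],
};

// Pick a random element from a substitution list.
function choose(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Fill every {slot} in a stanza template with a random substitution.
function fill(template) {
  return template.replace(/\{(\w+)\}/g, (_, key) => choose(substitutions[key]));
}

console.log(stanzas.map(fill).join("\n\n"));
```

If the real generator is organized anything like this, restoring the middle verse amounts to adding one more template to the array.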
