“Data” is often considered the domain of scientists and statisticians. But with the proliferation of databases across nearly all aspects of modern life, data has become an everyday concern. Bank accounts, FaceTime records, Snapchat posts, Xbox leaderboards, CatCard purchases, your DNA—at the heart of all of them is data. To live today is to inhale and exhale data, wherever you go, online and off. And even as data has become a fixture of daily life, it has also become the subject of—and vehicle for—literary and artistic critiques.
This course explores the role of data and databases in contemporary culture, with an eye toward understanding how data shapes the way we perceive—and misperceive—the world. After historicizing the origins of modern databases in nineteenth-century industrialization and census efforts, we will survey our present-day data landscape, considering data mining, data visualization, and database art. We will encounter nearly evangelical enthusiasm for “Big Data” but also rigorous criticisms of what we might call naïve empiricism. The ethical considerations of data collection and analysis will be at the forefront of our conversation, as will issues surrounding privacy and surveillance.
The Expressive Work of Spaces of Torture in Videogames
At the 2014 MLA conference in Chicago I appeared on a panel called “Torture and Popular Culture.” I used the occasion to revisit a topic I had written about several years earlier—representations of torture-interrogation in videogames. My comments are more suggestive than conclusive, and I look forward to developing these ideas further.
Today I want to talk about spaces of torture—dungeons, labs, prisons—in contemporary videogames and explore the way these spaces are not simply gruesome narrative backdrops but are key expressive features in popular culture’s ongoing reckoning with modern torture.
A tentative syllabus for DIG 350: History & Future of the Book, a course just approved for the Digital Studies program at my new academic home, Davidson College. Many thanks to Ryan Cordell, Lisa Gitelman, Kari Kraus, Jessica Pressman, Peter Stallybrass, and many others, whose research and classes inspired this one.
“Non-consumptive research” is the term digital humanities scholars use to describe the large-scale analysis of texts—say, topic modeling millions of books or data-mining tens of thousands of court cases. In non-consumptive research, a text is not read by a scholar so much as it is processed by a machine. The phrase frequently appears in the context of the long-running legal debate between various book digitization efforts (e.g. Google Books and HathiTrust) and publishers and copyright holders (e.g. the Authors Guild). For example, in one of the preliminary Google Books settlements, non-consumptive research is defined as “computational analysis” of one or more books “but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within.” Non-consumptive reading is not reading in any traditional way, and it certainly isn’t close reading. Examples of non-consumptive research that appear in the legal proceedings (the implications of which are explored by John Unsworth) include image analysis, text extraction, concordance development, citation extraction, linguistic analysis, automated translation, and indexing.