The Digital and the Humanities: For What Else Shall We Apologize? – Arnaud Zimmern

The insistently apologetic tone of Goldstone and Underwood’s piece sets the stage for my questions about this week’s readings. For how long and for what institutional reasons will Digital Humanists persist in apologizing for their unorthodox approaches to literary data? Can we predict a moment or a set of conditions in which a paradigm change might finally come about, whereby a piece of quantitative analysis could be published in PMLA without major hemming-and-hawing or sincere contrition? And finally, is it really a feature (not a bug) of good, skeptical, humanistic inquiry (as Timothy Burke defines it and as Tressie Cottom demands it) that Humanists should pay as much attention to the methodological vices of the texts they produce as they do to the literary virtues of the texts they study?

Before I offer my responses to those questions – reading Goldstone and Underwood’s piece in light of Burke, Finn, and Cottom – let me issue a two-part clarification (or apology, I suppose).

First, I don’t mean to be theoretical and hand-wavy by posing questions about scholarly methodology writ large and about the wide, open, uncertain future of the humanities. I hope rather that this will prove practical and lead us to draw in class on the conversations we have already been having, like our conversation about the feasibility of a universal digital edition of Shakespeare that could be everything for everyone, or about video games as scholarly editions.

Secondly, I also don’t mean to be denominational by drawing a web of metaphors that connects Goldstone and Underwood’s apologetic moves to the language of Christian atonement (contrition, vice, virtue, etc.). I am simply trying to point out that the ethical and anthropological values that our readings unanimously present as those of secular humanistic inquiry proper are couched in a discourse of human (im)perfectibility, an imperative of forgiveness, and a suspended messianic promise of Closure/Truth that have specific historical and cultural origins. Whether those origins and bases lie in Shakespeare’s own period – as the Reformation theorized total human depravity and the Scientific Revolution theorized human perfectibility – or earlier in the medieval period, or perhaps later in the Enlightenment, matters little for us in this class. What matters is whether we wish to continue operating the academic-industrial vessel of “the Humanities” on those bases or whether we want to invest in developing alternatives. Will we continue to work with a scholarly ethos of apology, forgiveness, and incremental purification that says, “Ok, Goldstone and Underwood’s analysis wasn’t perfect this time around, but that’s to be expected; we can be grateful for this much, and the next version will be better, and the next one after that…” until we either find an incomplete but satisfactory solution or get bored with the question, whichever comes first? Or will we stop and say: “Wait, we’ve been through this forgiving-and-refining cycle before – it’s the same story whether we do it with quantitative methods or with qualitative methods – and we always end up blaming the final unknowability of things on our notion of human nature as fundamentally broken. Why don’t we invest our energy elsewhere?”

Personally, I’m torn on the matter: I agree with the values and aspirations of the first option, but my training as a curious humanist makes me keen to explore and pursue alternatives premised on the idea that there is no fixed human nature. The ultimate practical question, however, is not so much which route you or I should pick, but which route the academic-industrial complex will pick. This has been a lengthy caveat, but I did want to make quite clear what I understand to be the anthropological stakes of the quantitative or “data-logical” turn in the Humanities.

QUESTION 1: For how long and for what institutional reasons will Digital Humanists persist in excusing themselves for their unorthodox approaches to literary data?

Short response: For as long as the standard-bearers and gatekeepers of humanist knowledge (cultural institutions, taste-makers, teachers) continue to believe that humans are creatures capable of unfathomable complexity but incapable of transmitting that complexity fully through language.

Long response: Tressie Cottom gives us a way to approach this question that I think is worth summarizing. The framing claim of her paper is that the “data-logical turn” anxiously bubbling up in literary departments is analogous to what has already effectively overtaken sociology departments. That “contamination” (my word, not hers) illustrates how a larger academic-industrial alliance is establishing an intellectual hegemony that avoids major theoretical questions about gender, race, humanity, etc. Citing Karabel and Halsey, Cottom concludes that it “would be naive not to recognize that state patronage has contributed to promoting atheoretical forms of methodological empiricism and has given less encouragement to other approaches,” like the very small-scale and “slow” approaches that humanists specialize in. The discrepancy between the two approaches and modes of knowledge (or what is claimed to be two distinct modes of knowledge) is massive. For the methodological empiricists, Cottom argues, knowledge is “data” or “quanta,” infinitely mobile and shapeable, transposable without deformation to any human intelligence (or artificial intelligence); the fallibility of language does not matter because “quanta” are universally transposable, transportable, and apolitical. They need not be theorized. For humanists, knowledge remains invariably “capta,” i.e. something that needs to be experienced and interpreted as embedded within a cultural-linguistic-social-political context, first and foremost within a language.

When Cottom further cites Miriam Posner to say that “most of the data and data models we’ve inherited [from business applications] deal with structures of power, like gender and race, with a crudeness that would never pass muster in a peer-reviewed humanities publication,” she points us to the importance of language/discourse. We’re all familiar with the hard work that cultural anthropologists and gender theorists have pursued over the last few decades to undo an essentialist, biological-materialist understanding of male vs. female binaries in favor of linguistic constructions of gender that spread across a spectrum. Businesses, however, as they set about investigating big-data trends, build data-parsing tools that make invisible assumptions and simplifications about the political-cultural phenomena of gender, race, etc. They seldom consult Judith Butler. If sociology and the humanities adopt those tools in turn without pausing to consider the built-in empiricist-materialist assumptions, both disciplines risk perpetuating theories of gender that scholarly consensus no longer widely supports. Cottom’s warning against such algorithmic black-boxing dovetails with Ed Finn’s endorsement of “fistulated algorithms,” but as a black female scholar, she is rightly more suspicious of the hegemonic motivations behind the rise of DH in the academy: “I suspect that we get a quantitative textual analysis that is very popular with powerful actors precisely because it does not theorize power relations. Given our current political economy, especially in the rapidly corporatized academy, one should expect great enthusiasm for distant reading and acritical theorizing.”

So for how long and for what institutional reasons will digital humanists be required to apologize for what they’re trying to do with language? Well, for as long as we, the arbiters of cultural knowledge, continue to believe that language is a political power-construct that only fallibly represents the modes and possibilities of human existence. As long as “fallible,” “flawed,” and “politically determined” remain the invariant qualities of our definition of language, Digital Humanists will be asked to apologize for importing an empiricist methodology that thinks of its language, mathematics, as universally transposable and neutral rather than as what Cottom claims it is: politically contingent and very useful for avoiding questions of prejudice and marginalization.

QUESTION 2: Can we predict a moment or a set of conditions in which a paradigm change might finally come about, whereby a piece of quantitative analysis could be published in PMLA without hemming-and-hawing or sincere contrition?

Response: When Goldstone and Underwood shrug off the aura of scientific objectivity that their numbers and graphs and percentages impart to them and insist instead that topic modeling, albeit quantitative, is a fundamentally interpretive and “humanely” limited tool, they really do two things. First, they appeal to the incompleteness of human knowledge that Timothy Burke calls “the one universal that we might permit ourselves to accept without apology.” In so doing they reveal that they are on a mission to endear topic modeling and its interpretive instability, illegibility, and slowness to the healthy skeptics in English departments. Second, they re-articulate Ed Finn’s set of conditions under which quantitative analysis might enter the common parlance of literary scholarship. The first and more obvious requirement is that numbers, percentages, and computations lose their rhetorical aura of scientific objectivity and join mere language as elements of discourse requiring interpretation and context. The second is not just the advent of “fistulated algorithms” but of the “algorithmic literacy” Ed Finn invites us to foster in ourselves.

But a further condition, unmentioned in our reading, is a re-equilibration, perhaps even a toppling, of the hierarchy of modes of knowledge. What I mean is this. Goldstone and Underwood are hard at work confirming their quantitative results by cross-checking their model against well-attested “analog” histories of theory and criticism. The standard-bearer of accuracy or “truth,” in their situation, is the “analog Humanities.” As “upstart crows,” Goldstone and Underwood have to couch their claims to validity in the ethos and authority of those “analog” histories. But what happens when further quantitative studies begin to couch their authority in their ability to repeat and nuance Goldstone and Underwood’s work, disregarding the old “analog” histories? Are they “wrong” or invalid for that reason alone, or should we be ready to accept quantitative findings that do not anchor themselves in our usual historical narratives? Should we be ready to accept findings built unapologetically on accumulated quantitative (not necessarily un-interpretive, but quantitative) models? As long as we cannot answer yes to those last questions, we will not see a DH piece that isn’t hard-pressed to validate itself methodologically.

QUESTION 3: Is it really a feature (not a bug) of good, skeptical, humanistic inquiry (as Timothy Burke defines it and as Tressie Cottom demands it) that Humanists should pay as much attention to the methodological vices of the texts they produce as they do to the literary virtues of the texts they study?

I trust this question pushes everyone’s buttons and seems horribly pretentious, perhaps downright asinine, because it suggests we should be less attentive to our own assumptions and more myopic than we already are. If we take a leaf from Kieran Healy, a sociologist at Duke, we might, however, acknowledge that “for the problems facing Sociology [and thus the sociology of literature] at present, demanding more nuance typically obstructs the development of theory that is intellectually interesting, empirically generative, or practically successful” (1). Frankly, I do find myself asking in great frustration whether Goldstone and Underwood could have had more room to present their findings and theorize some causal explanations for the history of theory if a body of peer-reviewers (or a fear of peer-reviewers) hadn’t forced them to nuance every major claim or methodological innovation they stake out in the paper. This kind of nuance-policing does strike me as a major bug, not a feature, of current Humanities scholarship, and it testifies, quite palpably I think, to my concluding claim. If the Digital Humanities are condemned to apologizing for their quantitative methods, it has more to do with the Humanities than with the Digital. Digital Humanists must apologize because Humanists, par excellence, apologize – we’ve found few better ways than perpetual nuancing to think ourselves relevant and rigorous, and perhaps also (dixit Arthur Schopenhauer) few better ways to avoid getting bored.

Works Cited (besides assigned readings)

Healy, Kieran. “Fuck Nuance.” Forthcoming in Sociological Theory, January 2016.
https://kieranhealy.org/files/papers/fuck-nuance.pdf