Computational Literary Studies: What’s the Point?

In “The Computational Case against Computational Literary Studies,” Nan Z. Da levels a serious critique of, if not the whole discipline of digital humanities, then its golden child, computational literary studies (CLS). By CLS, Da means the use of computational and algorithmic tools for literary study, specifically “distant reading.” Da’s critique has the benefit of being clear and of voicing the suspicion many traditional literary scholars have of CLS: she argues that in CLS “what is robust is obvious…and what is not obvious is not robust” (601). Put bluntly, for Da, CLS basically entails counting words or word frequencies to make arguments, and this word counting either confirms what we already know (e.g., the dissimilarity between historical apocrypha and fiction or narratives) or suggests phenomena that are dubious (e.g., the devotional structure of Augustine’s Confessions) (615; 614). Part of the problem, as Da sees it, lies in the preparation of corpora for analysis: judgment calls abound in this process, and she argues that these decisions stack the deck for the results obtained (e.g., in defining what a haiku is in order to compare how prevalent the form is in East Asian poems) (619). In other words, Da is arguing that confirmation bias is a real problem for CLS.

While Da, as the title implies, intends to use computational methods to argue against CLS, some components of her argument are not computational but are nevertheless important. For example, Da ends the first paragraph of her article by affirming that “[t]here is a fundamental mismatch between the statistical tools that are used [in CLS] and the objects to which they are applied” (601). Narrowly read, this statement would mean that the tools are not appropriate for the corpora (objects). But her point, I think, is broader than that. Toward the end of the essay, Da claims that CLS work reduces literary studies to counting and states that “[i]n literary studies, there is no rationale for such reductionism; in fact, the discipline is about reducing reductionism” (638). On its face, this last statement seems true to me, but what I am trying to highlight is that she is making an implicit ontological argument: statistical tools come from and respond to a world (the natural sciences as well as a good portion of the social sciences) that is realist in its ontological mode and thus employs an objectivist epistemology (such as statistics). Hence, as Da states early on, there is a “fundamental mismatch” between statistics and literature, between tools for a world that is stable and an object of study that, while coherent, is contingent, ineffable, irreducible. That is why, to my mind, she ends the article by stating that the utility of computational textual analysis is rendered more or less ineffectual by literature and, “in particular, reading literature well” (639).

One response to Da’s provocative claims comes from Fotis Jannidis, who goes to some lengths to defend CLS. Jannidis argues persuasively that Da seems to want CLS to be able to explain a literary phenomenon in its totality and points out that a method cannot be ruled out just because it does not account for the complexity of a phenomenon as a whole (6). It’s not quite clear to me that Da demands this of CLS; instead, she seems to be responding to the rhetoric around CLS. That said, I think his point is well taken. Jannidis also argues, from a quantitative perspective, that Da has selected very few articles with which to analyze and indict a whole methodology; according to Jannidis, Da’s study is rife with selection bias (9-11). This point is also well taken. While Da does claim to analyze representative cases, Jannidis shows that she has focused on a small number of articles from the American academy, leaving out the work of European scholars (9-10).

Where Jannidis’s critique fails, I believe, is in how he deals with “complexity.” Jannidis challenges the notion that “literature is singularly complex” (3-4), which on its own seems…unfortunate. Literary art, by definition, resists unitary, stable readings. Metaphor, for example, attempts to describe an experience by invoking a completely different one. How metaphors work is not at all transparent, even if we are familiar with them and have little difficulty interpreting them. Likewise, irony, while common, is literally counterintuitive: in irony, we see a non-coincidence between words and their meaning, so much so that a speaker often means the opposite of what they say. That is not straightforward. In addition, Jannidis collapses the complexity of literature into the complexity of other disciplines, namely sociology and psychology (3). While it is true that sociology attempts to “describe whole societies” and psychology “tries to understand the psyche of individuals as well as groups” (3), those disciplines tend to use quantitative methods to reach their conclusions. That is, they do not typically subject qualitative data (the closest thing to literature) to quantitative methods, not unless that qualitative data has been coded in ways that mean specific things and only those specific things. The more important point, however, is that Jannidis does not address the ontological and epistemological differences between statistical methods and literature. That is the problem I personally can’t let go of—and the problem that some CLS enthusiasts seem to neatly put to the side.

Ultimately, this debate brings me back to the question I’ve had for quite a while and continue to have: what can CLS, and digital humanities more broadly, offer literary studies that literary studies can’t already do on its own? If Da is right and CLS studies often reveal the obvious, what is the point? To use a shiny new object, a shiny new method? Or, better yet, to use a shiny new method that can garner more funding while bolstering the validity of literary studies vis-à-vis the quantitative sciences? Is this a mad dash for funding as well as scientific and cultural relevancy? Or are we in the embryonic stage of methods that will one day more clearly aid the work of interpretation and analysis?

Works Cited

Da, Nan Z. “The Computational Case against Computational Literary Studies.” Critical Inquiry, vol. 45, no. 3, 2019, pp. 601-639. https://doi.org/10.1086/702594. Accessed 15 Oct. 2021.

Jannidis, Fotis. “On the Perceived Complexity of Literature: A Response to Nan Z. Da.” Journal of Cultural Analytics, vol. 5, no. 1, 2020. https://doi.org/10.22148/001c.11830. Accessed 15 Oct. 2021.
