There’s no intrinsic conflict between language and the numerical. But we do have a profoundly embedded cultural misinterpretation of their relation, one that structurally disadvantages the study of literature and language. This essay outlines the problem and suggests some conceptual and institutional steps toward creating epistemic parity between the technical and nontechnical disciplines.
The year 2019 brought the sixtieth anniversary of C. P. Snow’s famous lecture The Two Cultures. Speaking in 1959, Snow classed scientists as people with the future in their bones, and “literary intellectuals” as “natural Luddites,” people who are too focused on the price of progress to make any progress themselves (22). This idea of the division of human knowledge, and of literary intellectuals as critics rather than creatives, carries on today. Some of us have tried to get rid of the label—by means of the critique of critique, for example—but so far such actions have reinforced our perceived resistance to progress.
I.
In higher education, you’d think we would have solved this two-culture division long ago. Liberal arts education is all about general learning and creating capabilities in multiple domains. The liberal arts might define literacy and numeracy as automatic partners. Every humanities major would, in a proper world, graduate with the ability to use basic numerical techniques. French majors would know the basics of Stata or R for statistics, or of Python or JavaScript for Web development. Similarly, every science major would be able to interpret complex language and have competency in at least one second language. Why not? The world isn’t divided into two cultures such that we must live in only one of them. In many European countries, a version of these dual competencies is assumed to be the outcome of a high school diploma. In the United States, colleges and universities are said to offer an education in the liberal arts and sciences. The fields clustered as the digital humanities show not just the difficulties but also the intellectual reach of quantitative and qualitative methods combined.
There’s another Sputnik-era aspiration to remember, though honored in the breach: liberal arts education was to be available at every type of college or university across the United States. “Today, we are concerned with general education—a liberal education—not for the relatively few but for a multitude,” wrote the Harvard president James B. Conant (qtd. in Meranze 1313). Full arts, sciences, and applied curricula would not be limited to flagships. If you lived in central Wisconsin and couldn’t afford to move to Madison—or didn’t want to—you could get an equivalent quality of liberal education at the University of Wisconsin, Stevens Point. You could study biology before applying to medical school like your Madison counterparts. And you could at the same time become proficient in German or Spanish or Arabic, depending on your interests. You could flip it around and study Arabic to join the foreign service while learning biology to better understand how the world fits together. Having both linguistic mobility and scientific competency was to be the hallmark of the educated person in the modern world. And that is to say nothing of the many practical but nonmonetary benefits of having, say, physicians who speak a second language and negotiators who understand ecology. There was a powerful egalitarian assumption in the university system of the United States: education quality would be spread widely in the student population, to reflect the wide distribution of human intelligence and society’s complex needs.
This egalitarian assumption was rarely supported by action. That began to change through social and political pressures that came from outside universities and inspired students and some faculty members within them. I’m referring to the black civil rights movement, the feminist movement, the antiwar movement, for starters. Many practitioners of the liberal arts and sciences learned that either they connect to and study sociocultural and political movements or they fail to prepare students to live in the world as it actually is. Academic isolation generates flawed liberal arts research. Universities that aspired to integrate social groups also had to integrate heterogeneous knowledges in all their multiplicity.
So how are we doing with this?
II.
Not too well right now. Let’s start with a few crude metrics. College completion rates in the United States have fallen from first in the world in the 1960s to sixteenth or so—middle of the pack—today. Racial disparities remain very large (Shapiro et al. 16). Under the current, half-privatized business model (permanently suboptimal public funding, tuition high enough to create debt burdens), inequality of completion rates across racial groups will get worse rather than better in the next ten years.1 And completion refers just to whether you get a degree or not. The content of that degree, its quality, its ties to your society and your inner life, are not captured by these statistics.
On the whole, higher education in the United States has an educational mediocrity problem. The response of university leaders has been weak: they have mostly adopted efficiency measures that conceal the learning issues and then set about meeting those measures. Universities are scrambling to improve graduation rates and time to degree. For example, the share of college students who don’t complete a degree within six years remains 40% (Shapiro et al. 11). This is indeed a serious problem. But degree completion needs to be fixed without wrecking the education that goes into the degree. This is hard when the goal is narrowed to the metric of degree completion in tandem with time to degree. The easiest way to increase degree output is to reduce the requirements for learning. Universities are under continuous pressure to make degrees as fast and as cheap as possible. Administrations in the mass public higher education sector have not produced a counterdiscourse that explains the need for high-quality learning for large numbers of regular people. This has allowed the dominant media question to be, Why don’t they just get on with it, trim the custom features of higher learning, and crank out more degrees?
Fast, cheap, and more are not bad things, in themselves; it depends on how these attributes affect people and what they lose in getting them. To repeat, the fastest and cheapest way to get more degrees is to make the degree easier to get. Language study is an excellent example of the problem. Monolingual adults mostly find language learning to be difficult and embarrassing. Consequently, many English departments are trying to rebuild major numbers by diluting their foreign language requirements. The same is true of literary history: truncating requirements is thought to increase enrollments, especially when it is the earlier material that is truncated.
I’ve been thinking about the effects of metrics on language instruction since I lived through the supposedly unavoidable weakening of foreign language standards in the late 2000s in a study abroad program, the Education Abroad Program (EAP), which serves all the campuses of the University of California (UC) system. I started directing programs in France in the fall of 2008, right as the global financial system was collapsing. The Bordeaux center had been UC’s first EAP location in the early 1960s. In the program’s early decades, applicants had to pass an oral French exam and then had to succeed in a year of university courses taught in French. This was an immersion program in every sense. EAP was also a research exchange: as it happened, California-based professors brought the American concept of black studies to Bordeaux and helped generate a suite of race-focused international programs that continue to this day. Students worked with faculty members who were actively engaged in research. UC professors lived in Bordeaux for up to two years and advised students individually. Though the result was never quantified, the program came closer than most to developing creative capabilities in its graduates. The program created graduates who had at least limited working proficiency in French (classified as proficiency level 2 by the Interagency Language Roundtable) and many who had professional working proficiency (level 3).
This began to change in the 2000s. Applicants no longer had to pass an oral language exam. Time of immersion for most students was shortened to half a year. Then, for revenue purposes, the program began to solicit students with no background at all in the language. Managers created English-language curricula in the UC centers; students took an intensive local-language course much like one they might have taken at home, with much reduced contact with their local university peers. The EAP program’s central office was overseen by UC’s office of the president (UCOP) and by officials who had no direct knowledge of language instruction or of this worldwide program’s intellectual activities. UCOP told the program it would have tuition funds but lose at least 85% of its existing state funds, and redirected other forms of support. The budget targets locked in the dilution of programs—not of risk management strategies, student insurance coverage, legal or budgetary staff, but of educational activities. Faculty directors were removed from most locations, and course expenditures were cut. In the 2010s, under budget austerity, and in the hope of widening recruitment, the EAP office created multisite travel programs that involved limited coursework and little if any language instruction. Although headcount enrollments increased, the typical student now stays for a shorter time, so the number of students counted as full-time equivalents has been stagnant for a decade. It’s quite possible that administrators underestimated study abroad students: they may be more likely to pay full tuition for the academic distinction of language proficiency but not for another “student experience.” No one seems to keep the data on what this has done to language attainment. The study abroad program has met its modest output metrics. And yet, it’s possible that a majority of the program’s graduates—many working-class students and students of color—get no new language proficiency from the program.
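To see how headcount growth can coexist with stagnant full-time-equivalent numbers, here is a minimal arithmetic sketch in Python. The cohort sizes and lengths of stay are invented for illustration, not drawn from EAP data:

```python
# Hypothetical sketch of headcount versus full-time-equivalent (FTE)
# enrollment. FTE weights each student by the fraction of the academic
# year they spend in the program, so shorter stays can hold FTE flat
# even as headcount grows. All numbers are invented for illustration.

def fte(fractions):
    """Total FTE: sum of each student's enrolled fraction of the year."""
    return sum(fractions)

# Earlier model: 200 students, each abroad for a full academic year.
earlier_cohort = [1.0] * 200
# Later model: 550 students, most on one-quarter or half-year stays.
later_cohort = [0.25] * 400 + [0.5] * 100 + [1.0] * 50

print(len(earlier_cohort), fte(earlier_cohort))  # 200 students, 200.0 FTE
print(len(later_cohort), fte(later_cohort))      # 550 students, 200.0 FTE
```

On these invented numbers, headcount nearly triples while FTE does not move at all, which is the pattern the paragraph describes.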
III.
But why do we need the concept of numerical culture to explain a fairly commonplace curtailing of ambition and quality in a widely admired humanities program? Isn’t it just about technology, money, and power? And about neoliberal austerity making a crisis out of competing costs for health care, incarceration, and tax cuts? Yes. But these answers leave open the question of why this complex presides over educational processes to the extent that it does. Numerical culture provides axioms, affects, and formulations that weaken claims made about the value of studying literature and language. It is a belief system or structure of feeling that allows neoliberalism’s general marketization of relationship-based processes like learning to move readily through academia, even when most academics dislike marketization. I’m reluctant to call it “neoliberalism’s operating system,” but in a sense it is that. Numerical culture has some core features that I’ll reify here for the sake of brevity. I’ll also suggest that humanities scholars can alter if not end numerical culture. But this will require levels of intensive, collective engagement from tenure-track faculty members that we haven’t seen before.
Reductively, the plot of numerical culture runs as follows. Numbers are better than words. In a head-on conflict with a narrative account, they are more authoritative by default: they enjoy epistemic superiority to narrative. This superiority anchors audit culture, in which managers can decide on the value of people’s jobs because they have a detached and objective knowledge about it that those people lack. One’s nonnumerical retort does not have the epistemic standing to check the usually greater organizational power of the auditor. Expertise or activities that systemically lack numerical grounds will also be in the back of the line for resources, which reinforces their lower status. Nonquantifiable benefits of nonnumerical knowledge creation will stay on the margins, which favors a focus on quantifiable results, particularly monetary returns.
I’ll offer a few details on four elements here. First, numbers are superior to narrative because they claim an objectivism that narratives do not. This claim goes beyond the obvious, multifarious benefits of quantification to equate validity with being free of value. Quantification is good at stripping out context, variation, specificity, individuality, and other qualities people associate with values, standpoints, beliefs, and other effects of not being value-free: one can make long lists of the qualities that objectivism suppresses (Gorur; Davis et al.). The mistake that numerical culture encourages is to equate the two—to equate the purging of contexts with the purging of values that creates objectivity. Objectivism ties the process of knowledge creation to the elimination of what language and qualitative research do, which expresses “interpretation and critical evaluation, primarily in terms of the individual response and with an ineliminable element of subjectivity” (Small). Objectivism insists on truth as invariant, which makes little sense with social and cultural objects and processes. In practice, objectivism puts an impossible burden of proof on nonquantifiable effects. In my example of language instruction, program budgets define a common or objective reality, while the ability to use a foreign language is an intangible benefit whose value will change according to aims, practices, situation, and relation to other types of knowledge.
Second, numerical culture is an audit culture. The core assumption is that work is made more efficient through external controls focused on outputs defined in quantitative terms. Audit gradually came to be seen as a necessary (and often a sufficient) condition for transparency and the suppression of self-dealing and fraud. It fits nicely with the internalized self-discipline Wendy Brown and others see as a defining feature of neoliberalism: audit sets the targets and then lets you decide how you’re going to meet them. Audit is managerial in that the targets are set by supervisors and not by you. This is the alleged basis of audit’s objectivity, which grounds its power to override your subjective input (Power; Strathern; Shore and Wright). In the study abroad program I’m discussing, budget cuts were justified by a consultancy report that identified quantitative performance targets without reference to qualitative educational effects. Managers calculated the revenue drop and translated this into center closures and, at least in my remaining centers, a doubling of French-language class size. Predictably, I fought against this on the basis of my judgment that it would reduce educational quality. But audit always renders professional judgment secondary to quantitative targets. My advocacy for smaller classes was easily disposed of as special pleading for my programs. Part of the budget plan was to eliminate most faculty directors, so most centers would soon not have an expert to argue in the first place.
Third, numerical culture is structurally indifferent to resource inequality and to concentrations of influence. It is predisposed to influence monopolies. This follows from its first feature, which is its belief in its objective validity in relation to situational claims about values and effects, and the second, which is the efficiency of quantified audit. In research, bibliometric analysis of outputs—citation frequency and location, for example—developed around a scholarly confidence in the Pareto distribution, which assumes that 80% of output will generally be produced by 20% of the researchers (if not fewer). Contemporary bibliometrics always finds that intellectual output and impact are concentrated rather than widely distributed (De Bellis). While citation analysis was often motivated by an interest in tracing intellectual genealogies within complicated intellectual networks, quantification made articulating substantive lineages seem inefficient and unnecessary. Bibliometric practices excel at quantifying and connecting occurrences of the same, while discovery, inventions, and dissent famously break from the same and standard affiliations and are often nourished by subcultures that the mainstream does not notice, approve, or cite. Instead of insisting on qualitative comparisons of highly diverse intellectual outcomes, numerical culture encourages standardization for the sake of ranking or “ordinalization” across difference (Fourcade). It naturalizes (and aims at) bibliometric inequality and sanctions an acceptance of unequal resources that reinforces inequalities between publication in English and in all other languages and between the topics and interests of the global north and those of the global south. Bibliometric practice structurally disadvantages the humanities in relation to STEM fields: book-based fields are undercounted, and arts and humanities disciplines, with next to no extramural funding to assemble research teams, have much lower per-author outputs. On the one hand, a professor of the Italian baroque can have an influential international career publishing three books and some limited number of articles, all of which may be, in terms of publishing metrics, nearly invisible. On the other hand, an experimental physicist I recently visited noted that he is listed as an author on 2,500 articles. Field-to-field comparisons of publication rates (invalidated by scientometrics practitioners but done anyway) increase administrative and social bias against non-STEM fields. A vicious cycle translates lower publication output into lower public visibility. Global academia is experiencing a neocolonial distribution of research resources and influence. This will not change without systematic studies of the skewing effects of bibliometrics on minority knowledge production and resource inequalities, which numerical culture discourages.
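To make the Pareto point concrete, here is a small, purely illustrative simulation, not an analysis of any real citation dataset. The shape parameter 1.16 is the textbook value at which the top fifth holds roughly four fifths of the total:

```python
# Illustrative simulation of how Pareto-distributed citation counts
# concentrate "impact" in a small share of authors. Hypothetical data only.
import numpy as np

rng = np.random.default_rng(seed=0)
# Shape ~1.16 yields the classic 80/20 concentration; numpy's pareto()
# draws Lomax samples, and +1 shifts them onto a Pareto starting at 1.
citations = rng.pareto(1.16, size=10_000) + 1

citations.sort()
top_fifth = citations[-2_000:]  # the 20% of authors with the most citations
print(f"Top 20% of authors hold {top_fifth.sum() / citations.sum():.0%} "
      f"of all citations")  # roughly 80%, varying with the seed
```

The point of the sketch is not the exact percentage but the structure: any ranking built on such a distribution will “find” concentration, because concentration is what the distribution supplies.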
Fourth, numerical culture’s elevation of quantification biases it toward the monetary effects of education. In this framework, the problem with teaching language proficiency is that there’s no money in it—except for the money universities spend helping make it happen. This is how it should be: laboratory research also loses money. Language proficiency and laboratory discoveries often have great value, but that value mostly takes nonmonetary form (or what economists call nonpecuniary value). Complicating things further, most of the value of research is indirect and external, meaning that it appears later, dispersed in society, and its returns often accrue to other people. This is particularly true in non-STEM fields, since these have no sight line to direct market returns down the road (McMahon). Put these factors together, and you have often nonmonetary and often nonindividual effects that simply can’t be quantified as a return on an individual investment. So numerical culture discourages interest in such learning: it’s not opposed to people learning other languages, but it can’t project a value for it.
Hence we have endless studies of the value of a college degree in terms of salary increments over high school diplomas, or of earnings by major. We know a great deal about the private-market benefits of higher education. And yet we have “no comprehensive system . . . for identifying public goods in higher education,” in part because the thing numerical culture insists that we do, quantify outputs, doesn’t capture the value of goods that are nonmonetary, collective, diffuse, or created through a series of steps (Marginson 105). The result is lower general interest in using public funds to pay for goods like foreign language proficiency: numerical culture has largely destroyed the public understanding of both nonprivate and nonmonetary effects that benefit the wider society in diffuse and uncalculable ways. Absent an understanding of public goods, institutions don’t know how to defend the funding of fields like language study, which are rich in personal and social nonmonetary effects but poor in direct cash value. The logical consequences continue to unfold: the conversion of literature and language departments into service units; further reductions in tenure-track faculty lines, which is deepening our employment crisis; and reduced university funding for all research that lacks quantifiable market returns, like all research in literature and language.
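The genre of study this paragraph describes is easy to sketch. In the hypothetical calculation below, every figure is invented, and the point is what the model structurally omits: nonmonetary and collective goods have no variable at all, so their implied value is zero:

```python
# Hypothetical sketch of a private-return ("wage premium") calculation of
# the kind that dominates public debate. All figures are invented.

def present_value(annual_amount, years, discount_rate=0.03):
    """Discounted sum of a constant annual amount over a working life."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(years))

wage_premium = 25_000      # assumed BA-over-diploma salary increment per year
working_years = 40
degree_cost = 150_000      # assumed tuition plus forgone earnings

private_return = present_value(wage_premium, working_years) - degree_cost
print(f"Private monetary return: ${private_return:,.0f}")

# Absent from the model: language proficiency, civic capability, health
# effects, and every diffuse public good. Each has an implicit value of zero.
```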
These four features obviously have many sources. But they are integrated and synthesized by this descendant of Snow’s two cultures, numerical culture. These features systematically disadvantage efforts to bring the numerical back into epistemic respect for the kinds of knowledge produced in the study of languages, literatures, and culture.
IV.
The last broad point I want to make is that the situation must and can be fixed. But we can’t continue to fight one local battle after another—saving French-language class size here, departmental status there. At the University of Wisconsin, Stevens Point, administrators who had proposed cutting nearly all humanities departments spared English and Spanish in the final round, but not French (Flaherty). Ongoing fiscal and demographic pressures will make piecemeal defenses less likely to succeed.
I propose instead a radical and persistent reframing of numerical culture. This does not mean a purging of the numerical; quantitative data are as essential as language, and they are as potentially progressive and transformative as language. I’ve personally spent quite a bit of time trying to get numbers into debates about university policy. The problem is numerical culture, not numbers or their analytical use.
I’ll summarize four counterprinciples to numerical culture and then go back to the example of language instruction to explain how they might work (see table 1).
Table 1. Elements of Numerical Culture
Feature | Counterprinciple
--- | ---
1. Epistemic superiority (objectivism) | Subjective empiricism
2. Managerial audit | Collaborative self-governance
3. Influence monopolies | Epistemic parity
4. Concealed nonmonetary effects | Specified nonmonetary effects
First, qualitative fields include endless amounts of irreducible, unquantified detail. They are both theoretical and empirical. But their empiricism is often small-scale, case-based, and focused on individual interiority. The methods of qualitative fields cannot be purged of the subjective—nor should they be, since the presence of detailed subjectivity is much of their strength. I think of this feature as subjective empiricism, and my point here is that it should be explained to other academics, professionals, and the public not just with clarity but with complete confidence and pleasure. There is much to be said—again and again—about the epistemological links between supposedly objective and subjective fields. Some will be critique, and some pursuit of claims about mutually constitutive relationships. Alain Desrosières’s work on the history of statistical reasoning is a leading example of the literature that studies the inevitable presence (and benefits) of subjectivity in quantified forms of knowledge. He shows that the calculation “that introduces rigor and natural science methods into human sciences” itself rests on the “reuniting of two distinct realms,” one a formal mathematical technique like probability calculus, the other a social or administrative practice like classifying and coding (10). The process of objectification is real, and yet it always occurs by denying while also retaining the results of a “joint conceptualization” between the numerical and society (52). Subjective empiricism includes those nonnumerical data in quantitative analysis. It also means tracking the continuous presence of empirical complexity in the flow of consciousness itself.2 Being open about subjective empiricism does not by any means ensure my own longer-term goal, which is epistemic justice among disciplinary cultures. But it establishes a starting point—a kind of epistemic equity among the wrongly polarized modes.
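Desrosières’s point about coding can be made concrete with a toy example. In this hypothetical sketch, the same ten survey answers produce two different “objective” proficiency rates depending on a prior, qualitative classification decision; either way, the number inherits the judgment:

```python
# Hypothetical illustration of Desrosieres's "joint conceptualization":
# identical raw answers, coded under two defensible classification
# schemes, yield two different "objective" statistics.

responses = ["fluent", "conversational", "reads only", "beginner",
             "conversational", "fluent", "reads only", "beginner",
             "beginner", "conversational"]

# Scheme A: any ability beyond beginner counts as proficient.
scheme_a = {"fluent": True, "conversational": True,
            "reads only": True, "beginner": False}
# Scheme B: only active spoken command counts as proficient.
scheme_b = {"fluent": True, "conversational": True,
            "reads only": False, "beginner": False}

for name, scheme in [("A", scheme_a), ("B", scheme_b)]:
    proficient = sum(scheme[r] for r in responses)
    print(f"Scheme {name}: {proficient / len(responses):.0%} proficient")
    # Scheme A: 70%; Scheme B: 50%
```

Neither coding is wrong; the divergence is the residue of the qualitative decision that every quantification carries inside it.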
Second, surpassing numerical culture requires assimilating audit into professional life. The accounting theorist Michael Power noted twenty years ago that using the numerical for arm’s-length management damages performance and managerial judgment together. Audit also makes democratic group relations seem unnecessary, pointless, or impossible. In his recent survey of metrics applications, Jerry Muller found only one class of metrics that actually achieved the gains it sought to achieve. Metrics for transparency did not, and accountability metrics did not, both being administered externally. Metrics for improvement did improve outcomes, when used as information about the effects of various options and analyzed within a group.3 I would posit, as the basis of further research, that metrics causally improve performance only when they are routed through professional expertise by people directly involved in delivering the service—people who are guided by intrinsic motivation (for example, to improve patient benefits rather than receive a salary bonus) and who are possessed of enough autonomy to act on audit information according to their professional judgment. Audit can be useful, but it can and should be subordinated to—incorporated into—professional practice, which in turn should be self-managed in workplace groups that remain in continuous interaction with skeptical others.
Third, we live in a paradoxical time, when technology is supposed to rescue us from ourselves and the numerical is the foundation of technology, yet the interpretation of our lives in common has never been more rampant. The widespread public love of culture as such—novels, stories, movies, pop music, long-form television, quality journalism, terrible journalism, Reddit narration, YouTube meme roundups by Internet divas, infinite public explainings of various crises, endless online posts and articles of eight hundred to two thousand words on everything—is consumption turned into new forms of mass culture production. Where is the academic study of culture in all this? We need not be on the sidelines of efforts to make connections between the incomprehensible tech complexity known to mandarins and the unstoppable generation of meaning by the seven billion. The current default is that technological truth rescues politics and cultures from their backwardness (Davies). It would help for literary and cultural scholars to build the systematic case for epistemic equality between qualitative and quantitative analysis. These are familiar issues in the human sciences—pioneered by black studies avant la lettre (descending from slave narratives, music, and black Christian practices, among others), feminist standpoint theory, race studies, queer theory, postcolonial studies, and poststructuralist critique of various kinds. This work that diverse critical theories have been doing for decades now has a wider public role to play in the relegitimation of historical and literary reason. The powers of this reason need to be made more explicit. At times it will feel like the 1990s science wars all over again—we will need renewed critiques of tech objectivism among other things. But the cases for epistemic parity need to be made again, and better. The outcome I hope for is the coexistence of divergent epistemologies and their various conclusions, brought into relations of nonidentity. I don’t see how we can understand all the different things we need to grasp, in the different ways in which they are exclusively graspable, without the reciprocal respect for knowledges that depends on parity.
Finally, we will need to defeat the nearly exclusive focus on the private, pecuniary benefits of intellectual work. We can and must critique that focus, but we must also specify—reconstruct—the nonpecuniary benefits of literary and cultural study. There are many benefits—involving health, democracy, happiness, peace, reduced racism, literacy in climate statistics, children’s education, neighborhood problem solving—all of which are affected by college education. There are others in which literary and language study are deeply involved, including those that reflect the specific achievements of the last fifty years of literary criticism, ranging from reading methodologies for complex and contradictory texts, to knowledge of nondominant identities.
An example to which I often return is a process of embodied and situated cognition that I got from my three years of advising several hundred study abroad students in the program I mentioned above. The process identifies an intellectual interest and turns it into a research project that one pursues, completes, and then fights for in one’s divergent communities. For my own purposes, I’ve codified the main steps in the list below. Students need to
1. have knowledge of what a research question is;
2. have basic subject knowledge in a chosen topic area, e.g., its major research questions;
3. develop a capacity for being interested in questions where the answer is nonobvious;
4. have the ability to inquire into one’s own core interests;
5. develop the project topic research question (with self-reflexivity and metacognition);
6. identify a thesis or hypothesis about the topic (one that is interesting and nonobvious);
7. plan the investigation (identify steps and continually revise methods);
8. organize research (including recording and sorting of conflicting information);
9. interpret research results (including results that are contradictory, disorganized, unsanctioned, or anomalous);
10. develop one’s analysis into a coherent narrative (gaps included);
11. publicly or socially present findings and respond to criticism;
12. have the ability to reformulate conclusions and narrative in response to new information and contexts; and
13. have the ability to fight opposition, to develop within institutions, and to negotiate with society.
I have a pair of views about this list. First, once people see their point, these capabilities are widely desired. Second, having spent decades teaching and having reviewed the limited-learning literature that tries to estimate this, I have concluded that colleges and universities in the United States impart these capabilities only to the happy few. The key reason is inadequate funding, which fails to materialize in part because not enough people know that universities are supposed to help create these powers, so there is no public pressure to produce them. (This weak public grasp of advanced learning also underwrites the shocking overuse of contingent teaching labor.) People know about gaining specific units of practical knowledge, and earning the degree that will help with a future salary—the pecuniary gain—because universities and legislatures talk about these all the time. They know little about the individual nonpecuniary benefits of these complex cognitive capabilities, and even less about the collective impacts of these capabilities. Humanities disciplines furnish benefits that are largely nonpecuniary, in contrast to computer science, accounting, and statistics, among other fields.4 Explaining nonmonetary effects is in literary study’s direct self-interest. But in a broader sense, explaining these effects will help build a constituency for the full range of the university’s nonmonetary benefits to society, which needs all of them.
Had my university and I been less trapped in numerical culture as I’ve sketched it, we would have had an easier time maintaining and enhancing language study in the programs in France. I’ll quickly invoke the four counterprinciples.
First, transcending fiscal objectivism, I had dozens of detailed descriptions of student development: to name just two, there was the architecture student who figured out the historical reasons for Lyon’s variety of city grids and learned interview techniques in a new language, and the political science student who designed a multilingual research project comparing postcommunist economic changes in Russia and China.5 In a postnumerical framework, such benefits would have been as important to the program’s management as were the budget figures, and would have continually registered with them.
Second, in contrast to reshaping by audit, better decisions would have been made had local staff members, including people in our host institutions, been central to the deliberations that happened at a great distance at the Office of the President. The home office had the power to decide, but not the knowledge to decide well. Our local staff members and I had little problem developing an alternative budget that would have kept open a center that the program management had decided to close, while keeping up teaching quality. We worked through the numbers ourselves, and we reinterpreted them in the light of our situated knowledge of what we knew we couldn’t do without, given our specific educational goals. We used our detailed knowledge of the actual programs to find better savings than their cuts did and to build interest in the programs so as to avoid cuts. Reconnecting managerial and professional expertise, and linking quantitative and qualitative knowledge, would have given our plan a real chance to succeed and, ironically, to attract more STEM majors to the strongest science center in France, which the program did indeed close.
Third, on the question of epistemic parity, had senior managers in the UC system held language study in as high regard as they did STEM and economics, they wouldn’t have been continuously squeezing the program in the first place. They would have been bragging about it as cutting-edge global education and trying to build it up. They would have tried to support the institutional sources of intangible benefits, even if some of them, like one-on-one faculty advising, were expensive. Epistemic parity would have at least made the administrative decision process multivocal and kept basic language instruction principles (small class size) in the discussion.
Finally, had nonpecuniary goals been front and center, the program’s narrow cost analysis wouldn’t have decided the fate of the program. It would have been more obviously a good investment to support the language learning in other countries that accelerates more or less every aspect of personal intellectual development. Study abroad is the dictionary definition of why anyone should go to college. This was particularly true, in my experience, of my working-class and first-generation students. They were the most likely to feel changed forever—to have new autonomy, new confidence, a new sense of equality with more privileged students, a new conviction that they could survive on their own and set their own direction. I think these are effects of literary and historical study in general—under the right conditions. They help people see where they fit into a world that this study helps them understand concretely, in its variety and detail, and feel some agency in it.
Obviously, a postnumerical culture will be decades in the making, and it offers no guarantees. But the humanities now lose at games of allocation that numerical supremacism has rigged. With premises that acknowledge not only the value of data but also the value of complementary interpretive systems, outcomes will become less skewed.
While he was insulting them, Snow also paid literary intellectuals the compliment of crediting them with influence over society’s debates about its future. He was right about this. But he forgot to add: though cultural and historical study should (and does) embrace the numerical, the dominance of numerical culture must be abolished.
Notes
1. On the built-in problems with privatization, see Newfield, Great Mistake. For a comprehensive modeling of future graduation rates under conditions of a projected decline in student enrollments, see Grawe.
2. Elsewhere I use the opening of Virginia Woolf’s To the Lighthouse as an example of subjective empiricism (“Student Debt”).
3. In this paragraph I draw on my review of Muller’s work (Review).
4. See the debatable but suggestive calculations of McMahon.
5. For more detail, see Newfield, “Humanities Creativity.”
Works Cited
Brown, Wendy. Undoing the Demos: Neoliberalism’s Stealth Revolution. Zone Books, 2015.
Davies, William. Nervous States: Democracy and the Decline of Reason. W. W. Norton, 2019.
Davis, Kevin, et al., editors. Governance by Indicators: Global Power through Classification and Rankings. Oxford UP, 2012.
De Bellis, Nicola. Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics. Scarecrow Press, 2009.
Desrosières, Alain. The Politics of Large Numbers: A History of Statistical Reasoning. Translated by Camille Naish, Harvard UP, 2002.
Flaherty, Colleen. “Cuts Reversed at Stevens Point.” Inside Higher Ed, 11 Apr. 2019, www.insidehighered.com/news/2019/04/11/stevens-point-abandons-controversial-plan-cut-liberal-arts-majors-including-history.
Fourcade, Marion. “Ordinalization.” Sociological Theory, vol. 34, no. 3, 2016, pp. 175–95.
Gorur, Radhika. “Seeing Like PISA: A Cautionary Tale about the Performativity of International Assessments.” European Educational Research Journal, vol. 15, no. 5, 2016, pp. 598–616.
Grawe, Nathan D. Demographics and the Demand for Higher Education. Johns Hopkins UP, 2018.
Marginson, Simon. Higher Education and the Common Good. Melbourne UP, 2016.
McMahon, Walter W. Higher Learning, Greater Good: The Private and Social Benefits of Higher Education. Johns Hopkins UP, 2009.
Meranze, Michael. “Humanities Out of Joint.” American Historical Review, vol. 120, no. 4, 2015, pp. 1311–26.
Newfield, Christopher. The Great Mistake: How We Wrecked Public Universities and How We Can Fix Them. Johns Hopkins UP, 2016.
———. “Humanities Creativity in the Age of Online.” Occasion, vol. 6, Oct. 2013, arcade.stanford.edu/sites/default/files/article_pdfs/OCCASION_v6_Newfield_100113.pdf.
———. Review of The Tyranny of Metrics, by Jerry Z. Muller. The British Journal of Sociology, vol. 70, no. 3, June 2019, pp. 1091–94.
———. “Student Debt and the Social Functions of Consolidation College.” The Debt Age, edited by Jeffrey R. Di Leo et al., Routledge, 2018, pp. 197–213.
Power, Michael. The Audit Society: Rituals of Verification. Oxford UP, 1999.
Shapiro, Doug, et al. “Completing College: A National View of Student Completion Rates—Fall 2012 Cohort.” National Student Clearinghouse Research Center, Dec. 2018, nscresearchcenter.org/wp-content/uploads/SignatureReport16.pdf.
Shore, Cris, and Susan Wright. “Audit Culture Revisited: Rankings, Ratings, and the Reassembling of Society.” Current Anthropology, vol. 56, no. 3, June 2015, pp. 421–44.
Small, Helen. “The Public Value of the Humanities.” Typescript.
Snow, C. P. The Two Cultures and a Second Look: An Expanded Version of the Two Cultures and the Scientific Revolution. Kindle ed., edited by Stefan Collini, Cambridge UP, 1964.
Strathern, Marilyn. Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. Routledge, 2000.
Christopher Newfield is distinguished professor of English at the University of California, Santa Barbara, and the author of a higher education trilogy: Ivy and Industry (Duke UP, 2003), Unmaking the Public University (Harvard UP, 2008), and The Great Mistake (Johns Hopkins UP, 2016).