This post is part of a symposium on the Methods of Political Economy.
The introductory essay to this symposium repeats several standard views about neoclassical economics that I think are some combination of dated, inaccurate, or irrelevant. I think what legal scholars (including those who do “economic analysis of the law”) perceive as economics is a tragic caricature of what currently happens inside economics departments, and the list of people to blame for that is long. But I’ve written about that elsewhere, and so I’ll take advantage of this space to defend one thing I think current economics shares with its neoliberal doppelganger. The most salient reason neoclassical economics became dominant in policy circles is that it was allied with a general revanchist business interest squeezed by the late-60s/early-70s crisis. But we should not forget that it also presented the policymakers dealing with that crisis something that more heterodox approaches did not: parsimony.
A recurring theme in the criticism of neoclassical economics is its constraining precision and abstraction. Even when freed from laissez-faire dogma, economic models maim reality, and the narrow style of quantitative empirical work in economics does a great deal of epistemic violence to details, compressing qualitative differences into quantitative measures. The result is a discipline that is insensitive to a lot of the cultural, psychological, and institutional details that sibling social sciences take seriously. The criticisms from heterodox economists and other more qualitative social sciences perpetually remind us of this. We use math that doesn’t actually capture reality; we quantitatively measure entities that are immeasurably different; and we insist on a crude logical positivism even as economics is deeply constitutive of institutional realities. But the virtue of all these vices is simplicity, well worth the cost of the notorious blindspots.
My claim here is not just the usual one that every discipline requires blindspots—that to model reality is to miss some of it—but also that a discipline can only be useful in guiding governance if it has such blindspots. Any social science that aims to inform (and perform) the functions of a complex social organization, like a state or corporation, that enforces even somewhat impartial rules needs to ruthlessly abstract from particularities. In particular, it must use mathematics: for making incommensurable claims commensurable, for representing the workings of fantastically complex adaptive systems, and for complementarity with technologies of organizational administration, like spreadsheets.
Relatedly but distinctly, a social science that is useful for the legal needs of a large administrative state operating in a complex heterogeneous society must also be parsimonious. This social science ought to be cognitively lightweight and context-independent, so that citizens and experts and bureaucrats and judges and lawyers can easily communicate new situations across a large population in a common idiom. Late 20th century neoclassical economics provided a primitive, ideology-laden language for doing this, for example by making market prices inviolable adjudicators of value inside the regulatory state (e.g. via cost-benefit analysis). But its successor will not be found in ponderous, wordy treatises penned by ethnographers and humanists; it will be instantiated in formal organizational protocols and algorithms that are the logic of some mathematical social science. It will remedy some of the blindspots of modern economics but will have its own. The question is what form that mathematical social science will take, what normative principles it will encode, and how it will be fitted into an administrative apparatus that is as transparent and legitimate as possible. I suspect an updated economics will be a part, but only a part, of whatever post-neoliberal administrative epistemology we wind up inhabiting.
I don’t pretend there are no disadvantages to an embrace of the blindspots required by making complex systems cognizable to administrators. James Scott’s oeuvre convincingly shows the hazards of standardization and abstraction of human interactions, whether backed by the full power of the state or the relentless demands of large markets governed by the profit motive. The straight lines of forest in Germany or NYC’s mini-Corbusier street grid give images of state regimentation visible from space. It is difficult not to be attracted to Scott’s work and his evident sympathy for those on the business end of modernization.
But Scott’s last book, on early civilization as basically a giant despotic labor camp, left me wary about the overall agenda. It is one thing to acknowledge and attempt to account for all of the human costs modernization and its governance technologies have imposed. It is another to reject modernization altogether. We can’t take Scott too seriously and still hope to have complex, technologically advanced societies. As Foucault observed, only somewhat critically, what Chicago school economics offered is a particular set of philosophical moves that allow governance by abstracting from complex motives and environments and reducing everything to incentives and rationality, much as disciplines like psychology and criminology did in earlier moments in history.
Nowhere are these philosophical moves more critical than in societies that want alternatives to the market. Markets impose their own abstractions, carving reality into pieces that can be grasped by contracts, tradable property rights, and entrepreneurs seeking opportunities. A democratic government capable of exerting effective power over a massive social provisioning system without degenerating into corruption, capture, or chaos, should have a comprehensible logic to its rules, even if that logic is flawed. One of the attractions of price theory to judges in the Manne program on law and economics was that it simplified a vast amount of precedent in domains of the law (like antitrust) that seemed to be endlessly nuanced. And the simple/simplistic framework provided by marginal revenue and marginal cost curves imposed enough coherence on the subject that many judges decided to Fuck Nuance.
The analogy I have in mind comes from machine learning (inspired by this essay by Buterin and Weyl), where one has to trade off fidelity to the observed data against the ability to predict outcomes in new data. Fit what you see too well, and you lose the ability to predict new stuff. And what machine learning does with more and more data is not just fit existing data better and better, but also learn what to forget, so that its representation of inputs is parsimonious, whittled down to only what is vital for the task at hand. A powerful representation of the world is one that is maximally sparse while still correctly handling new situations.
Consider different views on inflation. A monetarist view is that inflation is driven by increases in the money supply. The theory is simple, sparse, and completely ignorant of myriad institutional details about Federal Reserve practices, how the financial system gets money into the economy, and how businesses and consumers consider the prices at which they are willing to exchange. It offered one blunt tool for attacking inflation: a Volcker shock, throwing millions of people out of work and breaking the back of unions to shock the system back into price stability. Another theory holds that government panels of consumers, businesses, and workers could jointly tune a million knobs and set prices by fiat so that inflation doesn’t occur, as many countries attempted during the 1970s stagflation.
Both are wrong but in different ways. The first is too sparse, like fitting a straight line to a collection of points. The second is not sparse enough, like drawing a line through every single point. Practical theories, and institutions instantiating those theories, sit somewhere in between. Keynes himself—who laid the foundation for a thousand heterodoxies—was a maestro of balancing abstraction with institutional detail in the 20th century. On good days, I sometimes think the beginning of another body of practical theory is to be found in empirical microeconomics, which demands extremely detailed knowledge of particular institutions, but often in the service of extracting a theoretically portable and credible estimate (e.g. the “union premium in wages”) from data that can be compared to estimates from other institutional settings.
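The too-sparse/not-sparse-enough contrast above can be made concrete with a toy polynomial-regression sketch (my illustration, not from the essay): on noisy observations of a smooth relationship, a degree-1 fit is too sparse to capture the curvature, a fit whose degree matches the number of points threads through every observation (noise included), and a moderate degree sits in between and generalizes best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a smooth underlying relationship.
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

def fit_and_eval(degree):
    """Fit a polynomial of the given degree to (x, y);
    return (train MSE on the noisy points, test MSE against the true curve)."""
    coeffs = np.polyfit(x, y, degree)
    x_test = np.linspace(0, 1, 100)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)) ** 2)
    return train_err, test_err

for d in (1, 3, 9):
    tr, te = fit_and_eval(d)
    print(f"degree {d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

Degree 1 is the monetarist straight line; degree 9 interpolates all ten points exactly (zero training error, the million-knobs theory); degree 3 is the practical middle ground the paragraph above gestures at.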
Perhaps thankfully, economics is done being a master metaphor for governance for the foreseeable future. Perhaps one post-neoliberal philosophical move will come from computer science, which will operationalize its own blindspots into the rational agents it is constructing. But the emergence of computational social science, algorithmic mechanism design, and artificial intelligence has further institutionalized economics into some of the algorithms and protocols of the digital economy, with scholars actively working on how to incorporate principles of justice and fairness into these as well. Everything from auctions to dating apps to privacy protocols is modeled under economic assumptions, and even still-new methods of reinforcement learning eerily echo the representative agent from freshwater macroeconomics. So mainstream economics is only going to increase in reach and influence, but its audience may not be government policymakers so much as potentially even more powerful engineers. Economics may prove to be a discipline best suited for designing environments for robots to manage our economic affairs on our behalf. “Optimization plus equilibrium” economics could still be a grammar of governance, embodied in minds colder and faster and, if we’re lucky, more virtuous than ours. Perhaps these minds, equipped with a better economics, can take over the problem of allocating scarce resources whilst preserving democratic distributions of power, Iain Banks style, leaving us to pursue more exciting things.
Suresh Naidu (@snaidunl) is Associate Professor of Economics and Public Affairs at Columbia University.