
Throwing Shade on the Enlightenment


Slate writer Jamelle Bouie caused quite a kerfuffle by discussing the racist elements within the Enlightenment, first with a series of tweet threads and then with a more detailed article including, as they say, receipts. Bouie contends that the Enlightenment played an outsized role in creating our modern concept of race, and that if we want to take the Enlightenment seriously we have to grapple with the ways it is intimately bound with racism.

That’s it. Bouie was inundated with gasps of incredulity, however, drawing responses from both Ben Domenech and Robert Tracinski at the Federalist, and Katie Kelaidis at Quillette. Despite Bouie’s clarity, both his Twitter and long-form critics persist in misrepresenting his arguments and assaulting strawmen. Below I stylize some of the common misrepresentations or confusions and offer my own clarifications. The italicized statements are the things Bouie did not argue!

The Enlightenment invented xenophobia, tribalism, and every other form of bigotry.

Modern racism is a very particular beast, and not simply interchangeable with fear of foreigners or even colorism. Race in the European and American context (the relevant areas covered by the “Western” Enlightenment) is a theory of biological inferiority (though there are environmental and cultural variants) of nonwhite people, especially black people. This racial hierarchy was initially used to justify slavery and colonialism, but has evolved to maintain white supremacy long after chattel slavery was ended.

The fact that John Locke, Immanuel Kant, and other heroes of the Enlightenment were racist or supported slavery means we should stop talking about their ideas and eject them from the curricula.

This is just a non sequitur. Realizing someone had some bad ideas doesn’t mean they have nothing to contribute, and Bouie never said otherwise. Besides, the odious views of historical figures don’t even necessarily mean we have to condemn them. There’s a difference between moral fault and culpability.

The Enlightenment was racist so we should ditch the Enlightenment altogether.

Again, criticizing something is not the same as condemning. At least some of the scholars Bouie cites (Charles Mills is the only one I’m very familiar with) are trying to work through the baggage of the actually implemented beliefs and practices of Enlightenment liberalism in order to more fully realize its great promise. It’s a deconstructive, rather than destructive endeavor. The best way to separate the Enlightenment (or liberal) wheat from the liberal chaff is to not give the ideology the benefit of the doubt. If early scientific racialization counts as a mark against Enlightenment principles then that’s something that a sturdier liberalism just has to overcome.

There has been no racial progress since slavery.

In the last 250 years we’ve seen incredible progress on numerous fronts. And I think this progress has indeed come from certain liberal ideas. But that doesn’t mean everything is hunky-dory. Ibram X. Kendi suggests in his invaluable Stamped from the Beginning that we shouldn’t think about racial progress as marching along a single axis, along which you can take the proverbial two steps forward and one backward. Instead, racist and antiracist ideas can both progress and coevolve. The election of Barack Obama is surely a testament to our racial progress, but the election of the explicitly racist Donald Trump, and the plethora of rhetorical dance moves made to evade its racist implications, are the latest in racist progress.

A booby-trapped ideology

Taking seriously the idea that racism could have been baked into the Enlightenment(s) and liberalism from the beginning is not about attacking liberalism or impugning Enlightenment heroes. The purpose of talking about how Kant, for example, explicitly and implicitly divided humanity into persons and subpersons is not to declare “Aha! Kant was a racist! Therefore we should ignore Kant, burn his books, and trash the Enlightenment!” No, problematizing Kant is the beginning of the argument, not the end. The real point is to see how racism and other ugly ideas that have been with the Enlightenment from the beginning might still be surreptitiously influencing even the good Enlightenment and liberal ideas we rightly cherish.

Before I give examples I want to cover my flank by pointing out that none of this criticism (or problematization or whatever you want to call it) is unique to the liberal wing of the Enlightenment. What Adam Gurri has called the minefield of prejudice afflicts every modern ideology, for the simple reason that every such ideology has evolved from intellectual ancestors. In his piece, Gurri admirably reflects on the potential pitfalls of his own beliefs and their influences, but argues that progressives and leftists face genealogical dangers as well:

Perhaps no single individual was more influential on the 20th century left than Karl Marx. Putting aside the obvious problems that arose among Marxists specifically—although a small but growing group has begun once again to defend the indefensible—Marx’s antisemitism is well known. What is more, “On the Jewish Question” is considered so central to the development of his thought that it is standard to include it in collections of his writing. His criticism of capitalism is intertwined with his antisemitism in ways that any critic of capitalism employing frameworks on which he had influence ought to be concerned about. Again, this isn’t a matter of discrediting the left …

I’ll give two examples of how liberal ideas are still being quietly influenced by the Enlightenment’s racist origins: the evergreen discourse on race and intelligence and the cloaking of capitalist values in ideal theory. First, my understanding from reading current reputable secondary sources is that race is not a scientifically useful concept, even though certain subcategorizations based on ancestry are valid for certain purposes (like predispositions to disease, etc). In other words, had early biologists not been afflicted with racist ideas to begin with, they would never have seized upon the broad racial categorizations we’re saddled with now. And so even as human biology and population science have progressed to the point (recently!) that the real scientists aren’t pressing racial narratives, we still have a public discourse where these ideas run amok even among good faith and intelligent people who appeal to other Enlightenment ideas to keep the race discourse forever open.

Sam Harris is a good example of this. It is abundantly clear in his debate with Vox’s Ezra Klein (and the associated blog posts and articles) that he doubts whether history or the humanities can have anything useful to say about questions of race and intelligence. He is immune to understanding that centuries of racist ideology have guided and shaped scientific questions in ways that might have lingering effects even now, to say nothing of the obviously persisting effects they have had on popular discourse about these scientific questions, which continue to “just ask questions” about race and intelligence long after the relevant experts themselves have determined race isn’t a useful genetic grouping. The most salient thing for Harris in controversies over race-based science is that he sees people pushing racial narratives as being bullied and their freedom of speech threatened. Harris and Charles Murray and others like them thus harness the Enlightenment values of free speech, open dialogue, and free inquiry — all good and important in and of themselves — to serve the interests of racism.

Second example: the liberal practices of private property and free exchange are extremely important, and they’re central to the progress that we’ve seen in terms of rising living standards and expanding capabilities. But the liberal justifications for economic freedom tend to assume equality of persons in terms of rights and powers, and also that economic freedom has no important dependence on history. Individuals just happen to own property that they bought or inherited fair and square and defending freedom means defending these property claims. But of course the actual distribution of property was brought about by violence, expropriation, and exploitation. Not only this, but the expropriation and exploitation were racialized, meaning persons belonging to groups now understood to be different races systematically bore the brunt of this process. Now, modern liberals defend present holdings and wealth/income distributions based on ideal assumptions and scoff at demands for racial justice and recompense as fundamentally opposed to liberal values. Again, private property and free exchange are essential ideas, but they require subtler justification, and any such justification will fall well short of absolutist interpretations.

Both of these examples show how racialization in the early Enlightenment advantaged non-racialized groups at the expense of racialized groups, and how liberal ideology has worked to conceal the process, even as Enlightenment and liberal ideas have become less racist over time. Grappling with the sometimes implicit racist roots of Enlightenment thinkers helps us to understand how those ideas are still influencing us today.

Featured image is “A Concise History of Black-White Relations in the United States” by Barry Deutsch (Patreon), shown here with permission.


Is Internet Access a Right?


As a teenager I would mow lawns. The implicit contract was relatively simple, but it illustrates the relationship between claims and duties. If I mowed Mrs. Nichols’s lawn, then I had a claim of $15, which she had a duty to pay.

Similarly, when it is claimed that Internet access is a right, then a duty to provide that service naturally follows. In this case, citizens are the claimants, and the state, or perhaps some other agent, becomes duty-bound to furnish Internet access to the citizens.  

Like many other phrases, “Internet access” is polysemous, encompassing a range of possible claims. So, the right to Internet access takes differing forms.

In the United Kingdom, for example, the right to Internet access was recently defined in the context of speed. Instead of placing regulations on Internet service providers or demanding that they abide by buildout requirements, the government made access to high speed Internet of 10 Mbps a right. Now there is a legal requirement for BT to provide high speed broadband to anyone requesting it in any region in the UK, subject to a cost ceiling. In the government’s announcement, it made the goals abundantly clear. The change was meant to “maximise the provision of fixed line connections in the hardest to reach areas.”

In the United States, where broadband begins at 25 Mbps, the right to high speed Internet suggests a different set of duties. Indeed, if the US were to define broadband at the 10 Mbps mark, then the right of Internet access would almost have been met since 96 percent of people in the United States already have this level of access.

Speed and deployment aren’t the only ways to define rightful Internet access. When the United Nations passed a nonbinding resolution making Internet access a human right, there was little mention of speed. Instead, the strongest opprobrium was reserved for Internet shutdowns and privacy violations. This version of a right to Internet access suggests a different set of duties, which the UN explained in its resolution language:

Condemns unequivocally all human rights violations and abuses, such as torture, extrajudicial killings, enforced disappearances and arbitrary detention, expulsion, intimidation and harassment, as well as gender based violence, committed against persons for exercising their human rights and fundamental freedoms on the Internet, and calls on all States to ensure accountability in this regard;

Condemns unequivocally measures to intentionally prevent or disrupt access to or dissemination of information online in violation of international human rights law and calls on all States to refrain from and cease such measures.

Employing the language of Isaiah Berlin, Internet access can be understood as a negative right, which obliges a duty of inaction, or a positive right, which obliges a duty of action. That is, the right to Internet access can be understood as a positive right meaning that everyone should have access to certain speeds. Or, the right to Internet access can be understood as a negative right limiting the government from shutting down the Internet and using Internet access as a pretext for human rights violations.

Notice how Federal Communications Commissioner Michael O’Rielly defined the right to Internet access back in 2015:

It is even more ludicrous to compare Internet access to a basic human right. In fact, it is quite demeaning to do so in my opinion. Human rights are standards of behavior that are inherent in every human being. They are the core principles underpinning human interaction in society. These include liberty, due process or justice, and freedom of religious beliefs. I find little sympathy with efforts to try to equate Internet access with these higher, fundamental concepts.

What the Commissioner is doing here is known as lexical prioritizing. Lexical priority describes how rights are ordered, the way entries are ordered in a dictionary: if right A is lexically prior to right B, then A must be totally exhausted before moving on to B.

Even though he is not explicit, the emphasis by Commissioner O’Rielly on liberty, due process, and justice speaks to the negative right of Internet access, in that government should not be unduly interfering with Internet use. Indeed, the rest of his speech focuses on government hindrances. In this way, he places the negative right before the positive right of Internet access.

While O’Rielly was chided at the time for those remarks, U.S. law is centrally focused on negative rights and limitations on government action. As Judge Richard Posner explained, the Constitution serves as “a charter of negative rather than positive liberties.” That focus, however, does not require that negative Internet access rights be lexically prioritized—there needn’t be a complete satisfaction of the negative rights for us to then turn to positive rights and the issue of broadband buildout.

Rights talk, though, necessitates lexical priority. When the rhetoric of rights is involved, the matter at hand is set apart from other concerns, and is granted claims and duties. One still could retort, “why do we need to classify it as a basic human right in order to argue that the Internet, in this day and age, is a necessity that we want more and more people to have equal access to?”

Yet, rights talk is made within a political context. Philip Tetlock (summarized here by Steven Pinker) correctly characterized this act of prioritizing for what it is, a shrewd move meant to create unpalatable political tradeoffs:

Tetlock distinguishes three kinds of tradeoffs. Routine tradeoffs are those that fall within a single relational model, such as choosing to be with one friend rather than another, or to purchase one car rather than another. Taboo tradeoffs pit a sacred value in one model against a secular value in another, such as selling out a friend, a loved one, an organ, or oneself for barter or cash. Tragic tradeoffs pit sacred values against each other, as in deciding which of two needy transplant patients should receive an organ, or the ultimate tragic tradeoff, Sophie’s choice between the lives of her two children. The art of politics, Tetlock points out, is in large part the ability to reframe taboo tradeoffs as tragic tradeoffs (or, when one is in the opposition, to do the opposite). A politician who wants to reform Social Security has to reframe it from “breaking our faith with senior citizens” (his opponent’s framing) to “lifting the burden on hardworking wage-earners” or “no longer scrimping in the education of our children.” Keeping troops in Afghanistan is reframed from “putting the lives of our soldiers in danger” to “guaranteeing our nation’s commitment to freedom” or “winning the war on terror.”

Currently, when we discuss the issue of Internet access, it is best described as a routine tradeoff. By talk of rights, Internet access is raised to apotheosis. The effect has explicit political implications. By claiming that Internet access is a right, any future discussion of negative consequences will be understood as a taboo tradeoff. Once you make Internet access a right, mentioning the cost will be seen as simply crude.  

But it is a costly endeavor to build out Internet access to everyone. Last year the Federal Communications Commission (FCC) wrote a white paper on the topic and estimated that about $80 billion would be needed to get everyone onto a fixed broadband connection. Half of that would go to connecting the last 2 percent of homes in the United States. And even after this last group got connected, the government would need to provide continuing support.     

What rights talk accomplishes is the creation of a firm rhetorical footing for certain duties or entitlements. Yet you don’t need the right for the entitlement to be present, and for this reason in particular, I’m fairly skeptical of a positive right to Internet access.

For example, not many are concerned with a positive right to telephone access because various programs effectively create broad telephone access. This is done through the federal Universal Service Fund and other state-based versions of this program. For those not steeped in telecommunications policy, the phrase “universal service” has been the moniker under which telephone entitlements have been established, not a telephone access right. Indeed, the Telecommunications Act of 1996 has a section explicitly dedicated to universal service and defines it as the promotion of “quality services at just, reasonable, and affordable rates” and the expansion of “such services to all consumers, including those in low income, rural, insular, and high cost areas at rates that are reasonably comparable to those charged in urban areas.” In short, these are duties without rights.

Separately, I don’t think it is necessary to make Internet access a right in the negative sense because of the broader legal and political climate in the United States. Indeed, there is a case to be made that Internet access rights in the negative sense aren’t rights sui generis, but are merely emanations of other enshrined rights like the right to privacy, freedom of speech, freedom of association, the right of habeas corpus, private property rights, and the right to a speedy trial. In the U.S. it is difficult to shut down the Internet because governments are already constrained in what they can do. In other countries, this isn’t the reality. Both Ethiopia and Sierra Leone shut down the Internet recently, but both countries rank fairly low in international rankings on governmental constraint.

Claiming that Internet access is a right creates a rhetorical line in the sand. Yet, that categorization isn’t needed to make the case that people should have broad and equal access to the service.


Featured image is a partial map of the Internet


The Road to Citizens United Revisited: A Review of We the Corporations


In the 2010 Supreme Court case Citizens United v. FEC,  the Court went out of its way to rule on matters never actually brought up in the original brief, making it a landmark case instead of a rather plain and incremental one based on precedent. Yet the decision was far from surprising given the long history—some might say the relentless march—of corporations fighting for and winning more and more Constitutional rights. Citizens United, as it turns out, was the cherry on top of a long, seemingly teleological movement toward greater and more clearly-defined rights for corporations.

This is not to say that there weren’t gaps in this history in which the courts were hostile to corporate interests, or that the expansion of corporate rights was inevitable from the beginning—only that Citizens United represented a significant step for corporations in which the relationship between the average citizen and the corporation was thrown into sharp relief. Citizens and the predictable ruling in Hobby Lobby shortly after don’t ease the worries of those who think corporations have become dangerously powerful in the 21st century and a potential threat to the democratic way of life.

But it wasn’t always this way. As Adam Winkler points out in We the Corporations, corporations “have pursued a long-standing, strategic effort to establish and expand their Constitutional protections” since the beginning. Their ability to find and fund the best lawyers along with an urge to increase profits and minimize regulation made them, quite literally, “Constitutional first movers” in every sense of the term (xiii). They haven’t just “piggybacked on the rights already held by individuals” (xiii). But in addition to being “first movers,” corporations were also great “Constitutional leveragers … [exploiting] constitutional reforms originally designed for progressive causes, transforming them to serve the ends of capital” (xiii). In essence, corporations have been on both sides of it, bravely hacking their way through unexplored Constitutional terrain as well as widening the Constitutional path traversed previously by individuals.  

The fundamental question at the heart of Winkler’s study is this: “Is a corporation, as Blackstone said, a legal person with rights of its own? Or is a corporation … best understood to be an association of people whose rights are derived from its members?” (70). This philosophical question has, unlike many philosophical questions, very real consequences, and the courts have at different times answered it one way or the other, wavered on the question, and avoided answering it altogether. At other times, they simply seemed to have answered “yes” to the question of “which one?” as if that were a coherent and satisfactory answer. It’s easy to take shots at the decisions, indecisions, and wavering of the courts, but that would be to misunderstand the murky nature of corporations and the questions that follow. Blackstone argued that corporations had three core rights: the ability to own property, to make contracts, and to access the courts. But he also said, tellingly, that they were both public and private entities: while corporate bylaws were binding, they could be modified if “contrary to the law of the land.” As Winkler points out, this is in contrast to our modern view, in which we think of corporations as pretty much purely private entities.

Dartmouth College v. Woodward (1819) put an end to this uneasy tension between public and private, holding that a state legislature couldn’t change or dissolve a corporation’s charter. Chief Justice Marshall concurred with Daniel Webster—the lead attorney arguing on behalf of his alma mater and the plaintiff in the case, Dartmouth—and broke with Blackstone’s view: although a corporation was not a person with all the rights of physical, warm-blooded persons, corporations were largely private, market-based entities created by and held accountable to the rights laid out in the corporation’s original charter; no longer were they the sole creatures of government and amenable to the public good. Thomas Jefferson, interestingly enough, responded:

the idea that institutions established for the use of the nation, cannot be touched nor modified, even to make them answer their end, because of rights gratuitously supposed in those employed to manage them in trust for the public, may perhaps be a salutary provision against the abuses of a monarch, but is most absurd against the nation itself.

The court, also at Webster’s insistence, denied corporate personhood. Of course, this had a double-edge to it. Because corporations were not persons, the court had to “pierce the veil” and consider the corporation’s members, its trustees, its stockholders to make a decision. Corporate personhood—whatever its critics and advocates argue today—was advanced by both anti-business populists for the purpose of limiting corporate rights as well as by pro-business corporationalists for the purpose of securing for corporations the same rights as individual citizens.

As the decisions in Dartmouth and Bank of the U.S. v. Deveaux (1809)—allowing corporations to appear in federal court, a right that was thought to be limited to citizens—settled in, Chief Justice Taney led a populist backlash against corporations, seeking to reverse the reach of these two decisions. “The Taney Court narrowed, refocused, and finally rejected [the] rule that, for access to federal court, the citizenship of a corporation was defined by the citizenship of its members,” and instead ruled that it was the state in which the corporation, as a legal entity, resided (103). Taney also suggested, Winkler writes, that “stockholders should not be able to take advantage of [corporate] personhood to shield their assets, only to turn around and argue for piercing the corporate veil when it came to corporate rights.” This logic would echo through the years, finding itself awkwardly nestled next to rulings like Hobby Lobby (2014), in which business owners were afforded the benefit of corporate piercing so as to deny their employees certain healthcare benefits based on religious reasoning, but were then able to pull the corporate veil back over themselves if, say, someone were to slip and break their arm on company property and sue.

After the Civil War, things went yet another direction. Justice Stephen Field, in a series of decisions and oversteps, finally ruled in Minneapolis & St. Louis Railway Co. v. Beckwith (1889) “that corporations are persons within the meaning of the clause in question.” The clauses in question are, of course, the “equal protection of the laws” and deprivation of property clauses of the Fourteenth Amendment. In other words, Field, who had capitalized on a previous but erroneous headnote from a different case, had just ruled that corporations were persons protected under the Fourteenth Amendment. Although the court at the end of the 19th century still believed in “minimal or necessary restrictions on business activity,” it’s difficult to overstate the precedent this ruling and this era set for the rights of businesses and corporations in the upcoming 20th century—especially considering the fact that many courts around this time were already showing a bias toward laissez-faire, free market thinking in most of their corporate case rulings. Beyond this, we need only look to the fact that, as Winkler points out, a mere 28 of the 604 Fourteenth Amendment cases brought before the courts dealt with “the group whose plight motivated the adoption of the amendment”: African-Americans. “[A]nd in nearly all of those cases the racial minorities lost.”

The beginning of the 20th century kicked off what historians call the Progressive Era and, in terms of the judiciary, the Lochner Era. It was also the trust-building and trust-busting era, as well as the pro-business era—the period pelts the reader with such a tangle of disparate and sometimes conflicting strands that it can be difficult to see what actually came of the years between the 1890s and the 1930s. One of the main things to come out of it, for our purposes, was the court’s ruling that corporations had property rights but not liberty rights like freedom of association. Spurred by the Progressive era urge to regulate the trusts and hold businesses criminally accountable, the courts were forced once again to address “questions about the scope of corporate rights under the Constitution.” But alas, the court never offered a “thoughtful justification of the distinction” between property and liberty rights nor bothered to “define the respective terms.”

Progressive reforms brought corporations and businesses before the courts to once again be “first movers”—this time with regard to the Fourth and Fifth Amendments. Hale v. Henkel (1906) decided that corporations didn’t have a Fifth Amendment right against self-incrimination but did have a Fourth Amendment claim against unreasonable searches and seizures. To be sure, a corporation’s right to not be unreasonably searched was less extensive than that which applied to actual people. The definition of “reasonable” is a much lower bar for corporations.

Two of the most interesting cases that came before the courts materialized in 1916 and 1919. In the first—more accurately a series of cases referred to as the Brewers Cases—the Michigan Supreme Court held that “corporations have no right to influence elections” and upheld a ban on corporate money-flow to campaigns. The other, Dodge Brothers v. Ford Motor Company (1919), held that, at the end of the day, “business corporations must be run in the interests of stockholders.” Corporate law scholar Kent Greenfield refers to the ruling in Dodge as “corporate law’s original sin” (248). Although some scholars argue that the ruling isn’t as consequential—or detrimental to the public good—as some make it out to be, it nonetheless “has become deeply ingrained in America’s corporate culture” as a general, guiding principle if not a legally binding statute. In fact, the reason it is both a shocking and somewhat dull ruling is that, as Jonathan Macey points out, “the rule of wealth maximization for shareholders is virtually impossible to enforce as a practical matter.” Generally speaking, “As long as corporate directors and CEOs claim to be maximizing profits for shareholders, they will be taken at their word, because it is impossible to refute these corporate officials’ self-serving assertions about their motives.” In other words, the ruling has had more cultural consequences than legal ones.

In the mid-20th century, as the court decided corporations do indeed have press and speech rights, the fuzzy line previous courts drew between property and liberty rights was tossed out. As Winkler says, Justice Harlan Stone’s famous footnote number four in United States v. Carolene Products “marks the end of the Lochner era, when the court devoted itself mainly to protecting economic and property rights, and the beginning of the Brown era, when the court’s primary role became protecting civil rights and civil liberties” (232).  The famous footnote claimed that the courts should work to protect “discrete and insular minorities” who are the usual targets of majority persecution. As Winkler goes on to point out, though, Justice Stone “was not just referring to racial minorities; he also meant political minorities” (232). Or more accurately, any person, group, or entity that was the target of majoritarian impulses, oppression, or unfair treatment.

If our vantage point is the recent Citizens United decision, the mid- to late 20th century is best seen as an essentially pro-business century—perhaps unsurprisingly since free market principles have ushered in an era of unprecedented wealth and comfort. So Citizens United didn’t just appear out of thin air; the scaffolding to support it was largely in place by the end of the century, much to the chagrin of Jeffersonian populist-minded justices like Louis Brandeis. Hence, the unsurprising nature of the decision given the long view of corporate rights cases.

From the standpoint of judicial activism—a term not just reserved for more liberal decisions—the majority decision in Citizens United reached a new high. Originally only briefed on whether the FEC was wrong to censor a political documentary about Hillary Clinton funded with money from a corporation’s treasury, the conservative band of Alito, Scalia, Thomas, and Kennedy pushed back against Chief Justice Roberts’s original narrow ruling. “The court,” this band of right-leaning justices explicitly stated, “should not focus on the narrow question of whether the Bipartisan Campaign Reform Act applied to this one movie but on the bigger question of whether corporate political expenditures could be limited at all.” “Their draft opinion was a bold corporationalist statement on the expansive rights of corporations that blew well past” anything the lawyers in the case were arguing for. As Winkler observes:

Judicial activism is often just a label given to court rulings someone opposes. In Citizens United, however, the charge was not without justification. The court’s majority had finessed the case so that the justices could decide a major constitutional issue that had not originally been briefed and argued by the parties. The court had also struck down key provisions of a law passed by Congress, and in doing so overturned McConnell, a precedent that was less than seven years old.

What had happened? Neither the long shadow of the 20th century nor the recent precedents and rulings of the 21st could explain this radical shift; little more than a change of Supreme Court personnel accounts for it. More on Citizens' precedent-shattering nature in a moment.

Hobby Lobby—the case that essentially determined that corporations have religious freedom, though it declined to rule explicitly on the Free Exercise Clause—flows more or less naturally from this history. "The Supreme Court's decision… was a near perfect embodiment of the more than two-hundred-year history of corporate rights jurisprudence" (380). Another victory for the corporation, we might say dryly.

But the whole point of Winkler’s sweep of the history of corporate rights is to illustrate that ever since this country’s founding, corporations

have fought to win a greater share of the individual rights guaranteed by the Constitution. First they won constitutional protection for the core rights of corporations identified by Blackstone in his Commentaries: rights of property, contract, and access to court. Then they won the rights of due process and equal protection under the Fourteenth Amendment [before both minorities and women, mind you] and the protections of the criminal procedure provisions of the Constitution. In the early twentieth century, the court said that there were nonetheless limits to the constitutional rights of corporations: they had property rights but not liberty rights. Eventually, however, the court broke down that distinction and began to recognize corporations have liberty rights such as freedom of the press and of association.

And the rest, that is, the rest of the 21st century, is history.

For anyone looking for a broad and relatively unbiased sweep of the history of corporate rights in the United States, Winkler’s We the Corporations is a good place to start. Although Winkler doesn’t offer a tantalizing conclusion or any solutions, the history is suggestive—which is all we can ever really ask of history.


The Eclipse of Thought

Michael Polanyi’s preface to his 1951 book The Logic of Liberty begins:

“These pieces were written in the course of the last eight years. They represent my consistently renewed efforts to clarify the position of liberty in response to a number of questions raised by our troubled period of history.”

One of those pieces, “Perils of Inconsistency,” was repurposed two decades later by the then-elderly Polanyi and his collaborator Harry Prosch as the introductory chapter of Meaning, a book shaped mostly from lectures Polanyi gave in 1969. Retitled “The Eclipse of Thought,” the chapter is virtually a reprint of “Perils of Inconsistency,” but with nine new paragraphs prepended.

Here we republish “The Eclipse of Thought,” by Polanyi and Prosch, originally published as Chapter 1 of Meaning, © 1975 by The University of Chicago, with kind permission from University of Chicago Press.

This is the first in what will be an occasional Liberal Currents series republishing classic essays by liberal thinkers.



In a sense this book could be said to be about intellectual freedom. Yet its title, Meaning, is not really misleading, since, as we shall see, the achievement of meaning cannot properly be divorced from intellectual freedom.

Perhaps it could go without saying that intellectual freedom is threatened today from many directions. The ideologies of the left and the right of course have no use for it. In every one of these ideologies there is always some person, group, or party (in other words, some elite) which is supposed to know better than anyone else what is best for all of us; and it is assumed in these ideologies that it is the function of the rest of us—whether doctors, lawyers, or Indian chiefs—to support these “wise” decisions. The examples of fascism and of Marxist communism, especially as developed under Stalin, remain only too painfully present in the consciousness of twentieth-century man; moreover, the works of such writers as Milovan Djilas show us that even the most anti-Stalinist and liberal Communist regimes also engage in the repression of intellectual freedom.

We of the so-called Western world have opposed these totalitarian tyrannies—even to the extent of war. But we ourselves have also threatened intellectual freedom. We have not, to be sure, drowned it in blood, as Hitler and Stalin did. Our threats have been much more devious. We have choked it with cotton, smothered it under various blankets. We have concealed our own affirmation of the value and freedom of our intellect under detached explanatory principles, like the pleasure-pain principle, the notion of the restoration of frustrated activity, the principle of conditioning—and even the concept of free choice itself! In such circuitous ways as these we have denigrated thought and all its works, demoting them to subordinate positions in which thought is conceived to function rightfully only when serving as a means to the satisfaction of supposedly more basic needs or wants, i.e., more material, more biological, more instinctive, more comforting.

Utilitarianism and pragmatism have both, in different ways, declared thought to possess a legitimate function or significance only in relation to social welfare—a welfare conceived largely in terms of physical and material satisfactions. The behaviorists, culminating in B. F. Skinner, have reduced thought to various forms of conditioned behavior and have directed us to look “beyond freedom and dignity”—beyond the life of self-control and self-direction—to the manipulated learning of a set of tricks supposed to be ultimately good for us to have learned. This learning would require us to be placed (by whom?) in a better-organized Skinner Box than that constituted by our present societies.[i] Old Protagoras, if we can trust Plato’s interpretation of him, would have felt right at home with these ideas.

The only modern philosophic school that seems to exhibit respect for intellectual freedom is existentialism, but since it manages to smother the intellectual part of intellectual freedom under a more generic notion of freedom per se, it tends to weaken, in the end, our respect for intellectual freedom by reducing it in practice to the level of betting on the turn of a die. For these philosophers say there are no grounds for choices except the grounds we give ourselves, i.e., except the ones we choose. As Sartre puts it, value arises simply from our choices. What we choose, we value simply because we have chosen it (and apparently we remain scot-free at any moment to nonvalue it by simply un-choosing it). In other words, we do not choose (in his view) because we see the value of something. We see the value of something because we have chosen it. For him, therefore, every choice must ultimately be nonrational, because every rational choice, it is said, is ultimately grounded in a “prerational” choice. This position tells us, therefore, that there can be no reasons for our basic choices. Thought turns out to be of utilitarian value only—and then only when it happens to be of such value.

That this view may very well falter in its respect for intellectual freedom can be seen in the examples both Sartre and Simone de Beauvoir have given us by their on-again-off-again acceptance of various Communist suppressions of “bourgeois” artists and thinkers. After all (as Sartre and de Beauvoir say—sometimes), no one governs innocently anyhow. All governments interfere with the exercise of some sorts of freedom. Since these philosophers (consistently) refuse to make any philosophically based value distinctions between different sorts of freedoms—or even between different uses of these different freedoms—they seem to echo old Bentham’s remark: “Pushpin is as good as poetry.” To repress one is no better and no worse than to repress the other.

We shall see, however, that the existentialists are closer to the truth in their view than any of the other academically popular Western philosophies, because there is a sense in which it is true that determinative reasons cannot be given for every choice—in fact, not for any choice. But the way existentialists have conceived this fact has generated unnecessarily antiintellectual attitudes, with disastrous consequences for the very freedom they value so fundamentally or, in their terms, “choose” so fundamentally.

It might be thought that our inquiry should now be directed to whether or not these erosions of respect for intellectual freedom in our day are justifiable. But even to raise this question is to answer it in the negative. For the attempt to judge any matter whatsoever is the attempt to think seriously about this matter, and such thinking cannot be undertaken without a tacit acceptance of the power of thought to reach valid conclusions. So our attempt to discover whether a right to intellectual freedom, i.e., the freedom to pursue subjects or problems intellectually, is or is not justified already assumes tacitly that it is justified.

Admitting, therefore, that the eclipse of our respect for freedom of thought cannot be justified, since it would require freedom of thought to justify it, we realize that nothing could have destroyed respect for freedom of thought but its own misuse; for it is only free thought that could call into serious question the validity of anything, including itself. Let us see therefore if we can discover how this self-destruction of thought came about.


From a careful study of the history of thought in our own time it is possible to see that freedom of thought destroyed itself when thought pursued to its ultimate conclusions a self-contradictory conception of its own freedom.

Modern thought in the widest sense emerged with the emancipation of the human mind from a mythological and magical interpretation of the universe. We know when this first happened, at what place, and by what method. We owe this act of liberation to Ionian philosophers who flourished in the sixth century B.C. and to other philosophers of Greece who continued their work in the succeeding thousand years. These ancient thinkers enjoyed much freedom of speculation but never raised decisively the issues of intellectual freedom.

The millennium of ancient philosophy was brought to a close by Saint Augustine. There followed the long rule of Christian theology and the Church of Rome over all departments of thought. The rule of ecclesiastic authority was impaired first in the twelfth century by a number of sporadic intellectual achievements. Then, as the Italian Renaissance blossomed out, the leading artists and thinkers of the time brought religion more and more into neglect. The Italian church itself seemed to yield to the new secular interests. Had the whole of Europe at that time been of the same mind as Italy, Renaissance humanism might have established freedom of thought everywhere, simply by default of opposition. Europe might have returned to—or, if you like, relapsed into—a liberalism resembling that of pre-Christian antiquity. Whatever may have followed after that, our present disasters would not have occurred.

However, there arose instead in a number of European countries—in Germany, Switzerland, Spain—a fervent religious revival, accompanied by a schism of the Christian church, which was to dominate people’s minds for almost two centuries. The Catholic church sharply reaffirmed its authority over the whole intellectual sphere. The thoughts of men were moved, and politics were shaped, by the struggle between Protestantism and Catholicism, to which all contemporary issues contributed through their alliance with one side or the other.

By the beginning of the present century the wars between Catholics and Protestants had long ceased, yet the formulation of liberal thought still remained largely determined by the reaction of past generations against the old religious wars. Liberalism was motivated, to start with, by a detestation of religious fanaticism. It appealed to reason for a cessation of religious strife. This desire to curb religious violence was the prime motive of liberalism in both Anglo-American and Continental areas; yet from the beginning the reaction against religious fanaticism differed somewhat in these two areas, and this difference has since become increasingly accentuated, with the result that liberty has been upheld in the Western area up to this day but has suffered an eclipse in central and eastern Europe.

Anglo-American liberalism was first formulated by Milton and Locke. Their argument for freedom of thought was twofold. In its first part (for which we may cite the Areopagitica) freedom from authority is demanded so that truth may be discovered. The main inspiration for this movement came from the struggle of the rising natural sciences against the authority of Aristotle. Its program was to let everyone state his beliefs and to allow others to listen and form their own opinions; the ideas which would prevail in a free and open battle of wits would be as close an approximation to the truth as can be humanly achieved. We may call this the antiauthoritarian formula of liberty. Closely related to it is the second half of the argument for liberty, which is based on philosophic doubt. While its origins go back a long way (right to the philosophers of antiquity), this argument was first formulated as a political doctrine by Locke. It says simply that we can never be so sure of the truth in matters of religion as to warrant the imposition of our views on others. These two pleas for freedom of thought were put forward and accepted in England at a time when religious beliefs were unshaken and indeed dominant throughout the nation. The new tolerance aimed preeminently at the reconciliation of different denominations in the service of God. Atheists were refused tolerance by Locke on the ground that they were socially unreliable.

On the Continent the twofold doctrine of free thought—antiauthoritarianism and philosophic doubt—gained ascendance somewhat later than in England and moved straightway to a more extreme position. This position was first effectively formulated in the eighteenth century by the philosophy of Enlightenment, which was primarily an attack on religious authority, particularly that of the Catholic church. It professed a radical skepticism. The books of Voltaire and the French Encyclopedists, expounding this doctrine, were widely read in France, while abroad their ideas spread into Germany and far into eastern Europe. Frederick the Great and Catherine of Russia were among their correspondents and disciples. The type of Voltairean aristocrat, represented by the old Prince Bolkonski in War and Peace, was to be found at court and in feudal residences over many parts of Continental Europe at the close of the eighteenth century. The depth to which the philosophes had influenced political thought in their own country was to be revealed by the French Revolution.

Accordingly, the mood of the French Enlightenment, though often angry, was always supremely confident. Its followers promised mankind relief from all social ills. One of the central figures of the movement, the Baron d’Holbach, declared in 1770 that man is miserable simply because he is ignorant. His mind is so infected with prejudices that one might think him forever condemned to err. It is error, he held, that has evoked the religious fears which shrivel men up with fright or make them butcher each other for chimeras. “To errour must be attributed those inveterate hatreds, those barbarous persecutions, those numerous massacres, those dreadful tragedies, of which, under pretext of serving the interests of Heaven, the earth has been but too frequently made the theatre.”[ii]

This explanation of human miseries and the remedy promised for them continued to carry conviction with the intelligentsia of Europe long after the French Revolution. It remained an axiom among progressive people on the Continent that to achieve light and liberty you first had to break the power of the clergy and eliminate the influence of religious dogma. Battle after battle was fought in this campaign. Perhaps the fiercest engagement was the Dreyfus Affair at the close of the century, in which clericalism was finally defeated in France and was further weakened throughout Europe. It was at about this time that W. E. H. Lecky wrote: “All over Europe the priesthood are now associated with a policy of toryism, of reaction, or of obstruction. All over Europe the organs that represent dogmatic interests are in permanent opposition to the progressive tendencies around them, and are rapidly sinking into contempt.”[iii]

I well remember this triumphant sentiment. We looked back on earlier times as on a period of darkness, and with Lucretius we cried in horror: Tantum religio potuit suadere malorum—what evils religion has inspired! So we rejoiced at the superior knowledge of our age and its assured liberties. The promises of peace and freedom given to the world by the French Enlightenment had indeed been wonderfully fulfilled toward the end of the nineteenth century. You could travel all over Europe and America without a passport and settle down wherever you pleased. With the exception of Russia, you could, throughout Europe, print anything without prior censorship and could sharply oppose any government or creed with impunity. In Germany—much criticized at the time for being authoritarian—biting caricatures of the emperor were published freely. Even in Russia, whose regime was the most oppressive, Marx’s Kapital appeared in translation immediately after its first publication and received favorable reviews throughout the press. In the whole of Europe not more than a few hundred people were forced into political exile. Over the entire planet all men of European origins were living in free intellectual and personal communication. It is hardly surprising that the universal establishment of peace and tolerance through the victory of modern enlightenment was confidently expected at the turn of the century by a large majority of educated people on the Continent.

Thus we entered the twentieth century as on an age of infinite promise. Few people realized that we were walking into a minefield, though the mines had all been prepared and carefully laid in open daylight by well-known thinkers of our own time. Today we know how false our expectations were. We have all learned to trace the collapse of freedom in the twentieth century to the writings of certain philosophers, particularly Marx, Nietzsche, and their common ancestors, Fichte and Hegel. But the story has yet to be told how we came to welcome as liberators the philosophies that were to destroy liberty.

We have said that we consider the collapse of freedom in central and eastern Europe to be the outcome of an internal contradiction in the doctrine of liberty. But why did it destroy freedom in large parts of Continental Europe without producing similar effects, so far, in the Western or Anglo-American area of our civilization? Wherein lies this inconsistency?

The argument of doubt put forward by Locke in favor of tolerance says that we should admit all religions since it is impossible to demonstrate which one is true. This implies that we must not impose beliefs that are not demonstrable. Let us apply this doctrine to ethical principles. It follows that, unless ethical principles can be demonstrated with certainty, we should refrain from imposing them and should tolerate their total denial. But, of course, ethical principles cannot, in a strict sense, be demonstrated: you cannot prove the obligation to tell the truth, to uphold justice and mercy. It would follow therefore that a system of mendacity, lawlessness, and cruelty is to be accepted as an alternative to ethical principles and on equal terms. But a society in which unscrupulous propaganda, violence, and terror prevail offers no scope for tolerance. Here the inconsistency of a liberalism based on philosophic doubt becomes apparent: freedom of thought is destroyed by the extension of doubt to the field of traditional ideals, which includes the basis for freedom of thought.

The consummation of this destructive process was prevented in the Anglo-American region by an instinctive reluctance to pursue the accepted philosophic premises to their ultimate conclusions. One way of avoiding this was to pretend that ethical principles could actually be scientifically demonstrated. Locke himself started this train of thought by asserting that good and evil can be identified with pleasure and pain and by suggesting that all ideals of good behavior are merely maxims of prudence.

However, the utilitarian calculus cannot in fact demonstrate our commitment to ideals which demand serious sacrifices of us. A man’s sincerity in professing his ideals is to be measured rather by the lack of prudence he shows in pursuing them. The utilitarian confirmation of unselfishness is not more than a pretense by which traditional ideals are made acceptable to a philosophically skeptical age. Camouflaged as long-term selfishness or “intelligent self-interest,” the traditional ideals of man are protected from destruction by skepticism.

It would thus appear that the preservation of Western civilization up to this day within the Anglo-American tradition of liberty was due to this speculative restraint, which amounted to a veritable suspension of logic within British empiricist philosophy. It was enough to pay philosophic lip service to the supremacy of the pleasure principle. Ethical standards were not really replaced by new purposes; still less was there any inclination to abandon these standards in practice. The masses of the people and their leaders in public life could in fact disregard the accepted philosophy, both in deciding their personal conduct and in building up their political institutions. The whole sweeping advance of moral aspirations to which the Age of Reason opened the way—the English Revolution, the American Revolution, the French Revolution, the first liberation of slaves in the British Empire, the Factory Reforms, the founding of the League of Nations, Britain’s stand against Hitler, the offering of Lend-Lease, U.N.R.R.A., and Marshall Plan aid, the sending of millions of food parcels by individual Americans to unknown beneficiaries in Europe—in all these decisive actions, public opinion was swayed by moral forces, by charity, by a desire for justice and a detestation of social evils, despite the fact that these moral forces had no true justification in the prevailing philosophy of the age. Utilitarianism and other allied materialistic formulations of traditional ideals remained merely verbal. Their philosophic rejection of universal moral standards led only to a sham replacement; or, to speak technically, it led to a “pseudosubstitution” of utilitarian purposes for moral principles.

The speculative and practical restraints which saved liberalism from self-destruction in the Anglo-American area were due in the first place to the distinctly religious character of this liberalism. As long as philosophic doubt was applied only to secure equal rights to all religions and was prohibited from demanding equal rights for irreligion, the same restraint would automatically apply in respect to moral beliefs. A skepticism kept on short leash for the sake of preserving religious beliefs would hardly become a menace to fundamental moral principles. A second restraint on skepticism, closely related to the first, lay in the establishment of democratic institutions at a time when religious beliefs were still strong. These institutions (for example, the American Constitution) gave effect to the moral principles which underlie a free society. The tradition of democracy embodied in these institutions proved strong enough to uphold in practice the moral standards of a free society against any critique that would question their validity.

Both of these protective restraints, however, were absent in those parts of Europe where liberalism was based on the French Enlightenment. This movement, being antireligious, imposed no restraint on skeptical speculations, nor were the standards of morality embodied there in democratic institutions. When a feudal society, dominated by religious authority, was attacked by a radical skepticism, a liberalism emerged which was protected by neither a religious nor a civic tradition from destruction by the philosophic skepticism to which it owed its origin.

Here, in brief, is what happened. From the middle of the eighteenth century, Continental thought faced up seriously to the fact that universal standards of reason could not be philosophically justified in the light of the skeptical attitude which had initiated the rationalist movement. The great philosophic tumult which started in the second half of the eighteenth century on the Continent of Europe and finally led up to the philosophic disasters of our own day represented an incessant preoccupation with the collapse of the philosophic foundations of rationalism. Universal standards of human behavior having fallen into philosophic disrepute, various substitutes were put forward in their place.

One such substitute standard was derived from the contemplation of individuality. The case for the uniqueness of the individual is set out as follows in the opening words of Rousseau’s Confessions: “Myself alone . . . . There is no one who resembles me . . . . We shall see whether Nature was right in breaking the mould into which she had cast me.” Individuality here challenged the world to judge it, if it could, by universal standards. Creative genius claimed to be the renewer of all values and therefore incommensurable. Extended to whole nations, this claim accorded each nation its unique set of values, which could not be criticized in the light of universal reason. A nation’s only obligation was, like that of the unique individual, to realize its own powers. In following the call of its destiny, a nation must allow no other nation to stand in its way.

If you apply this claim for the supremacy of uniqueness—which we may call romanticism—to individual persons, you arrive at a general hostility to society, as exemplified in the anticonventional and almost extraterritorial attitude of the Continental bohème. If applied to nations, it results, on the contrary, in the conception of a unique national destiny, which claims the absolute allegiance of all its citizens. The national leader combines the advantages of both. He can stand entranced in the admiration of his own uniqueness while identifying his personal ambitions with the destiny of the nation lying at its feet.

Romanticism was a literary movement and a change of heart rather than a philosophy. Its counterpart in systematic thought was constructed by the Hegelian dialectic. Hegel took charge of Universal Reason, emaciated to a skeleton by its treatment at the hands of Kant, and clothed it with the warm flesh of history. Declared incompetent to judge historical action, reason was given the comfortable position of being immanent in history. An ideal situation: “Heads you lose, tails I win.” Identified with the stronger battalions, reason became invincible—but unfortunately also redundant.

The next step was therefore, quite naturally, the complete disestablishment of reason. Marx and Engels decided to turn the Hegelian dialectic right way up. No longer should the tail pretend to wag the dog. The bigger battalions should be recognized as makers of history in their own right, with reason as a mere apologist to justify their conquests.

The story of this last development is well known. Marx reinterpreted history as the outcome of class conflicts, which arise from the need of adjusting “the relations of production” to “the forces of production.” Expressed in ordinary language, this says that, as new technical equipment becomes available from time to time, it is necessary to change the order of property in favor of a new class; this change is invariably achieved by overthrowing the hitherto-favored class. Socialism, it was said, brings these violent changes to a close by establishing the classless society. From its first formulation in the Communist Manifesto this doctrine puts the “eternal truths, such as Freedom, Justice, etc.”—which it mentions in these terms—in a very doubtful position. Since these ideas are supposed always to have been used only to soothe the conscience of the rulers and to bemuse the suspicions of the exploited, there is no clear place left for them in the classless society. Today it has become apparent that there is indeed nothing in the realm of ideas, from law and religion to poetry and science, from the rules of football to the composition of music, that cannot readily be interpreted by Marxists as a mere product of class interest.

Meanwhile the legacy of romantic nationalism, developing on parallel lines, was also gradually transposed into materialistic terms. Wagner and Walhalla no doubt affected Nazi imagery; Mussolini gloried in recalling imperial Rome. But the really effective idea of Hitler and Mussolini was their classification of nations into haves and have-nots on the model of Marxian class war. The actions of nations were in this view not determined, or capable of being judged, by right or wrong: the haves preached peace and the sacredness of international law, since the law sanctioned their holdings, but this code was unacceptable to virile have-not nations. The latter would rise and overthrow the degenerate capitalistic democracies, which had become the dupes of their own pacific ideology, originally intended only to bemuse the underdogs. And so the text of Fascist and National Socialist foreign policy ran on, exactly on the lines of a Marxism applied to class war between nations. Indeed, already by the opening of the twentieth century, influential German writers had fully refashioned the nationalism of Fichte and Hegel on the lines of a power-political interpretation of history. Romanticism had been brutalized and brutally romanticized until the product was as tough as Marx’s own historic materialism.

We have here the final outcome of the Continental cycle of thought. The self-destruction of liberalism, which was kept in a state of suspended logic in the Anglo-American field of Western civilization, was here brought to its ultimate conclusion. The process of replacing moral ideals by philosophically less vulnerable objectives was carried out in all seriousness. This is not a mere pseudosubstitution but a real substitution of human appetites and human passions for reason and the ideals of man.

This brings us right up to the scene of the revolutions of the twentieth century. We can see now how the philosophies which guided these revolutions—and destroyed liberty wherever they prevailed—were originally justified by the antiauthoritarian and skeptical formulas of liberty. They were indeed antiauthoritarian and skeptical in the extreme. They even set man free from obligations toward truth and justice, reducing reason to its own caricature: to a mere rationalization of positions that were actually predetermined by desire and were held—or secured—by force alone. Such was the final measure of this liberation: man was to be recognized henceforth as maker and master, no longer as servant, of what before had been his ideals.

This liberation, however, destroyed the very foundations of liberty. If thought and reason are nothing in themselves, it is meaningless to demand that thought be set free. The boundless hopes which the Enlightenment of the eighteenth century attached to the overthrow of authority and to the pursuit of doubt were hopes attached to the release of reason. Its followers firmly believed—to use Jefferson’s majestic vocabulary—in “truths that are self-evident,” which would guard “life, liberty, and the pursuit of happiness” under governments “deriving their just powers from the consent of the governed.” They relied on truths, which they trusted to be inscribed in the hearts of man, for establishing peace and freedom among men everywhere. The assumption of universal standards of reason was implicit in the hopes of the Enlightenment, and the philosophies that denied the existence of such standards denied therefore the foundation of all these hopes.

But it is not enough to show how a logical process, starting from an inadequate formulation of liberty, led to philosophic conclusions that contradicted liberty. We have yet to show that this contradiction was actually put into operation, that these conclusions were not merely entertained and believed to be true but were met by people prepared to act upon them. If ideas cause revolutions, they can do so only through people who will act upon them. If this account of the fall of liberty in Europe is to be satisfactory, it must show that there were people who actually transformed philosophic error into destructive human action.

Of such people we have ample documentary evidence among the intelligentsia of central and eastern Europe. They are the nihilists.

There is an interesting ambiguity in the connotations of the word “nihilism” which at first may seem confusing but actually turns out to be illuminating. As the title of Rauschning’s book—The Revolution of Nihilism—shows, he interpreted the National Socialist upheaval as a revolution.[iv] As against this, reports from central Europe often spoke of widespread nihilism, meaning a lack of public spirit, the apathy of people who believe in nothing. This curious duality of nihilism, which makes it a byword for both complete self-centeredness and violent revolutionary action, can be traced to its earliest origins. The word was popularized by Turgenev in his Fathers and Sons, written in 1862. His prototype of nihilism, the student Bazarov, is an extreme individualist without any interest in politics. Nor does the next similar figure of Russian literature, Dostoevski’s Raskolnikov in Crime and Punishment (1865), show any political leanings. What Raskolnikov is trying to find out is why he should not murder an old woman if he wanted her money. Both Bazarov and Raskolnikov are experimenting privately with a life of total disbelief. But within a few years we see the nihilist transformed into a political conspirator. The terrorist organization of the Narodniki, or Populists, had come into being. Dostoevski portrayed the new type in his later novel The Possessed. The nihilist now appears as an ice-cold businesslike conspirator, closely prefiguring the ideal Bolshevik as I have seen him represented on the Moscow stage in the didactic plays of the early Stalinist period. Nor is the similarity accidental. The whole code of conspiratorial action—the cells, the secrecy, the discipline and ruthlessness—known today as the Communist method, was taken over by Lenin from the Populists. The proof of this can be found in articles published by him in 1901 and 1902.[v]

English and American people find it difficult to understand nihilism, for most of the doctrines professed by nihilists have been current among themselves for some time without turning those who held them into nihilists. Great, solid Bentham would not have disagreed with any of the views expounded by Turgenev’s prototype of nihilism, the student Bazarov. But while Bentham and other skeptically minded Englishmen may use such philosophies merely as a mistaken explanation of their own conduct—which in actual fact is determined by their traditional beliefs—the nihilist Bazarov and his kind take such philosophies seriously and try to live by their light.

The nihilist who tries to live without any beliefs, obligations, or restrictions stands at the first, the private, stage of nihilism. He is represented in Russia by the earlier type of intellectual described by Turgenev and the younger Dostoevski. In Germany we find nihilists of this kind growing up in large numbers under the influence of Nietzsche and Stirner; and later, between 1910 and 1930, we see emerging in direct line of their succession the great German Youth Movement, with its radical contempt for all existing social ties.

But the solitary nihilist is unstable. Starved for social responsibility, he is liable to be drawn into politics, provided he can find a movement based on nihilistic assumptions. Thus, when he turns to public affairs, he adopts a creed of political violence. The cafés of Munich, Berlin, Vienna, Prague, and Budapest, where writers, painters, lawyers, and doctors had spent so many hours in amusing speculation and gossip, thus became in 1918 the recruiting grounds for the “armed bohemians,” whom Heiden in his book on Hitler describes as the agents of the European revolution.[vi] In much the same way, the Bloomsbury of the unbridled twenties unexpectedly turned out numerous disciplined Marxists around 1930.

The conversion of the nihilist from extreme individualism to the service of a fierce and narrow political creed is the turning point of the European revolution. The downfall of liberty in Europe consisted in a series of such individual conversions.

Their mechanism deserves closest attention. Take, first, conversion to Marxism. Historical—or dialectical—materialism had all the attractions of a second Enlightenment; taking off and carrying on from the first, antireligious, Enlightenment, it offered the same intense intellectual satisfaction. Those who accepted its guidance felt suddenly initiated into a knowledge of the real forces actuating men and operating in history, into a grasp of reality that had hitherto been hidden to them—and still remained hidden to the unenlightened—by a veil of deceit and self-deceit. Marx, and the whole materialistic movement of which he formed a part, had turned the world right side up before their eyes, revealing to them the true springs of human behavior.

Marxism also offered them a future of unbounded promise for humanity. It predicted that historic necessity would destroy an antiquated form of society and replace it by a new one, in which the existing miseries and injustices would be eliminated. Though this prospect was put forward as a purely scientific observation, it endowed those who accepted it with a feeling of overwhelming moral superiority. They acquired a sense of righteousness, and this in a paradoxical manner was fiercely intensified by the mechanical framework in which it was set. Their nihilism had prevented them from demanding justice in the name of justice or humanity in the name of humanity; these words were banned from their vocabulary, and their minds were closed to such concepts. But their moral aspirations, thus silenced and repressed, found an outlet in the scientific prediction of a perfect society. Here was set out a scientific utopia, relying for its fulfillment only on violence. Nihilists could accept, and would eagerly embrace, such a prophecy, which required from its disciples no other belief than a belief in the force of bodily appetites and yet at the same time satisfied their most extravagant moral hopes. Their sense of righteousness was thus reinforced by a calculated brutality born of scientific self-assurance. There emerged the modern fanatic, armored with impenetrable skepticism.

The power of Marxism over the mind is based here on a process exactly the inverse of Freudian sublimation. The moral needs of man, denied expression in terms of ideals, are injected into a system of naked power, to which they impart the force of blind moral passion. With some qualification the same thing is true of National Socialism’s appeal to the mind of German youth. By offering them an interpretation of history in the materialistic terms of international class war, Hitler mobilized their sense of civic obligation which would not respond to humane ideals. It was a mistake to regard the Nazi as an untaught savage. His bestiality was carefully nurtured by speculations closely reflecting Marxian influence. His contempt for humanitarian ideals had a century of philosophic schooling behind it. The Nazi disbelieved in public morality the way we disbelieve in witchcraft. It is not that he had never heard of it; he simply thought he had valid grounds for asserting that such a thing cannot exist. If you told him the contrary, he would think you peculiarly old-fashioned or simply dishonest.

In such men the traditional forms for holding moral ideals had been shattered and their moral passions diverted into the only channels which a strictly mechanistic conception of man and society left open to them. We may describe this as a process of moral inversion. The morally inverted person has not merely performed a philosophic substitution of material purposes for moral aims; he is acting with the whole force of his homeless moral passions within a purely materialistic framework of purposes.

It remains only to describe the actual battlefield on which the conflict that led to the downfall of liberty in Europe was fought out. Let us approach the scene from the West. Toward the close of the First World War, Europeans heard from across the Atlantic the voice of Wilson appealing for a new Europe in terms of pure eighteenth-century ideas. “What we seek,” he summed up in his declaration of the Fourth of July, 1918, “is the reign of law, based upon the consent of the governed and sustained by the organized opinion of mankind.” When, a few months later, Wilson landed in Europe, a tide of boundless hope swept through its lands. They were the old hopes of the eighteenth and nineteenth centuries, only much brighter than ever before.

Wilson’s appeal and the response it evoked marked the high tide of the original moral aspirations of the Enlightenment. This event showed how, in spite of the philosophic difficulties which impaired the foundations of overt moral assertions, such assertions could still be vigorously made in the regions of Anglo-American influence.

But the great hopes spreading from the Atlantic seaboard were contemptuously rejected by the nihilistic or morally inverted intelligentsia of central and eastern Europe. To Lenin, Wilson’s language was a huge joke; from Mussolini or Goebbels it might have evoked an angry sneer. And the political theories which these men and their small circle of followers were mooting at this time were soon to defeat the appeal of Wilson and of democratic ideals in general. They were to establish within roughly twenty years a comprehensive system of totalitarian governments over Europe, with a good prospect of subjecting the whole world to such government.

The sweeping success of Wilson’s opponents was due to the greater appeal their ideas had for a considerable section of the populace in the central and eastern European nations. Admittedly, their final rise to power was achieved by violence, but not before they had gained sufficient support in every stratum of the population so that they could use violence effectively. Wilson’s doctrines were first defeated by the superior convincing power of opposing philosophies, and it is this new and fiercer Enlightenment that has continued ever since to strike relentlessly at every humane and rational principle rooted in the soil of Europe.

The downfall of liberty which in every case followed the success of these attacks demonstrates in hard facts what we said before: that freedom of thought is rendered pointless and must disappear wherever reason and morality are deprived of their status as a force in their own right. When a judge in a court of law can no longer appeal to law and justice; when neither a witness, nor the newspapers, nor even a scientist reporting on his experiments can speak the truth as he knows it; when in public life there is no moral principle commanding respect; when the revelations of religion and of art are denied any substance; then there are no grounds left on which any individual may justly make a stand against the rulers of the day. Such is the simple logic of totalitarianism. A nihilistic regime will have to undertake the day-to-day direction of all activities which are otherwise guided by the intellectual and moral principles that nihilism declares empty and void. Principles must be replaced by the decrees of an all-embracing party line.

This is why modern totalitarianism, based on a purely materialistic conception of man, is of necessity more oppressive than an authoritarianism enforcing a spiritual creed, however rigid. Take the medieval church even at its worst. The authority of certain texts which it imposed remained fixed over long periods of time, and their interpretation was laid down in systems of theology and philosophy developed over more than a millennium, from Saint Paul to Aquinas. A good Catholic was not required to change his convictions and reverse his beliefs at frequent intervals in deference to the secret decisions of a handful of high officials. Moreover, since the authority of the church was spiritual, it recognized other independent principles outside its own. Though it imposed numerous regulations on individual conduct, many parts of life were left untouched, and these were governed by other authorities, rivals of the church such as kings, noblemen, guilds, corporations. What is more, the power of all these was transcended by the growing force of law, and a great deal of speculative and artistic initiative was also allowed to pulsate freely through this many-sided system.

The unprecedented oppressiveness of modern totalitarianism has become widely recognized on the Continent today and has gone some way towards allaying the feud between the champions of liberty and the upholders of religion, which had been going on there since the beginning of the Enlightenment. Anticlericalism is not dead, but many who recognize transcendent obligations and are resolved to preserve a society built on the belief that such obligations are real have now discovered that they stand much closer to believers in the Bible and the Christian revelation than to the nihilist regimes based on radical disbelief. History will perhaps record the Italian elections of April 1948 as the turning point. The defeat inflicted there on the Communists by a large Catholic majority was hailed with immense relief by defenders of liberty throughout the world, many of whom had been brought up under Voltaire’s motto “Écrasez l’infâme!” and had in earlier days voiced all their hopes in that battle cry.

The instability of modern liberalism stands in curious contrast to the peacefully continued existence of intellectual freedom through a thousand years of antiquity. Why did the contradiction between liberty and skepticism never plunge the ancient world into a totalitarian revolution like that of the twentieth century?

We may answer that such a crisis did develop at least once, when a number of brilliant young men, whom Socrates had introduced to the pursuit of unfettered inquiry, blossomed out as leaders of the Thirty Tyrants. Men like Charmides and Critias were nihilists, consciously adopting a political philosophy of smash-and-grab which they derived from their Socratic education; and, as a reaction to this, Socrates was impeached and executed.

Yet whatever difficulties of this sort developed in the ancient world, they were never so fierce and far-reaching as the revolutions of the twentieth century. What was lacking in antiquity was the prophetic passion of Christian messianism. The ever-unquenched hunger and thirst after righteousness which our civilization carries in its blood as a heritage of Christianity does not allow us to settle down in the Stoic manner of antiquity. Modern thought is a mixture of Christian beliefs and Greek doubts. Christian beliefs and Greek doubts are logically incompatible; and if the conflict between the two has kept Western thought alive and creative beyond precedent, it has also made it unstable. Modern totalitarianism is a consummation of the conflict between religion and skepticism. It solves the conflict by embodying our heritage of moral passions in a framework of modern materialistic purposes. The conditions for such an outcome were not present in antiquity, when Christianity had not yet set alight new and vast moral hopes in the heart of mankind.

[i] B. F. Skinner, Beyond Freedom and Dignity (New York: Alfred A. Knopf, Inc., 1971).

[ii] Baron d’Holbach, The System of Nature, trans. H. D. Robinson (Boston: J. P. Mendum, 1853), pp. 153, ix–x.

[iii] W. E. H. Lecky, History of the Rise and Influence of the Spirit of Rationalism in Europe, 2 vols. (New York: Appleton, 1878), 1:128.

[iv] Hermann Rauschning, The Revolution of Nihilism, trans. Ernest W. Dickes (New York: Longmans, Green, 1939).

[v] V. I. Lenin, “Where to Begin?” (1901) and “What Is to Be Done?” (1902) in Collected Works, ed. Victor Jerome, trans. Joe Fineberg and George Hanna (Moscow: Foreign Language Publishing House, 1961), 5:23–24, 473–84, and 514–18.

[vi] Konrad Heiden, Der Fuehrer, trans. Ralph Manheim (Boston: Houghton Mifflin, 1944), pp. 145–50.

Hayek’s Reimagined Economics and What It Lacked


F. A. Hayek was, to use Peter Boettke’s phrase, a “lifelong learner,” changing his position on a variety of topics throughout his long career as he tested the boundaries of his framework and sought to iron out its contradictions. The Hayek I wish to present here is therefore just one Hayek among the several one can find. The crucial elements of this Hayek’s framework were expectations, embeddedness, and innovation. Hayek’s framework when he wrote in this vein can improve our understanding of how interpretations are generated and the impact they have on our social systems. The union of Hayekian analysis and the concept of interpretation allows us to overcome several conceptual tensions which Hayek himself could not, and to evaluate social orders from a discursive rather than positivist or utilitarian perspective.


Hayek began by focusing on one of the central pillars of modern economic analysis: the concept of equilibrium. His suggestion in “Economics and Knowledge” is that equilibrium is a state of affairs in which agents pursue mutually accommodating projects and have their expectations met.

It appears that the concept of equilibrium merely means that the foresight of the different members of the society is in a special sense correct. It must be correct in the sense that every person’s plan is based on the expectation of just those actions of other people which those other people intend to perform and that all these plans are based on the expectation of the same set of external facts, so that under certain conditions nobody will have any reason to change his plans. Correct foresight is then not, as it has sometimes been understood, a precondition which must exist in order that equilibrium may be arrived at. It is rather the defining characteristic of a state of equilibrium. Nor need foresight for this purpose be perfect in the sense that it need extend into the indefinite future or that everybody must foresee everything correctly. We should rather say that equilibrium will last so long as the anticipations prove correct and that they need to be correct only on those points which are relevant for the decisions of the individuals.

Further down he fleshed this out:

Consider the preparations which will be going on at any moment for the production of houses. Brickmakers, plumbers, and others will all be producing materials which in each case will correspond to a certain quantity of houses for which just this quantity of the particular material will be required. Similarly we may conceive of prospective buyers as accumulating savings which will enable them at certain dates to buy a certain number of houses. If all these activities represent preparations for the production (and acquisition) of the same amount of houses, we can say that there is equilibrium between them in the sense that all the people engaged in them may find that they can carry out their plans. This need not be so, because other circumstances which are not part of their plan of action may turn out to be different from what they expected. Part of the materials may be destroyed by an accident, weather conditions may make building impossible, or an invention may alter the proportions in which the different factors are wanted. This is what we call a change in the (external) data, which disturbs the equilibrium which has existed. But if the different plans were from the beginning incompatible, it is inevitable, whatever happens, that somebody’s plans will be upset and have to be altered and that in consequence the whole complex of actions over the period will not show those characteristics which apply if all the actions of each individual can be understood as part of a single individual plan, which he has made at the beginning.

In short, much of our life is spent in relations of more or less mutual accommodation which can be disturbed when our expectations are not met.


Hayek made a radical turn towards embeddedness in his most famous and influential paper, “The Use of Knowledge in Society.” In it, he introduced his “man on the spot” as the agent seeing the world from a ground-level view. Hayek asked what knowledge would be useful to the man on the spot for achieving his ends.

There is hardly anything that happens anywhere in the world that might not have an effect on the decision he ought to make. But he need not know of these events as such, nor of all their effects. It does not matter for him why at the particular moment more screws of one size than of another are wanted, why paper bags are more readily available than canvas bags, or why skilled labor, or particular machine tools, have for the moment become more difficult to obtain. All that is significant for him is how much more or less difficult to procure they have become compared with other things with which he is also concerned, or how much more or less urgently wanted are the alternative things he produces or uses. It is always a question of the relative importance [emphasis added] of the particular things with which he is concerned, and the causes which alter their relative importance are of no interest to him beyond the effect on those concrete things of his own environment.

Hayek invited his colleagues to reimagine the price system as something capable of drawing on the specific circumstantial knowledge dispersed across many embedded agents, while providing those agents with information on “how much more or less difficult” achieving their ends had become. A price does not answer the question “what is going on in the world that might impact my plans?” Instead, it answers the question “how much must I sacrifice to get this car, this factory input, this sandwich?” By and large, his contemporaries hadn’t even concerned themselves with those questions in the first place, much less interpreted the nature of the price system with that context in mind.


The next aspect of this Hayek which we will examine here concerns the nature of innovation and its role in social change. In The Constitution of Liberty, Hayek discussed what he called “the creative powers of a free civilization.” This process greatly resembles what Everett Rogers would call Diffusion of Innovations in his book by that name, which came out two years later. Modern readers will recognize the influence of Rogers’s book in familiar concepts such as the “early adopter.”

In contrast to the diffusion of innovations literature, which focused on the top-down introduction of innovations into a social system, Hayek offers a vision of trial and error on a vast scale. Specific individuals and subcommunities experiment with new products, practices, fashions, even lifestyles. These innovations either end where they began or diffuse beyond their initial trial group. More on this below, but for now it suffices to observe that the projects that agents pursue are no more fixed than their expectations. They are subject to innovation by the agents themselves, or the adoption of innovations developed by other agents that have diffused through the social systems.

These processes of experimentation include the well-known instances of wealthy consumers subsidizing the early iterations of innovations that have yet to become cost-effective:

At any stage of this process there will always be many things we already know how to produce but which are still too expensive to provide for more than a few. And at an early stage they can be made only through an outlay of resources equal to many times the share of total income that, with an approximately equal distribution, would go to the few who could benefit from them. At first, a new good is commonly ‘the caprice of the chosen few before it becomes a public need and forms part of the necessities of life. For the luxuries of today are the necessities of tomorrow.’ Furthermore, the new things will often become available to the greater part of the people only because for some time they have been the luxuries of the few.[1]

But social experimentation of any sort falls into Hayek’s framework. He gives the example of the wealthy communities who developed tennis for fun. Their small-scale experimentation was adopted, and transformed in the process of adoption, until it became a vocation for athletes and a source of entertainment for the masses. He also discusses beliefs and morality:

We cannot attempt to recount here the long story of all good causes which came to be recognized only after lonely pioneers had devoted their lives and fortunes to arousing the public conscience, of their long campaigns until at last they gained support for the abolition of slavery, for penal and prison reform, for the prevention of cruelty to children or to animals, or for a more humane treatment of the insane. All these were for a long time the hopes of only a few idealists who strove to change the opinion of the overwhelming majority concerning certain accepted practices.[2]

He adds that “The argument for democracy presupposes that any minority opinion may become a majority one.”[3] The emphasis in these examples lends itself to an interpretation of this trial-and-error process as the engine of progress, but I would argue that it is just as crucially the engine of pluralism. Tastes, lifestyles, and values are generated, tried within smaller or larger communities, and then disappear more often than not. Some subset of these diffuse widely and become mainstream.

A living system

The pieces we have assembled so far are as follows. First, we all exist in relations of more or less mutual accommodation that can be disturbed when our expectations are not met. Second, our field of action and therefore the knowledge which informs our expectations are circumscribed to our corner of the system, rather than to expectations about the system as a whole, about which we may or may not have much useful knowledge at all. Third, most people are engaged, to a greater or lesser degree, in trial and error to attempt to improve some facet of their lives. Putting them together, it is clear that the innovations adopted either by the few or the many will fall outside of people’s expectations, almost by definition. Rogers, for example, defined an innovation as “an idea, practice, or object that is perceived as new by an individual or another unit of adoption.”[4] In less careful language, we might say that something is innovative to the degree that we did not expect it.

Agents will adjust their expectations to the ways these innovations change their little corner of society. These adjustments can add up to change the overall character of the social systems in which those agents are embedded. This change in character, in turn, will alter the specific circumstances of each agent, through a change in prices perhaps, or job opportunities, or simply in available goods or services for purchase. And agents will therefore adjust their expectations to these changes, changing the character of the system once more. As an example, consider the adoption of the smartphone. When it was in the hands of a few people, it was largely a novelty. But as enough people were exposed to it and found it compelling, and adoption spread more widely, it became profitable to develop software on top of it. This in turn made the devices more attractive, fueling wider adoption. Wider adoption, in turn, made them more attractive still to software developers. At this point we have a complex social and business ecosystem that has grown around smartphones specifically. And the story does not, of course, end there, any more than the story of the role of the personal computer ended in the 1990s.

Hayek therefore begins by analyzing the nature of equilibrium only to bring it to the very brink of irrelevance. The only comfort he offers his colleagues is that there may exist a tendency towards equilibrium. “It is only by this assertion that such a tendency exists that economics ceases to be an exercise in pure logic”[5]. Instead of comparative statics, we are left with complex, adaptive social systems, existing in relations of constant multidirectional adjustment with the actions, expectations, and innovations of embedded agents.

The role of interpretation in all of this is strangely unstated by Hayek, but clearly implied. Prices are interpreted by agents as meaning that something is more or less difficult to get than it was, innovations are innovations only because they are interpreted as something new or unexpected, equilibrium exists because we’ve interpreted the situation as having stayed within our expectations. As we shall see, the way we understand our situation, our goals and our options, and our relations to one another, is also a matter of interpretation – and Hayek’s process of experimentation generates innovation in interpretation as well.

Articulation as Interpretation

Hayek begins “Rules, Perception and Intelligibility” as follows:

The most striking instance of the phenomenon from which we shall start is the ability of small children to use language in accordance with the rules of grammar and idiom of which they are wholly unaware. “Perhaps there is,” Edward Sapir wrote thirty-five years ago, “a far-reaching moral in the fact that even a child may speak the most difficult language with idiomatic ease but that it takes an unusually analytical type of mind to define the mere elements of that incredibly subtle linguistic mechanism which is but a plaything in the child’s unconscious.”[6]

The incredible difficulty underlined in the Sapir quote to “define the mere elements” of a mechanism of social commerce that is “incredibly subtle” was central to Hayek’s later work. This is the struggle to articulate the basis of articulacy, to develop a language to talk about language – or any similarly complex social system that we need not consciously understand in order to participate in.

Hayek pursued this question most systematically in his heroic studies in Law, Legislation, and Liberty, the first volume in particular. I am not going to attempt to reconstruct his answer here; it involves a complex cocktail of natural law, neo-Kantianism, and cultural evolution which I do not believe bore the fruit Hayek desired of it. Moreover, the specter of consequentialism looms over the role he casts for cultural evolution there and throughout his work. Glen Whitman makes a compelling case that Hayek explicitly denounced such consequentialism, but he does not appear to me to have been consistent on this; the following passage from The Constitution of Liberty exemplifies the difficulties he struggled with, with emphasis added by me:

It is, of course, a mistake to believe that we can draw conclusions about what our values ought to be simply because we realize they are a product of evolution. But we cannot reasonably doubt that these values are created and altered by the same evolutionary forces that have produced our intelligence. All that we can know is that the ultimate decision about what is good or bad will be made not by individual human wisdom but by the decline of the groups that have adhered to the “wrong” beliefs.[7]

I do not think Hayek managed to resolve this tension in his framework, and I certainly reject the idea that "the ultimate decision" about right and wrong is determined by the decline or success of particular groups. But I believe Hayek was right to focus on the problem of articulacy. And the pieces from his works which we put together above, when complemented by certain aspects of late 20th century and contemporary philosophy, provide us with the resources to address this problem more fruitfully, without falling into either Hayek's own struggles, or the positivism and utilitarianism of mainstream economic theory.

In thinkers such as Maurice Merleau-Ponty and Charles Taylor, we find the notion of a kind of physical articulacy; ways of being and of conducting oneself which carry coded values and understandings. We can then name aspects of these ways of being, and develop ways of talking about them. In attempting to articulate these physical enactments in word, however, we necessarily change them, just as a poem is necessarily changed when translated into another language. To translate a poem requires us to interpret it; differences in translation reflect differences in the translators’ interpretations. Taylor, with Alasdair MacIntyre and Richard Rorty, offers a vision of languages which provide greater or lesser resources for articulation. But articulation of what?

I think Deirdre McCloskey, Daniel Klein, and Russ Roberts, all spiritual successors to the Hayek outlined in this piece, have each attempted to answer this question in their own way. Klein invites his readers to imagine a being capable of looking down upon the Hayekian order from above, and to judge it according to how pleasing she finds it. Roberts’s defense of that order seems to rest primarily on the notion that it is not merely efficient, but sublime.

McCloskey, Klein, and Roberts are critics of, and struggle against, positivist and utilitarian tendencies that are latent in professional economics. Those latent tendencies can be seen as part of a single project to get outside the messy contingencies of persuasion within the context of pluralism, a project aimed at offering some standard that will stand above the fray. Positivists in economics believe that the only theories which have scientific content are those with strict, testable predictions. Utilitarianism and consequentialism gesture at an underspecified “happiness” to maximize or “good state of affairs” to aim at. Both traditions have fatal flaws, but the most important flaw they share is the notion of a standard that can transcend the uncertain, frustrating, groundless process of negotiating interpretations and the standards that emerge from them, which we are all a part of.

Rather than entering into the judgments of a metaphorical being or a discussion of the nature of the sublime, I suggest we think of the struggle for articulacy as the struggle to articulate the deepest possible understandings of ourselves and our shared situation; what might be called wisdom. We seek to articulate it with regimented theoretical languages, as economists do and philosophers have done for millennia. But we also seek to articulate it in novels, film, poetry, music, and painting. Hayek’s grand processes of trial and error do not generate new games (such as tennis) or forms of art (such as the novel) simply because they are entertaining, but because they provide fresh resources for articulating what we previously could not. In other words, the processes of trial-and-error generate the resources for interpreting what, exactly, counts as an “error,” what is unwise.

All faiths, philosophies, and ways of life which offer a vision of the good and the wise were generated through the pluralistic processes of experimentation described by Hayek. How you evaluate each of them, as well as how you evaluate the pluralistic processes themselves, depends upon your interpretative framework, which is itself in play. There is a circularity here, but it is more of a spiral than an infinite regress: you play different readings off against one another to deepen your understanding, revising your interpretations to varying degrees along the way. As Taylor put it in “Interpretation and the Sciences of Man,”

Our conviction that the account makes sense is contingent on our reading of action and situation. But these readings cannot be explained or justified except by reference to other such readings, and their relation to the whole. If an interlocutor does not understand this kind of reading, or will not accept it as valid, there is nowhere else the argument can go. Ultimately, a good explanation is one which makes sense of the behaviour; but then to appreciate a good explanation, one has to agree on what makes good sense; what makes good sense is a function of one’s readings; and these in turn are based on the kind of sense one understands.[8]

Positivists and others with similar modernist sensibilities think that true knowledge is impossible unless we find a way to break out of this spiral, and that Taylor must therefore doom us to ignorance if we listen to him. But this pessimism is unwarranted; we have accomplished tremendous things within the confines of this kind of messy argumentation. It’s just that there’s no non-circular way to point that out. I am saying that if you believe science has accomplished tremendous things, and if it is true that we can only argue in the manner Taylor described, then we must have accomplished tremendous things by arguing in the manner Taylor described. As pure formal logic, this isn’t very persuasive. As a move in a game in which the assumptions of either side are sources of reasons to be appealed to, even when the goal is to get some other assumptions revised, it can make all the difference. And if science can be thought of as a community engaged in similar such games, then the argument holds.

Social scientists are merely participants in these pluralist negotiation games. The very idea of “an economy” is part of the language of economics, which orients our understanding of our relations to one another—as Hayek was well aware. It is a language which continues to be developed by agents, known as economists, who occupy a ground-level view of their particular corner of the discipline known as economics.

Hayek often understood himself to be struggling to develop a language for phenomena that existing languages lacked the resources to do justice to. In his analysis of equilibrium, he articulated the sense of something constantly in motion and of which economists nevertheless have captured a partial view in their static models. In his analysis of embeddedness, he articulated the way agents with radically restricted fields of view could nevertheless accomplish a great deal of what economists, with unrealistic assumptions about the state of their knowledge, expected them to be able to do. In his analysis of innovation, he articulated the way we all have a hand in the constant processes of transformation of the systems of which we are a part. To his articulations, I add: interpretation matters every step of the way—in setting our goals and expectations, in reading the details of our particular circumstances to come to a specific decision, and in revising it all in light of a surprising innovation.

[1] Hayek, 2011. 97-98.

[2] Ibid, 192.

[3] Ibid, 175.

[4] Rogers, 2005. Kindle location 663-668.

[5] Hayek, 2014.

[6] Hayek, 1980. 43.

[7] Hayek, 2011. 87.

[8] Taylor, 1985. 24.

Works cited:

Hayek, Friedrich. Studies in Philosophy, Politics and Economics. University of Chicago Press, 1980.

———. The Constitution of Liberty. Routledge, 2011.

———. “Economics and Knowledge | Friedrich A. Hayek.” Mises Institute, 18 Aug. 2014,

———. “‘The Use of Knowledge in Society.’” Supply and Demand, Markets and Prices, College Economics Topics | Library of Economics and Liberty,

Rogers, Everett M. Diffusion of Innovations. Free Press, 2005. Kindle Edition.

Taylor, Charles. Philosophical Papers. Cambridge University Press, 1985.

Featured image is Shipping Routes, by Grolltech.

The Zen of Chaos


He screamed in our faces, and the crowd exploded. Greg Puciato stalked across the stage, his low grunts and shrieking howls resounding. He threw his body at the edge again and again, while Ben Weinman, thrashing the air and shredding notes, hurled himself backwards off the amplifiers. Meanwhile, the bass and drums beat down and slammed in off-kilter jolts. Bodies crushed around me and people leapt from the stage, over and over. The speakers roared and the sound ripped at my ears. This was my introduction to the live show of The Dillinger Escape Plan, who stopped in Israel on their farewell tour. I'd long been a big fan of their music, but even as a hardened devotee of all things weird and heavy, this was a new, mind-blasting experience. Dillinger, infamously known as "the most dangerous band on the planet", is defined by always taking their live experience to the maximum. They make music that doesn't just sound difficult and abrasive, but through their performances, they take pride in making discomfort manifest for themselves and the audience. I had a great time.

It was walking out, my ears still ringing from the feedback, that I started to more deeply grasp the bewilderment and occasional physical steps backward that have become common reactions of people hearing about some of my favourite music.

Extreme or experimental music defies conventions, through breaking taboos or departing from more accessible forms. From death metal’s embrace of chromatic scales and highly technical instrumentation, to the odd time signatures, squealing lines and abrupt breaks in free jazz, the complex (dis)chords of underground music don’t make things easy for you. As Keith Kahn-Harris notes in his book on the topic, extreme metal reaches areas that begin to depart from what we would traditionally call music at all. The lyrics emphasize the dissolution and reconstruction of both body and mind in intense but varied ways, ranging from horror movie violence to heavily existentialist themes of bleak yearning and searching.

In general, heavy and experimental music often aims towards sonic violence, or at minimum, real discomfort. It hits you with harsh, twisted sounds, and incorporates a lot of background noise and feedback. This has more value than you might first think. The power of heavy, dissonant music is to tear things up. To pull apart the sense of who we are, and what we might be as human beings. In the end, it's both the technical precision and the sheer forceful power which allow listeners like me to transcend the prison of our expectations and judgements, to float upward and be lifted high, out and beyond our heads. When we get back to earth, we are filled with only ourselves.

Although leaning heavily on specific guitar riffs (or in the case of jazz, saxophones and trumpets) and drum beats for orientation, the sheer attack of the music aims to displace you and rip up your feeling of groundedness.

Consider this Dillinger classic, “43% Burnt”:

The riffs in '43%' are jagged and cut off sharply. The guitars use a janky, scraping tone and repeat in very fast, variegated patterns. The drums hit heavily and move in very swift repetitions, almost resembling machine guns. The vocals are piercing and fierce. The time signature shifts constantly. Overall, the attack and disorientation are almost overwhelming, and rarely let up for more than a few seconds.

Or listen to “Bonehead” by the experimental jazz group Naked City, led by the legendary composer John Zorn:

Here, the saxophone screams constantly at a very high pitch, sounding almost like an actual person. The drums come in speedy blasts, repeating in staccato bursts. The bass and guitars thunder underneath.

As I mentioned earlier, it is precisely the deconstructing and bewildering elements of heavy and experimental music that give it power. In life, we search constantly for a sense of place, of self-definition. It follows us around in public and often even when we are alone. Every time we step outside our doors we are confronted with the question of what it means to be us. Significantly, the person we appear to be is never truly ourselves, unadulterated. Rather, we invent, or create, a person for other people to interact with. The famed sociologist Erving Goffman put it this way:

“The self... is not an organic thing that has a specific location, whose fundamental fate is to be born, to mature, to die; it is a dramatic effect arising diffusely from a scene that is presented.”

For Goffman, the self is a performance, a thing we put on to interact and communicate with others. It is an artificial creation for the purposes of signalling and communication. However, not only is it a character we invent for others, it’s someone that we make up for ourselves, as well. We give ourselves an image of who we are, a person that exists in our minds for us to refer to and say, “This is me”.

Or, as Goffman writes:

“Our sense of being a person can come from being drawn into a wide social unit; our sense of selfhood can arise through the little ways in which we resist the pull. Our status is backed by the solid buildings of the world, while our sense of personal identity often resides in the cracks.”

To a large degree, the uniqueness of these musical forms is in the conscious rejection of comfort and stability, boundaries and definitions. It says that to some degree, the quest for identity, while important and necessary, is one that will never be fully realized, and will always, in so doing, limit how we experience the world. Dissonant and difficult music is a sound that is seeking (if never quite reaching) a regained sense of the untamed and the unbroken, away from formulaic and standard imagery.

To fully grasp this, we need to go beyond the mere content of the songs on records. For complete absorption, we must get off the sidelines and into the pit.

Barbaric Yawps, Unselfing, and Finding Who We Are

“These floods of you are unforgiving/
Pushing past me spilling through the banks/
And I fall/
Faster than light and faster than time/
That’s how memory works/
At least in the dark where I’m searching for meaning/
When I’m just searching for something/
I want out.”

Jane Doe, Converge, Jane Doe

Moshing, crowd surfing, stage diving, and the infamous “wall of death” are key parts of most ‘heavy music’ shows. Moshing, often stereotyped as just violent collision, is better understood as a complex, extremely kinetic dance. It forces participants out from themselves. People push into each other, together and apart, in visceral and often abrupt ways within an enclosed area. It creates a sensation of freedom from being wholly immersed in our own space. Like the noodly meanderings of the artiest jazzers, moshing keeps ripping things up and starting again.

However, unlike in electronic dance parties or techno raves, moshing isn't quite about the total loss of agency. While in the same area as EDM in the search for transcendence, moshing is a step sideways from the totality of the 'losing your mind' ethos. In moshing, there is a unique concentration on building and harnessing mental and physical energy, and a focus on deep emotional engagement. Alissa White-Gluz, vocalist for the band Arch Enemy, has compared it to yoga, with the flow and intensity of movement almost reaching a deep meditative state. Thus, moshing is far more like an extreme sport or a complex and engaging exercise, like Tai Chi, rather than the travel companion of a drug trip, dissolving all our mental furniture.

A key moment in any mosh is the transition to the ‘circle pit’ as people run together in a messy oval. We flail our arms and legs out, arms slapping against the air, and here and there, smacking a chest or neck. Heads windmill and hair flies. The ground shakes and people bounce into and off each other, in ways that take little notice of sexuality, race, gender identity, or anything else. It feels like an updated tribal ritual, pressing and pulling against one another in sacred patterns. We are the pit, and the pit is us.

This feeling has a strange dualistic quality. As things intensify, I remain me, but am also somehow at one with other things and people at the exact same time. This produces an additional and valuable element: the feeling of engaged anonymity, of involvement with others without having to create a complete persona. In becoming each other, we lose our carefully shaped self-images.

This kind of hyper-focused extremity has the net effect of creating a kind of liberation. The overwhelming sensations of often highly complex sound, combined with the directness of the pit, break up the everyday feeling of inhabiting ourselves that we usually experience.

One of my favourite ways of talking about this can be found in Iris Murdoch’s discussion of what she called “techniques of unselfing”, in her work The Sovereignty of the Good:

“We are anxiety-ridden animals. Our minds are continually active, fabricating an anxious, usually self-preoccupied, often falsifying veil which partially conceals our world. Our states of consciousness differ in quality, our fantasies and reveries are not trivial and unimportant, they are profoundly connected with our energies and our ability to choose and act. And if quality of consciousness matters, then anything which alters our consciousness in the direction of unselfishness, objectivity and realism is to be connected with virtue.

The most obvious thing in our surroundings which is an occasion for ‘unselfing’ is what is popularly called beauty…I am looking out of my window in an anxious and resentful state of mind, oblivious of my surroundings, brooding perhaps on some damage done to my prestige. Then suddenly I observe a hovering kestrel.

In a moment everything is altered. The brooding self with its hurt vanity has disappeared. There is nothing now but kestrel. And when I return to thinking of the other matter it seems less important. And of course, this is something which we may do deliberately: give attention to nature in order to clear our minds of selfish care.”

The pit is a physical mirror for the music, in which the extreme and the overwhelming blast and tear you apart. In that moment, there is a sense of no more me, no fully constituted self, but merely fragments that perhaps, you could call a person. And yet, as I become unmoored, I am swept along with the rush of the music and the push of bodies back into myself. I am rebuilt, remade and reconceived. I discover a sense of solidity and permanence, made with coarse cement and mortar, jagged metal and stone. As I am remade, the artifice of the self and the project of being A Person is pushed aside. For a brief time, I am in some sense Truly Me.

It shouldn't be surprising, then, that studies find that heavy music, contrary to stereotypes of angry and misled teens, plays an important role in mental health. Metal and heavy music are excellent at instilling calm, feelings of catharsis, and positive emotions.

For me, the painter Dan Witz expresses this experience in the most direct form, through his Rembrandt-esque representations of New York Hardcore, capturing, in depth and detail, the rough glamour of the moment. As Witz describes, the hardcore show is (borrowing from Walt Whitman) a kind of 'barbaric yawp', a shouting protest, petitioning the empty sky above us. It yells out at the blackness of the void, and through it, becomes full. So, what is dissonance for? It is simply a reminder that sometimes, through embracing the confusion, the chaos and the madness, we might finally find peace.

[Image: a Dan Witz mosh painting]

