Librarian of the Internet. Collector of Stories.

Sigegard Ainsworth, Chapter 9

An excerpt: “Liars and thieves,” Sige said. “And now murderers. That world is at once very true and also pure fiction. I wonder if there’s anything in that aesthetic which might be considered true: murderers born of lies and thievery. Would it be that such careful thieves created a logic within their lies which is […]


Economic Cooperation in Furtherance of Peace: Lessons from the Postwar Era


During and after World War II the architects of the economic institutions of the liberal international order articulated a compelling rationale for the structure and rules of the International Monetary Fund (IMF) and the General Agreement on Tariffs and Trade (GATT). In a recent article, based on an exhaustive review of contemporaneous speeches, essays, and Congressional testimony, I identified a common set of principles that guided efforts to establish these institutions. These principles represented a pragmatic, battle-tested evolution of a more abstract vision associated with several major Enlightenment thinkers, including the pacifying effects of international commerce and the prospects for a rules-based federation of nations. In brief, the principles are as follows: multilateral coordination, liberalization, mutual benefit with mutual responsibility, and linking economy to security. Together these concepts motivated and underpinned negotiations over how to govern international commerce in furtherance of peace in the postwar world.

That article was part of an effort to develop–and share–a richer understanding of the historical motivation for economic cooperation in the postwar era that might otherwise be lost to distance. That said, I was not attempting to defend the behavior of the IMF or GATT (or its more recent incarnation as the World Trade Organization) or other institutions of the liberal international order in the decades since their creation. What I would like to do instead is to situate these institutions and the principles behind them within the liberal tradition and to review what those principles were and how they arose from their historical moment. My hope is that if we are better positioned to understand their historical charge, it might provide us with better tools for assessing just what parts of that legacy are worth preserving or reforming. Was there something that the creators of the IMF and GATT learned about why international cooperation was essential, and what could help it succeed, that we have since forgotten? Would we evaluate these institutions differently today if we could recover that distant rationale?

An Enlightenment lineage

The importance of international commerce and its potential for nefarious or virtuous influence on relations among states has a long history. In the second half of the 18th century several important concepts and theories were articulated that have particular relevance for thinking about international economic cooperation after World War II. A foundational argument expressed by a range of Enlightenment thinkers was that foreign countries, foreign businesses and foreign individuals were potential commercial partners and suppliers, not merely customers. Against the mercantilism of European monarchies and commercial incumbents, these were arguments for mutual gain and cooperation, most prominently by Adam Smith in the Wealth of Nations. As Smith simply put it, “[i]f a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them.”

But partnership with foreigners also had important consequences unrelated to private economic gain. Like cooperation among co-nationalists, cooperation across national borders would encourage peace between nations through interdependence, improved understanding and perhaps even a recognition of common humanity. The Baron de Montesquieu, for example, argued that, “[c]ommerce is a cure for the most destructive prejudices.” Thomas Paine similarly declared that commerce “is a pacific system, operating to cordialise mankind, by rendering nations, as well as individuals, useful to each other.” Immanuel Kant agreed: “The spirit of commerce, which is incompatible with war, sooner or later gains the upper hand in every state.” Moreover, Kant viewed a global, pacific federation of republican states–creating rules for international conduct among states in order to secure freedom for those states–to be both a moral duty and a practical inevitability.

Future diplomats and international negotiators would feel this same duty but did not take peaceful outcomes for granted. Unlike Kant’s assertions about conditions that would “guarantee” peace, Smith recognized that trade entailed no such guarantees: “Commerce, which ought naturally to be, among nations, as among individuals, a bond of union and friendship, has become the most fertile source of discord and animosity.” Only diligent attention to practical details, guided by a set of essential principles, and supported by compelling argument, would have a chance of promoting peaceful commercial and political relations. 

The architects of the liberal international order were well aware of these divergent possibilities and keen to design institutions that could tilt outcomes toward peaceful commerce and away from the myopic pursuit of national advantage through trade. In describing the State Department’s approach to international economic negotiations in 1944, Dean Acheson (Assistant Secretary of State, 1941–1945, Under Secretary of State, 1945–1947, Secretary of State, 1949–1953) succinctly captured US interests, “[o]ur own self-interest dictates that we should collaborate with other countries in this endeavor.”

Interwar economic anarchy

Foremost among the motivations for cooperation, second only to the scourge of war itself, was firsthand experience with the collapse of national economic activity and international economic cooperation during the interwar years. Two key events dominated these years: the Great Depression and escalating trade wars. The first decimated the major world economies, leading to material hardship for huge swaths of their populations and to economic and political desperation, and prompting increasingly inward-looking politics. The second exacerbated–though was not a material factor in causing–economic stagnation. In the United States in particular, a boom in farm lending, a stock market bubble, and egregious monetary policy mistakes paved the way for a wave of bank failures, deflation, and the Great Depression.

The Hawley–Smoot Tariff Act, passed in June 1930, led to large increases in tariff rates across a notable proportion of US imports. This reduced the exports of US trade partners and led to retaliatory trade policy measures abroad in relatively short order. Economic nationalism increased the appeal of protectionism while retaliation further reduced commercial interdependence and reinforced perceptions of international economic policy as proxy warfare. This led to a vicious cycle of protectionist tariffs, draconian currency practices and retaliatory measures. In sum, depression and trade wars had dealt a massive blow to every major national economy, reinforcing economic and political nationalism and creating fertile conditions for fascism and war.

The architects of the postwar order saw these events as directly related to the bloodshed of World War II. New and corrosive political movements arose out of this economic hardship and fanned the flames of ethno-nationalism. A new liberal order would need to recognize their root causes and put in place a framework that could facilitate international cooperation while constructively addressing critical domestic needs. There would need to be a balance between international anarchy and international uniformity. This balance would be guided by a set of principles that would not pre-ordain a specific structure but would effectively bound the set of institutional structures that would have any likelihood of addressing these fundamental tensions. 

Principles from the postwar negotiations

In developing a better understanding of how American officials described their efforts to resolve these tensions I found two key insights as surprising as they were impressive. First, negotiators recognized that full employment policies would be critical to ensuring support for continued global engagement and that economic institutions would need to provide member nations with the flexibility to pursue full employment. William Clayton (Assistant Secretary of State, 1944–1946, Under Secretary of State, 1946–1947), for example, wrote that, “[t]he attainment of approximately full employment by the major industrial and trading nations, and its maintenance on a reasonably assured basis, are essential to the expansion of international trade on which the full prosperity of these and other nations depends.” 

Second, American officials did not offer simplistic and one-sided mercantilist arguments for international trade. Clayton, for example, explained that, “a nation profits because it secures better or cheaper goods abroad than at home.” President Roosevelt offered a nuanced argument for the benefits of imports in a posthumous letter to Congress in support of the Reciprocal Trade Agreements Act (RTAA), which would provide the legislative authority for the negotiation and ratification of GATT, “[i]t is also important to remember that imports mean much more than goods for ultimate consumers. They mean jobs and income at every stage of the processing and distribution channels through which the imports flow to the consumer.” Officials offered these and a range of other arguments in order to persuade their peers in Congress and in foreign capitals. I expand on and offer some supporting context below for each of the principles that guided these efforts.

Multilateral coordination. American officials concluded that international economic policies can and should be changed to meet changing circumstances and domestic needs, but those policies and changes should be coordinated with multilateral partners.

American officials firmly believed that alternatives to multilateral policy coordination were inferior, including unilateral policy changes, bilateral agreements, and regional economic blocs. The dramatic rise in protectionism and competitive devaluation had been initiated by a series of unilateral maneuvers (including unilateral actions by the United States), greatly tarnishing unilateralism in the minds of American officials. Furthermore, despite earlier success with bilateral trade deals over the previous decade, American officials now believed that this approach was too timid, too piecemeal and too geographically narrow to be successful in the post-war world. Clayton, for example, was similarly wary of regionalism, “[t]here is the way of economic blocs, in which a group of nations which cannot solve their problems by letting the rest of the world in, try to solve them by shutting the rest of the world out … [It] tend[s] in the long run to contract and restrict rather than expand international trade … [and is] contrary to our deepest convictions about the kind of economic order which is most conducive to the preservation of peace.”

Henry Morgenthau (Secretary of the Treasury, 1934–1945), in an essay in Foreign Affairs, also cautioned against creating an exclusive economic club, “there would seem to be considerable danger—political as well as economic—in setting up a world divided into two blocs. Such a division of the world would not only deprive us of the general advantages of multilateral trade but would inevitably lead to conflict between the two groups. The fact is that the problems considered at Bretton Woods are international problems, common to all countries, that can be dealt with only through broad international cooperation.”

Liberalization. American officials believed that policy should reduce domestic and international barriers to trade and that exchange rate policy should make all currencies freely tradeable (convertible).

A transparent and predictable set of rules, engendered by multilateral coordination, would best be put to use reducing the severe interwar barriers to international trade in goods and, to facilitate payments and investment, in currencies. While the architects of economic reform after World War II were not laissez-faire capitalists, they saw both material and foreign policy gains from relatively free exchange.

In arguing for the RTAA in 1943, for example, Cordell Hull (Secretary of State, 1933–1944) shared his conclusions about the need for liberalization, “[i]t was clear to us that satisfactory economic recovery was impossible without a restoration and expansion of healthy foreign trade … through a reduction, here and abroad, of unreasonable and excessive trade barriers.” Similarly, in 1944, Acheson described the goals of multilateral trade negotiations simply, “[w]e seek … with as many nations as possible … the effective and substantial reduction of all kinds of barriers to trade.”

More open trade in goods was only one form of liberalization – free trade in currencies would also be necessary to restore global commerce. Acheson made the case to Congress in 1944 for the crucial role of currency convertibility in international commerce, “[f]oreign investment and financial transactions … require … the assurance that interest and principal can be converted into the lender’s own currency … Exporters are not inclined to export unless there is reasonable assurance that they will get paid in money … which can be readily transferred into their own currency.” 

Economy = Security. The link between the economy and security had two important components for American officials: (1) robust economies are essential to national and international security, and security is essential to economic stability and growth; and (2) international cooperation in the economic and financial arena was essential to, and must proceed in parallel with, international cooperation in the political and security arena.

The latter arena was associated in particular with the United Nations. As Morgenthau put it at length in Foreign Affairs, “[i]nternational monetary and financial cooperation is indispensable for the maintenance of economic stability; and economic stability, in turn, is indispensable to the maintenance of political stability. Therefore, a program for international economic cooperation of which Bretton Woods is the first step must accompany the program for political and military security toward which the United Nations are moving. Bretton Woods is the model in the economic sphere of what Dumbarton Oaks is in the political. They reinforce and supplement each other. Political and economic security from aggression are indivisible, and a sound program for peace must achieve both.”

In 1943 Hull described the relationship, and virtuous cycle, between peace and economic security, “it was also clear from the beginning that a revival of world trade was an essential element in the maintenance of world peace … without prosperous trade among nations any foundation for enduring peace becomes precarious and is ultimately destroyed … The political and social instability caused by economic distress is a fertile breeding ground of agitators and dictators, ready to plunge the peoples over whom they seize control into adventure and war.”

Mutual benefit-Mutual responsibility. American officials believed that international commerce was mutually beneficial – positive-sum, not zero-sum – and that national economies were deeply interdependent. However, harnessing those benefits came with important responsibilities.

Officials articulated those mutual benefits in a number of ways, including both straightforward benefits associated with exports, and more subtle benefits associated with cheaper imports and income earned by US trade partners. Hull testified in 1943, for example, that international commerce should be expanded through “mutually beneficial trade agreements.”

Mutual benefits and interdependence, however, were not sufficient to sustain international cooperation. A mutual view of responsibility was also required. Key responsibilities included reciprocity in the reduction of trade barriers and maintaining full domestic employment, as I noted above, to boost demand for imports and ensure continued political support for open trade.

Nations could join and benefit from a liberal, multilateral trading system only if their own economies reciprocally opened to exports from trade partners. This was the core rationale for the RTAA and postwar negotiations. In Hull’s words, the United States would “grant to foreign countries reductions in our tariff rates in exchange for benefits to our trade by other countries.”

Enduring ideas?

Effective institutions can facilitate cooperation, both domestically and internationally. The architects of the economic institutions of the liberal international order harnessed painful experience and keen insight to carry out their institutional vision. The principles associated with that vision, as I have characterized them–multilateral coordination, liberalization, mutual benefit-mutual responsibility, and economy = security–might yet have some relevance for contemporary debates. Multilateral coordination to ensure inclusive ownership of international policy decisions and predictable rules for international commerce; liberalization to ensure that the substantial material benefits of global commerce are realized and used to strengthen connections among nations; mutual benefit-mutual responsibility to ensure that the reciprocal gains from trade are recognized while at the same time obligations to international partners and domestic constituencies are honored; and economy = security to ensure that security partnerships are made more credible and durable through partnerships that also bind economies and citizens closer together. 

The ethos of the liberal international order represented genuine progress over the destructive anarchy of trade war and military conflict, and the pernicious ideologies that gave rise to them. American leaders and their allies envisioned an alternative based on interdependence, mutual responsibility and shared security and prosperity. Secretary of State Cordell Hull seemed to appreciate the tenuous nature of peace and the fragility of post-war economic wisdom. The continuation of peaceful and productive international relations would require commitment and vigilance. He ably expressed that sentiment in his letter to the Nobel Committee upon receiving the Nobel Peace Prize in 1945, awarded mostly for his work related to the founding of the United Nations: “[t]he crucial test for men and for nations today is whether or not they have suffered enough, and have learned enough, to put aside suspicion, prejudice and short-run and narrowly conceived interests and to unite in furtherance of their greatest common interest.”

Featured image is Photograph of Cabinet meeting at the White House, by Abbie Rowe


Neither Slave Nor Free: Jemar Tisby and the Moral Limits of American Protestantism


“But is it supported by Scripture?” he asked me as we stood in the taco truck line after the church service. The “it” was the idea that the history of racial injustice in America, enabled by Protestant churches, required some sort of response in the present. The argument in favor of the idea had come from the sermon we had just heard at a multicultural church in Palo Alto, California. The sermon was delivered by Jemar Tisby, a Christian theologian currently pursuing his PhD in history at the University of Mississippi. It was a compressed version of his recent book The Color of Compromise, on the American Protestant church’s (occasional) resistance to and (frequent) support of white supremacy.

I grew up in Evangelical Christian churches in Kansas, but it had been years since I attended an American evangelical church service. The church service in Palo Alto was different though. The congregation was more diverse. I didn’t recognize the worship songs (though the plaintively earnest chord progressions were just as I remembered them). The sermon content, white American Protestantism’s complicity in racism, was also different. The evangelical churches that I went to growing up didn’t talk about the legacy of racism. They didn’t even talk about Martin Luther King Jr. For all I know they’re not talking about him still.

Just-Asking-Questions seemed like he came from a background similar to my own, given his need for Biblical justification for any deviation from his view of the world. It was a worldview that included, he informed me over tacos as if imparting secret knowledge, the desire to get black voters “off the Democratic Party plantation” because the founders of the KKK were Democrats.

The conversation with Just-Asking-Questions was surreal. It wasn’t just the discovery of a member of the D’Souzan school of American historiography in the wild. It was also because we had just heard a sermon full of historical examples challenging the American Evangelical belief that “just following Scripture” is all we need to behave morally. 

Jemar Tisby’s Color of Compromise can be seen as doing for white American Protestantism what Michelle Alexander’s The New Jim Crow is doing for the criminal justice system, or Richard Rothstein’s The Color of Law is doing for housing policy. Like Alexander and Rothstein, Tisby draws attention to both the stubbornly persistent legacy of white supremacy in American culture and our desire to obscure that fact. 

Also like Alexander and Rothstein, Tisby takes the long view of history with an eye for the inconsistent use of supposedly neutral principles to racist ends. In the case of  white American Protestants, the neutral principle was a doctrine sometimes called “the spirituality of the church,” which sought to separate the church from the realm of politics. When white supremacy was the status quo, some white Protestants used the doctrine to discourage political engagement for social change. When white supremacy was threatened, they used the doctrine to keep the state out of the internal affairs of the church. When white supremacy needed other methods, they abandoned the doctrine entirely. Ultimately, whiteness was the rock on which they built their church, and the threat of black equality would not prevail against it.

Thornwell’s “spirituality of the church”

One of the major advocates for the doctrine of the “spirituality of the church” was the Southern theologian James Henley Thornwell. In the 1850s, as abolitionists and slavery advocates rhetorically shot at each other over the status of slavery in the U.S. Constitution, Thornwell attempted to pull the church from the crossfire. The church, according to Thornwell, “had no commission to construct society afresh… to re-arrange the distribution of its classes, or to change the forms of its political constitutions.” As Thornwell saw it, the Bible was the only “constitution” the church bound itself to, and its authority was only binding within the confines of the church itself.

This theological position wasn’t new, however. It had existed in practice since America was a collection of colonies bunched along the Atlantic coast. As slaves became a more central part of the economic order of colonial Virginia, the colony’s General Assembly attempted to police the “spirituality of the church” via statute. In the colonial era, some of the people enslaved, as well as some of those claiming to be their masters, were under the impression that Christian baptism freed a person from the status of slavery, as was the longstanding custom in England.

Wanting to make sure that those enslaved had the blessings of Christian salvation without losing their lucrative forced labor system, the all-white Virginia General Assembly intervened in September 1667. The Assembly passed a law informing the white colonists that “It is enacted and declared by this General Assembly, and the authority thereof, that the conferring of baptism does not alter the condition of the person as to his bondage and freedom.”

Christian ministers reinforced this point by creating a separate liturgy for the baptisms of black individuals who were legally classified as slaves. Anglican minister Francis Le Jau, who did missionary work in the colony of South Carolina, had all enslaved individuals make a unique affirmation before receiving Christian Baptism: 

You declare in the presence of God and before this congregation that you do not ask for holy baptism out of any design to free yourself from the Duty and Obedience you owe to your master while you live, but merely for the good of your soul and to partake of the Grace and Blessings promised to the members of the Church of Jesus Christ.

Tisby summarizes this theological point succinctly, “Christianity could save one’s soul but not break one’s chains.”

Civil War and the fracturing of American Protestantism

Such was America’s moderate to conservative white Protestant theological consensus up to the time of Thornwell. As the Civil War approached, however, the theological consensus started cracking. Protestant denominations, controlled predominantly by Northern leadership, began demanding that ministers in leadership positions not be owners of human beings. Southern ministers balked. 

The Baptist General Convention of Alabama demanded that their Northern co-congregationalists recognize “the distinct, explicit avowal that slaveholders are eligible and entitled equally with non-slaveholders, to all the privileges and immunities of their several unions.” The Baptist Conventions of the Northern states refused to give equal rights to such “privileges and immunities.” The Baptists of the South seceded from their Northern co-religionists in 1845 to become the Southern Baptist Convention, sixteen years before their governments did. They did not apologize for fracturing the Baptists over their right to own humans until the 150th anniversary of the Southern Baptist Convention in 1995.

The Baptists were not the only major Protestant denomination that split along regional lines over the question of slavery (the Methodists divided in 1844). Despite Thornwell’s wishes to keep the political and spiritual separate, the spiritual divide of the denominations became the precursor for the eventual political fracture of the nation in 1861.

Reconstruction and white redemption

The end of the Civil War led to Reconstruction, resulting in a temporary change in the status of black Americans. They were freed from slavery (the 13th Amendment), given rights previously only held by white citizens (the 14th Amendment), and black men were given the right to vote (the 15th Amendment). These changes were exclusively changes in the political sphere. If the South had been consistent with its prior embrace of Thornwell’s “spirituality of the church” it would have taken this change in stride. As Thornwell had observed before the War the church “had no commission… to change the forms of [the country’s] political constitutions.”

The South took a different theological tack. As Tisby observes, the term used by white Southerners to describe their reign of terror against the newly freed blacks, Redemption, was a term drawn from their Christian religious context. “In biblical terms,” Tisby notes, “redemption refers to God’s plan to save people from their sins and make them into a holy nation. Christ achieved the redemption of his followers through his sacrificial death on the cross.”

The white Southern “redeemers,” as they called themselves, indeed instituted many sacrificial deaths on their path to save themselves from the perceived sin of black equality. Those deaths were not theirs, but almost entirely those of the newly freed blacks. The lynching tree served as a cross, a sign of the reinstitution of white power. Some white American Protestants, it seems, were against theological intervention when it was contrary to their interests (ending slavery), but had no problem acting on theological grounds when it served their interests (maintaining white supremacy).

Jim Crow, fundamentalism, and the return of the spiritual church

In the decades after the Civil War, the former Confederate soldiers made themselves white as snow, washed pure by the blood of emancipated blacks. The white South enshrined their new spiritual covenant through various forms of legal apartheid called Jim Crow. They also enshrined it in myth through stories of heroic cavaliers fighting for the Lost Cause, a story in which white Southern defeat was in fact not defeat, but God calling them to a higher holiness.

The white North, while marginally less explicit in its racism, created its own racial boundaries through restrictive covenants on houses prohibiting the sale of property to non-white home buyers. The theological controversies of Northern whites also had an indirect racial component – specifically, the Modernist–Fundamentalist split in white American Protestantism in the early 20th century.

In the typical telling of this controversy, the central dispute was over whether the Bible was a series of books written by people inspired by their belief in God (the Modernist position), or an inerrant single text divinely inspired by God (the Fundamentalist position). Tisby highlights another divide: whether the Bible was concerned with redeeming society (Modernists of the Social Gospel school), or with redeeming individual souls through conversion (Fundamentalists generally). 

As Tisby notes, the black Protestant church of the early 20th century found itself homeless in this debate. On one hand, the black church’s view of Scriptural inerrancy was in line with the position staked out in The Fundamentals, a foundational collection of essays for Fundamentalist American Protestantism. But black Protestants did not learn about this common theological ground with the white Fundamentalists from The Fundamentals itself. When the twelve-volume set was mailed out to Protestant ministers across the U.S., black ministers were excluded from the mailing list.

But black Protestants also found themselves in common cause with the world-transforming spirituality of the Modernists. Since black Protestants were barred from purchasing homes in middle class (white) neighborhoods in the North, and excluded from any Jim Crowed white space in the South, they longed for social transformation like the Social Gospel Modernists. Here the Fundamentalists disagreed. “To those who are crying for equality and opportunity and improved material conditions,” intoned an essay in The Fundamentals, “the Church repeats the divine message ‘Ye must be born again.’”

Civil rights and civil whites

This focus on saving individual souls served the white racialized status quo of the Jim Crow South and its more indirect equivalents in the North just fine. But as in the 1860s, the theological consensus was deemed insufficient in the face of a mounting threat–this time, the new black activism. White, conservative Protestants did not take this threat in a spirit of Christian quietism.

As the Civil Rights movement reached its high-water mark in 1965, a young fundamentalist minister in Lynchburg, Virginia by the name of Jerry Falwell delivered a sermon called “Ministers and Marches.” The sermon questioned the sincerity of Martin Luther King’s commitment to non-violent protest. Echoing the 1914 essay in The Fundamentals, Falwell intoned, “Preachers are not called to be politicians, but soul winners.”

But it wasn’t just small-town fundamentalists like Falwell taking this line. Two years prior, more moderate ministers of non-fundamentalist churches in the city of Birmingham, Alabama wrote an open letter to Martin Luther King, then languishing in a Birmingham jail for civil disobedience. While acknowledging the “natural impatience” of American blacks for justice, they insisted that the courts, not the streets, were the proper venue for seeking justice. Without questioning King’s commitment to non-violence, they feared that the street protests of Civil Rights activists, and the violent responses they provoked, undermined the democratic process and led to bloodshed.

While this ministerial letter was long on sympathy, it was short on historical context. It ignored centuries of enslavement and the violent, white-led reversal of black rights after Reconstruction ended in the 1870s. It also ignored that litigation had proved an imperfect instrument of social justice: nearly a decade after Brown v. Board of Education, almost no Southern state was in compliance with the court’s holding on school integration.

While the moderate ministers were more sympathetic to King’s movement than the fundamentalist Falwell, they shared Falwell’s opposition to direct political engagement on issues of race. If King had followed the advice of the moderate ministers of Birmingham, it would have had the same practical effect that Falwell’s advice would have had: no change to the racial status quo in the South, or the nation. King famously sent a reply letter declining to follow their advice. The Civil Rights Act of 1964, the Voting Rights Act of 1965, and the Fair Housing Act of 1968 were the result.

Falwell apparently took note of King’s successes. In 1976 he did a ministry tour across the United States called “I Love America” in which he stated, “This idea of ‘religion and politics don’t mix’ was invented by the devil to keep Christians from running their own country.” In 1979 Jerry Falwell, former preacher of this satanic doctrine, formed the Moral Majority, Inc. No longer satisfied with merely saving souls, his new motto was “get ‘em saved, get ‘em baptized, and get ‘em registered.” 

Falwell’s description of the Moral Majority fits with what we now normally think of as the white Evangelical political view: “pro-life, pro-family, pro-moral, and pro-America.” It was the Christianity I was taught in church growing up. 

The Devil and Bob Jones

Surprisingly, given the current state of the national debate, it wasn’t abortion that roused white Evangelicals to battle, but limits on religious exemption from taxation. After the Supreme Court decided Roe v. Wade, W.A. Criswell of the Southern Baptist Convention stated “I have always felt that it was only after a child was born and had life separate from its mother… that it became an individual person.” 

Criswell’s statement seems to have reflected a general trend toward liberalization in his denomination. In 1970 a poll of Southern Baptist pastors revealed that 70% of the ministers supported abortion to protect the physical or mental health of the mother, 64% supported abortion when there was fetal deformity, and 71% supported abortion when a woman became pregnant as the result of rape.

But it was not the death of the unborn that riled the godly. It was taxes. After the enactment of the Civil Rights Act, the federal government began taking a more aggressive stance on integrating public schools. As a result, white Southern parents began retreating from public schools in the 1970s, forming private, “charitable” schools for white children. 

One public school district in Mississippi was particularly egregious. White attendance collapsed from 771 students to 28 in 1969. By 1970 the white student attendance in the district public school was zero; the white children were now attending whites-only private schools. In response, three black families brought suit against the U.S. Treasury Department seeking an injunction prohibiting the Department from designating these segregated private schools as “charitable” and, therefore, tax exempt. The district court issued the injunction, and the Supreme Court eventually upheld the trial court in the 1971 decision Green v. Connally.

But like Brown v. Board of Education before it, the opinion was only as good as the federal agencies willing to enforce it. The IRS didn’t start seriously attacking segregated schools until 1976, when Democrat and born-again Christian Jimmy Carter became president. One of the IRS’ targets was Bob Jones University, which faced losing its tax-exempt status not because it refused to integrate (it already had, in 1971), but because of its refusal to permit interracial dating. As then-president Bob Jones III stated, “There are three basic races – Oriental, Caucasian, and Negroid. At BJU, everyone dates within those three basic races.” At another time, he clarified, “Even if this were discrimination, which it is not [….] it is a sincere religious belief founded on what we think the Bible teaches, no matter whether anyone believes it or not.”

The backlash against the new regulations was swift. Both Congress and Treasury Department officials received thousands of letters in protest. Not getting what they wanted from the born-again Carter, evangelicals backed a divorced movie star who told them, “I know you can’t endorse me, but I want you to know that I endorse you and what you are doing.” In January 1982, a year after becoming president, Ronald Reagan reversed the IRS’ determination denying Bob Jones University tax-exempt status. It was only after public backlash that Reagan reversed course once again, submitting a bill to Congress to reinstate the IRS’ policy for all racially discriminatory private schools.

BJU, in contrast, did not reverse its prohibition of interracial dating until 2000, after then-presidential candidate George W. Bush received significant blowback for campaigning at the university. In 2008, Stephen Jones made a formal apology which stated that “in its early stages [Bob Jones University] was characterized by the segregationist ethos of American culture.” This was eight years after Bob Jones University stopped prohibiting interracial dating, and more than forty years after the Supreme Court barred states from prohibiting interracial marriage in Loving v. Virginia.

Who cares if it’s scriptural?

Just over a decade after the president of BJU apologized for the racist policies the university enforced until the cusp of the millennium, Just-Asking-Questions turns to me in the taco line. He’s thinking about the history of race and the American Protestant church we had just been hearing about, and the argument that it creates moral demands in the present. Thinking in particular about those possible moral obligations, he asks me, “But is it supported by Scripture?”

Given my years of disengagement from Evangelical Christianity, I was out of practice couching whatever view I wanted to support in Scriptural terms. After several bobs and weaves while I thought of an answer, I hit on one. The Bible is very comfortable with the idea of the sins of prior generations being visited on future generations. (See Exodus 34:6-7; Deuteronomy 5:8-9.) Why shouldn’t that apply to the legacy of racism? Even assuming we are not guilty of the racist actions of white Americans in the past, we are responsible for the consequences of past racist choices on the present. 

Just-Asking-Questions didn’t seem persuaded, in part because I did not express the point as clearly as I just did, and in part because Just-Asking-Questions was, per his fake name, just asking questions. He wasn’t interested in the answers. What I really wanted to ask him, but did not, was “Who cares if it’s supported by Scripture?”

This question is especially pressing after reviewing the history Tisby lays out in The Color of Compromise. It is not as if Scripture provided any meaningful moral check on the desires of many white American Protestants to profit from slavery, enforce white supremacy via lynching and Jim Crow, and put barrier after barrier in the way of fulfilling the promise of Brown v. Board of Education and related cases like Loving v. Virginia.

This line of thinking gets to the main weakness of The Color of Compromise. While Tisby implicitly indicates that there is no value-neutral reading of the Bible and that we are morally responsible for how we read and use the text, he never explicitly states it. In fact, sometimes he attempts to absolve the Bible of obvious complicity in the sad history of American racism.

The most egregious attempt occurs in Tisby’s coverage of the debates over Scriptural support for slavery. Early in the book Tisby chastises theologians for defending slavery based on the “apparently tacit acceptance of the Bible,” observing that “many other Christians did not see anything in the Bible that forbade slavery.” The implication is that their view was incorrect.

But the Bible’s acceptance of slavery is far more than tacit, it’s active and explicit. The Christian New Testament instructs slaves to “regard their masters as worthy of all honor, so that the name of God and the teaching may not be blasphemed.” The passage goes on to instruct slaves who have Christian masters that the slave “must not be disrespectful to [their Christian masters] on the ground[s] that they are members of the church” and must “serve them all the more.” (1 Timothy 6:1-2; see also Ephesians 6:5-9.)

Tisby doesn’t engage with passages like this and it isn’t hard to see why. How is this inconsistent with the Virginia General Assembly declaration in 1667 that “the conferring of baptism does not alter the condition of the person as to his bondage and freedom”? If anything, the General Assembly’s position is less demanding than the Bible’s view. The Bible not only states that Christianity will not free a person from slavery, but converting imposes a moral obligation to be a better, more submissive, slave.

It’s not that there aren’t ways to deal with this problem, it’s just that the solutions require promoting certain Biblical values (mercy, justice, and equality of believers) above others (judgment, punishment, and submission to authority). This makes it necessary to state plainly that some parts of Scripture are wrong, and by extension demote the status of Scripture as the ultimate arbiter of Christian life. This is a conclusion that many of the white evangelical readers that Tisby is trying to reach would almost certainly reject. 

In a way, Tisby’s refusal to take questions of Scriptural authority head-on comes from the same place as Just-Asking-Questions’s desire not to engage with “unscriptural” critical race theory. Both of them want a Bible that supports their view of the world.

But of course, the Biblical texts weren’t written to conform with either of their preferences. The texts of the Christian New Testament were written to deal with the social and spiritual problems of people in the Ancient Mediterranean world, not early 21st century America, or even mid-17th century Colonial Virginia for that matter. While my political sympathies are with Tisby, I don’t think a better informed view of the Bible by itself will save us. Ultimately, we’ll have to save ourselves.

Featured image is a stained-glass image of Confederate generals in the National Cathedral, by Carolyn Kaster/Associated Press


That Feeling When You Take Memes Seriously

Memes are destroying America. Haven’t you heard? Whether produced by enemy nations as psy-ops or simply by the evil among and within ourselves, they are definitely bad. And they are definitely serious.

As someone who takes rhetoric pretty seriously, I requested a review copy of a certain book by what I had been assured were two of the top scholars studying rhetoric today on the topic of memes, and specifically alt-right memes in 2015/16. I assumed the narrow focus would allow for a deep dive; I was incorrect. The book was 75% comparative communications theory, 20% summaries of news and articles about memes, and 5% engagement with source material.

The rule of serious attempts to analyze memes as rhetoric is that such attempts are impossible to take seriously. They mutter ominously about dark motives and dire consequences (or scream about this “VIRTUALLY UNREGULATED” genre), all sound and fury, without any insight whatsoever.

The combination of their newness, their frivolity, and the audience for which serious works are written, seems guaranteed to produce the most empty, pointless analysis imaginable.

What’s special about memes? Well, it’s as if a third of the population became political cartoonists overnight, only the result is even less subtle than that. And of course, politics is but one area that’s been memed; everything from video games and sports to philosophy and theology has subreddits and Facebook pages aplenty dedicated to creating and sharing memes.

But that’s it, really. It’s a more participatory political cartoon. In day to day interactions online, it is less propaganda (which is what the serious analysts want it to be) than a visual stand-in for a one-liner. It’s a means for shit-talking as well as just dicking around for laughs and attention.

Other than that, its significance is no different from any other form of rhetoric. It seems significant now because it’s everywhere. But like all rhetoric, its very pervasiveness militates against a general significance. To put it differently: the literary theorist might think Coca-Cola can brainwash you with advertising, but Pepsi, and for that matter health drinks, can advertise too. A lot of rhetorical effects cancel one another out, just like my vote for a Democrat cancels out your vote for a Republican.

An Actual Serious Analysis™ would focus on specific source material from a specific period and analyze specific effects. This is what the aforementioned book should have done: just gone through hundreds and hundreds of memes, traced their proliferation and evolution, and attempted to suss out their specific impact from how they are received in particular communities. THAT would be interesting. I would find it interesting, at any rate.

Rhetoric does matter. Business-as-usual rhetorical effects occur within the comfortable confines of institutions: you follow the voting procedure; the officiant declares a couple married. But the institutions themselves only exist because, much like money, the community “understands” them to. Some rhetorical effects can thus weaken institutions, as when confidence in a currency plummets and people stop accepting it as tender altogether. Part of the buzz around memes is that people really think we’re memeing our way to institutional death. I am skeptical that the memes are the problem. But if you think, as I do, that rhetoric matters, and also that memes are a form of it, then consider bypassing the unified-theory-of-memes approach in favor of an approach that sticks close to specific examples and pays attention to the communities that make use of them.


We All Know Too Well What Happened: Daniel Okrent and Thomas Leonard on Eugenics and Immigration

“In the first three decades of the twentieth century,” Thomas Leonard tells us in Illiberal Reformers, “eugenic ideas were politically influential, culturally fashionable, and scientifically mainstream. The elite sprinkled their conversations with eugenic concerns to signal their au courant high-mindedness.”

“Eugenic thinking in forms both mundane and grand increasingly stained the definition of America,” Daniel Okrent concurs in The Guarded Gate. “‘If you visit the United States,’ the French academician André Siegfried wrote after a 1925 tour, ‘you must not forget your Bible, but you must also take a treatise on eugenics. Armed with these two talismans, you will never get beyond your depth.’”

The educated American today usually has some vague notion that there was once a pseudo-science known as eugenics which was used to rationalize racism. What few realize is just how enormous a cultural and political phenomenon it was at its height—nor how many of the leading lights of the day, people still celebrated by liberals and progressives, were deeply involved in it.

Eugenics was a discipline whose adherents purported to offer the tools for “scientifically” managing a population. That is, for managing who breeds with whom in order to determine the best overall outcome. The wisdom of “rational” regulation of human breeding was accepted as obvious by all right-thinking elites of the early 20th century, and eugenics provided the idea with scientific respectability. 

Yet support for eugenics extended well beyond elite circles. On its popularity, Leonard says with characteristic compactness:

Eugenic thinking reached deep into American popular culture, traveling through women’s magazines, the religious press, movies, and comic strips. The idea of safeguarding American heredity, with its concomitant fear of degeneracy from within and inundation from abroad, influenced ordinary Americans far removed from the eugenics movement’s professionals and publicists.

And Okrent says with equally characteristic flair: 

According to a federal study, the number of articles on eugenics appearing in the popular press had tripled between 1909 and 1914 alone, capturing more space, wrote John Higham, “than on the three questions of slums, tenements, and living standards combined.” Publishers of a popular sex manual (pro-abstinence, anti-spooning, and very concerned about the inevitable palsy and deafness brought about through “self-pollution”) elevated its appeal by promising “Scientific Knowledge of the Law of Sex Life and Heredity or EUGENICS.” … Movie theaters played a film called The Black Stork, “a eugenic love story.”

The Guarded Gate is a book about efforts in the decades around 1900 to restrict the entry of immigrants into America, and the parallel development of eugenics to which the restrictionists would eventually hitch their wagon. Illiberal Reformers, on the other hand, is about the progressives more broadly; the economic, institutional and cultural backdrop of their time and the various intellectual frameworks which served as their guiding star. The time period covered is roughly the same; Leonard starts a little earlier and Okrent ends a little later.

Published in January of 2016 and the result of years of work, Illiberal Reformers happened to hit bookshelves at the beginning of one of the most ferocious outpourings of anti-immigration sentiment in decades. The Guarded Gate, which came out this year, is on the contrary a direct answer to that unfortunate movement, though Okrent never quite comes out and says it. However useful these books may or may not be as mirrors to hold against contemporary politics, each one is an invaluable look into a moment in American history both unique and uncomfortably familiar. Put into conversation with one another, they are greater than the sum of their parts.

Progressives were not liberal

The titular illiberalism of Illiberal Reformers refers to the progressives. Though Leonard’s reading of what constitutes liberalism appears at times suspiciously particular, he makes a compelling case that the progressives were contemptuous of the very idea of individual rights. The German-educated American economists of the second half of the 19th century rejected individualism in favor of a social holism that even contemporary communitarians would find onerous.

The progressive economists’ rejection of individualism and their embrace of what Daniel Rodgers calls the “rhetoric of the moral whole,” was perhaps best embodied in Edward A. Ross’s concept of social control, which referred broadly to all means, public and private, by which “the aggregate reacts on the aims of the individual, warping him out of his self regarding course, and drawing his feet into the highway of the common weal.” Individuals, Ross maintained, were but “plastic lumps of human dough,” to be formed on the great “social kneading board.”

The abstraction of the social whole, or the social organism, or the collective mind of society, in practice meant “social control” exercised by experts. And lest one have any doubt about what this meant in the heyday of eugenics, consider the Wharton School’s Scott Nearing, writing in 1912:

Nearing went on to assert that permitting perpetuation of hereditary defects was “infinitely worse than murder.” After all, the murderer “merely eliminates one unit from the social group,” whereas the transmission of defective heredity curses and burdens “untold generations.” A truly just society thus had an overwhelming obligation to prevent the crime of defective heredity. For the price of six battleships, Nearing estimated, the United States could house, in isolation, all of its defectives. Such a policy would remove, at a stroke, the “scum of society” and, by preventing defectives from procreating, end their threat to future generations. [emphasis added]

Today, many of my fellow liberals happily wear the label “progressive.” Like all political labels, “liberal” included, its meaning drifts quite significantly over time; labels get discarded when they gain too much baggage, only to be picked up once again in a different context to replace something else. There isn’t much point in picking a political label as your hill to die on. But inasmuch as they are seeking to establish a connection with the progressives of the Progressive Era, they ought to reconsider.

Again, progressives’ social holism should not be confused with the sophisticated communitarian critiques of liberalism that became popular in the 1980s and have remained valuable ever since. The progressives’ social holism was crude, made worse by the conquest of evolutionary thinking and Darwinism over the educated Western mind of the time. When they spoke of the need to “cull inferiors” on behalf of the “social organism,” they were not being metaphorical. Richard Ely, a founder of the American Economic Association, was quite plain: the existence and priority of the social organism over and against the individual was “strictly and literally true.”

The implication that we could have avoided this mess if everyone had just remained good Millian liberals runs throughout Illiberal Reformers, but Leonard does not offer blanket condemnations. He has no patience for the historians who have “treated the progressives’ ambiguous legacy by wishing it away.” 

Those who admired the progressives ignored or trivialized the reprehensible and wrote lives of the saints. Those who disliked the progressives ignored or trivialized the admirable and wrote lives of the proto-fascists. But Progressivism is too important to be left to hagiography and obloquy.

The progressives who undeniably “dedicated themselves to social and economic betterment” of the poor nevertheless “made invidious distinctions” on the basis of race science; “valorizing some as victims deserving help” but “vilifying others as threats requiring restraint.” One cannot deny the greatness of the progressives, nor that that very greatness was frequently terrible. But the progressives of the Progressive Era were not liberals, in any meaningful interpretation of the tradition. They were quite happy to say so at the time; liberalism was precisely what they felt they were progressing beyond.

Science and contingency

For Leonard, the progressives struggled with a central tension: the desire to help a group frequently mixed with the desire to restrain them. For Okrent, it is more noteworthy that they wished to help at all, compared to the non-progressive elements in the immigration-restriction movement. The contrast is interesting to Okrent precisely as an illustration of the ideological diversity of those drawn into the restrictionist movement and swept up in the prestige of eugenics.

Leonard frequently notes the prominent figures of the day outside of the progressive movement who shared many of their vices—some of whom, like Henry Cabot Lodge, are key protagonists in Okrent’s book—but these serve as asides from his primary purpose. Okrent casts a wider ideological net than Leonard due to their difference in focus.

Leonard provides an eagle-eye view of the circumstances in which the progressives found themselves. He notes that in the thirty-five years after the last Civil War amendment was ratified, “the US economy had quadrupled in size” and “American living standards had doubled.” It wasn’t all good news, however; the engine of growth which “propelled the American economy upward did so undependably,” as record-breaking growth went side-by-side with financial crashes and “prolonged economic depressions.” Moreover, between 1895 and 1904 alone, “1,800 major industrial firms were consolidated into 170 giant firms” unprecedented in scale—with market value “1,000 times larger than the largest manufacturing enterprises of the 1870s.” It was a time of dizzying change and volatility.

While Okrent does provide some context of this sort—most significantly in discussing the sheer number of immigrants which arrived during the various periods covered in The Guarded Gate—his focus is less on institutions and statistics than on the empirics of politics. The politicians, scientists, activists, and figures whose efforts boosted the cultural currency of eugenics and produced a set of extremely restrictive immigration laws. Okrent shows us their lives, who they were, where they came from, and the specific ways in which they coordinated their campaigns to lock America’s front door and throw away the key.

While Leonard discusses the vast institutional changes wrought by the reformers over this period, such analyses can easily leave one wondering how exactly this was done. Okrent gives us a window into some of those particulars: the coalition building, rabble-rousing, and cajoling for funding or to change votes, as well as into the lived experiences of the key players and others who lived through the era. Where Leonard fleshes out the logic of intellectual frameworks, Okrent offers samples of political cartoons, pamphlets, and eugenics survey forms. Rather than one approach proving superior, this is the most crucial way in which the two books complement each other tremendously.

The extent to which eugenics contributed to the victory of the immigration restrictionists is somewhat ambiguous. Okrent details a committed movement that preexisted its appropriation of eugenics, on the one hand, and a series of contingent historical events, on the other. One wonders, for example, what would have happened if the literacy test—which was vetoed by no fewer than four Presidents—had come up for Taft’s signature when he thought he might have a chance of winning reelection, rather than after he was a lame duck. Or if World War I had happened differently, or with different timing; both Leonard and Okrent make it clear that the sudden ceasing of immigration during the war provided restrictionists an opportunity they did not fail to seize.

Yet it is precisely because so many recent attempts came close to passing but failed at the crucial moment that the role of eugenic rhetoric in their ultimate triumph seems so important. Of course, the sins of eugenics extend far beyond the immigration debate. In America, tens of thousands of people were sterilized against their will due to laws enacted on the basis of eugenics. As the most chilling chapter of The Guarded Gate recounts, Nazi scientists drew directly on the work of American eugenicists, and in many cases collaborated with them.

The temptation, when looking back on a history of bigotry against immigrants, forcible sterilization, and—unavoidably—of death camps, is to ask what could have been done differently. Or, more pressingly, what we can do now to avoid succumbing to some modern day eugenics that we might not notice until it is too late, if at all. Neither Leonard nor Okrent address this question directly, but each brings a very clear perspective which implies some possible answers.

Okrent goes out of his way to emphasize, again and again, that eugenics was an unrigorous pseudo-science. Of Charles Davenport, founder of the Eugenic Record Office and arguably the foremost American authority on eugenics, Okrent writes:

Much of Davenport’s study of human inheritance was predicated on one gigantic scientific error: his belief that characteristics as complex and as unmeasurable as memory, loyalty, and “shiftlessness” were determined by a single unit character—in other words, that the origin of each was, genetically speaking, no more complicated than the color of the flowers on one of Gregor Mendel’s pea plants.

He calls this a “stunning assumption for such a well-trained scientist,” remarking that the evidence was “flimsy,” the ones who collected it “inculcated” in “Davenport’s warped version of Mendelian theory.” Again and again, Okrent feels comfortable pointing out what assumptions a “well-trained scientist” at the time ought to have known better than to make. Or, alternatively, how obvious it should have been that the data they relied on, either because of the irrelevance of the question it answered or the poor training of those who collected it, was utterly worthless.

Anachronism aside, it is hard to disagree. Francis Galton, who was, among other things, the founder of eugenics as a subject matter, comes off as an utter crank in most of his output:

Francis Galton’s motto, a colleague said, was “Whenever you can, count.” He counted the number of deaf worms that emerged from the ground near his London town house after a heavy rain (forty-five in a span of sixteen paces), and he counted the number of flea bites he suffered in 1845 while spending a night in the home of the Sheikh of Aden (ninety-seven, but even so he thought the sheikh “a right good fellow”). Galton consumed numbers ravenously, then added them, divided them, shuffled and rearranged them so he could amaze himself with his own discoveries.

Okrent continues,

His meticulously constructed “Beauty Map” of Great Britain, he believed, established that Aberdeen was home to the nation’s least attractive women. His essay “The Measure of Fidget,” published in England’s leading scientific journal, was an effort to “giv[e] numerical expression to the amount of boredom” in any audience by counting body movements per minute. Observation and enumeration convinced him that “well washed and combed domestic pets grow dull” because “they miss the stimulus of fleas.”

One is reminded of Deirdre McCloskey’s remarks in Bourgeois Equality:

The Europeans discovered in the seventeenth and eighteenth centuries how to talk rationality, which they later applied with enthusiasm to counting the weight of bird seeds one could fit into a Negroid skull. The numbers and calculation and accounts do appeal to a rhetoric of rationality—“arguments of sense.” But they do not guarantee its substance.

Galton’s faith in counting was indeed just that: faith. There was nothing especially rational—or scientific if you prefer—about the substance of most of it. As Okrent put it:

Galton’s major discoveries—among them the individuality of fingerprints, the movement of anticyclones, the statistical law of regression to the mean—elevated his obsessive collection of data from triviality to significance. But for every one of his substantial contributions to human understanding, he probably hit upon a dozen that were trivial.

Or not so trivial, as in the case of the invention of eugenics.

I do not wish to misattribute to Okrent the argument, which he does not make but heavily implies, that this all could have been avoided with a better commitment to true science. That particular question Leonard does face head on:

Historians of science remind us that the history of bad ideas is as interesting, and as important, as the history of good ones. This is true because any bad idea of historical importance is, almost by definition, an idea that many people thought to be a good idea at the time. Histories of bad ideas show us something about how science works and what happens when it is harnessed to political and economic purposes.

Eugenics and race science are historically important, and during the Gilded Age and Progressive Era many people—most conspicuously the progressives—thought they were good ideas. The events of the intervening century, some of them horrific, have changed our view. Eugenics and race science are now bad ideas, indeed Bad Ideas, which is why twenty-first-century geneticists, economists, sociologists, demographers, physicians, and public health officials remain reluctant to look too closely at their respective disciplines’ formative-years enthusiasm for now discredited notions. The very word “eugenics” remains radioactive, and the temptation to dismiss eugenics and race science as inconsequential pseudosciences is ever present.

But eugenics and race science were not pseudosciences in the Gilded Age and Progressive Era.

If Okrent implies that what we need is to be better scientists, then Leonard implies that the best we can do is to be committed liberals, an argument for which I have some sympathy.

At any rate, Leonard makes it clear that we should not make an appeal to science where an appeal to morality is required. And Okrent makes just such an appeal. The ill-fated journey of the St. Louis is frequently invoked in contemporary debates around refugees and immigration. Okrent takes a wider view:

In the last year before the quotas were established, 50,000 European Jews entered the country; unsheathed, the new law worked like a scythe. From 1925 until the beginning of World War II, the number plummeted to an annual average of slightly less than 9,000. Subtract the new number from the old, and straight-line math would presume 41,000 a year who wanted to come but were unable to—in all, some 574,000 people over the fourteen-year period. A more aggressive accounting would recognize that the rise of the Nazis would have accelerated the pace, increasing the total radically, to…what? A million? A million and a half?

But speculation calls for caution; sorting through such a grim accounting demands the most conservative math possible. (…) Let’s begin, then, with those math-measured 574,000. Even if we assume that half of those managed to emigrate to other nations in the Western Hemisphere (it wasn’t nearly that many), we’re down to 287,000. And let’s also assume that the rate of immigration would have slowed anyway during the worldwide depression of the 1930s to, say, only half of its previous rate. And let’s further stipulate that half of these remaining 143,000 who might have emigrated were instead stranded in Europe but somehow managed to escape the Nazis. What happened to that last 70,000-plus? Or, just to be even more conservative, make that 50,000. Or even 10,000. No number one can conjure is so small that we can ignore it.
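For readers who want to check the figures, Okrent’s “most conservative math possible” can be reproduced in a few lines of arithmetic. Every number below is taken from the quoted passage; the only liberty is noting that the exact halvings yield 143,500 and 71,750, which Okrent rounds to 143,000 and “70,000-plus.”

```python
# Okrent's conservative accounting, using only the figures in the passage above.
pre_quota_annual = 50_000   # European Jewish immigrants in the last pre-quota year
post_quota_annual = 9_000   # annual average from 1925 to the start of WWII
years = 14                  # the fourteen-year period Okrent considers

blocked = (pre_quota_annual - post_quota_annual) * years
print(blocked)              # 574000 -- "those math-measured 574,000"

after_other_destinations = blocked // 2           # assume half emigrated elsewhere
after_depression_slowdown = after_other_destinations // 2  # assume the Depression halved the rate
stranded = after_depression_slowdown // 2         # assume half of the rest still escaped

print(after_other_destinations)   # 287000
print(after_depression_slowdown)  # 143500 -- Okrent's "remaining 143,000"
print(stranded)                   # 71750 -- the "70,000-plus" left stranded
```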

We all know too well what happened.

The St. Louis was sent back because the national quota for Germany had already been met that year. Beyond this numeric argument, Okrent follows the struggles of a single Polish Jew, who had family in America and tried for years to join them before the Nazis finally got him.

A moral commitment to the free movement of people and the rejection of any human taxonomy that claims or implies a hierarchy of human worth, whether true or pseudo-scientific in nature—this is what these two books, in my reading, scream for. We cannot ask most citizens to become scientists, but we can ask them to become better liberals. Whether or not you are ready to take that leap, each book individually, but especially the two put together, offers indispensable insights into a period of American history all too often whitewashed or written off.

Featured image is Louis Pasteur, by Albert Edelfelt


Universal Benefits: More Liberal and More Efficient than Employer Benefits


Early liberals focused on dismantling de jure social hierarchies and mercantilist policies, but the onset of the Industrial Revolution presented a new set of challenges. Though some hewed to the laissez faire doctrines of the 18th century and were suspicious of any industry regulation, the majority accepted that, given the stark power imbalance between a factory owner and a worker, it was necessary to introduce industrial regulations to protect the latter; the first such code was passed in the United Kingdom in 1802. As L.T. Hobhouse, a prominent Liberal Party activist of the early 20th century, wrote, 

…men of the keenest Liberal sympathies have come not merely to accept but eagerly to advance the extension of public control in the industrial sphere, and of collective responsibility in the matter of the education and even the feeding of children, the housing of the industrial population, the care of the sick and aged, the provision of the means of regular employment.

Two distinct paths were followed in the creation of these early benefits systems. The “Bismarck model,” introduced in Germany in the 19th century, was an employer-based health insurance system; in the 20th century it gained prominence in the United States. Elsewhere, especially in the UK, the Liberal Party and later the Labour Party pushed through the alternative “Beveridge model,” a system of national health insurance. The UK system began in 1911 with the National Insurance Act, which was in fact much more along the lines of the Bismarck model, and was replaced with the single-payer National Health Service after World War II (notably, the German system, while still relying on employer contributions, has grown into a fully universal system, unlike the system in the United States). 

While the two models are generally used when discussing health insurance specifically, they can also be applied more broadly to any benefit: are employers mandated to give a benefit (be it health insurance, pensions, unemployment insurance, maternity/paternity leave) to their employees, or are these benefits instead provided to every citizen universally by the government? The question has grown more acute as the public demands more benefits. Traditionally, the US has routed many benefits through employers rather than providing them publicly, but universal systems are superior both in terms of minimizing the burden on business and maximizing individual freedom, two key liberal goals.

There is a long history of requiring employers to provide benefits rather than providing them universally. In the United States, the system was built in stages. In the early 20th century, progressive reformers created early workers’ compensation programs in several states. This move was itself modeled after a Bismarckian program in Germany, and it provided at least some protection for a common class of injuries through insurance purchased by employers. Federal wage controls during the Second World War gave rise to the practice of employers adding health insurance benefits to compete for scarce workers. While price controls are no longer in effect, other policies have stepped in to encourage employer-based insurance: the fact that premiums paid for employer-based insurance are tax deductible and exempt from payroll taxes means that offering benefits is far cheaper than offering an equivalent increase in take-home pay. Employer matching contributions to employee 401(k) plans are similarly subsidized. 

These tax advantages, and the lack of universal health benefits, created the basis of the employer-based benefit system. This system was further entrenched by the employer mandate of the Affordable Care Act, which levies dramatic fines against companies with more than 50 employees that fail to provide health insurance for at least 95% of them. While the individual insurance marketplaces were the most visible element of the ACA, only a few percent of Americans actually use individual plans; the great majority of mid-to-high income, non-elderly Americans are covered by company plans—an arrangement that is now, in many cases, legally mandated. This situation came about largely without a comprehensive plan, but the general reliance on employer-provided benefits has had cheerleaders on both the right and the left, urging the creation of an employer-based system and in some cases working against more universal alternatives.  

From the right, conservatives are generally more comfortable with employment-based benefits because they aren’t a “handout.” The logic goes that workers have earned benefits in addition to their wages, and so it is more just to distribute benefits through employers; alternatively, if too many benefits are provided universally, the beneficiaries will not be motivated to work. Furthermore, especially since the early 1990s, conservatives have made reducing taxes a centerpiece of their fiscal policy. Even if the cost to employers of providing a given benefit is the same as the cost of the taxes needed to provide that benefit publicly, the optics of mandating employer benefits are easier, especially in a Grover Norquist-influenced environment.

On the left, pushing responsibility for these kinds of benefits onto employers is perhaps not preferred to providing them directly, but neither has it been explicitly disapproved. Part of this is because, influenced by the theory of surplus value, many leftists and most Marxists assume that the profits of companies derive primarily from appropriating the value of their workers’ labor; thus any effort to move money from capital back to workers is seen as laudable. And since the connection between hiring more employees and earning greater profits seemed obvious and inescapable, attaching to employers a liability to provide benefits for those employees seemed to have few drawbacks. 

Moreover, it has been seen as a sort of compromise that both the right and some portions of the left can agree on—elevating workers above “slackers.” The idea seems to have been that if government provides for the smallish portion of the population that constitutes the “deserving poor,” everyone else can get a job and thus earn health insurance, pensions, etc.—and without a job, they don’t really deserve those benefits. While this may seem a regressive stance for the left to take, this hostility to “shirkers” was a common feature of 20th century communist states, and some segments of the left seemed content to agree with it. 

Finally, across the political spectrum, it seemed quite pragmatic to connect benefits to employment. Most families had at least one member working a full-time job, and the largest, richest corporations were also the largest employers. These justifications, however, have been falling apart in the last several decades, even as legislation increasingly formalizes the employer-based system.

For one thing, fewer people are full-time workers, and fewer households are headed by a full-time employee. Labor force participation is down to 63% (from ~67% in 2000), having fallen among men and stopped growing among women. At the same time, later rates of marriage and higher rates of divorce mean that household sizes are smaller, so fewer households will have one full-time worker given the same overall labor force participation. Contract or gig labor is on the rise—quite possibly prompted, in part, by the requirements placed on “employers.” Expanding the definition of employee can help on this front, but it also threatens unintended consequences. For example, Assembly Bill 5 in California, aimed largely at curtailing ride-share companies’ contracting practices, threatens to do away with the owner-operator model of trucking as well. More broadly, in a highly flexible economy the differentiation between employer-employee relationships and those between a contractor/professional and a customer is always going to be blurred around the edges.

Moreover, the connection between number of employees and profitability has essentially disappeared. Of the ten most profitable companies in the US, only one, Berkshire Hathaway, is also one of the ten biggest employers in the US. Mandating employer-based benefits burdens companies with many employees, like UPS, immensely, while costing companies that derive most of their value from intellectual property, networking effects, or capital investment, like Apple, JP Morgan, or Alphabet, very little. This, in turn, creates a questionable incentive to employ as little labor and as much capital as possible, driving up returns on investment while stunting wage growth.

Shifting from employer-paid to treasury-funded programs would thus make for a more agile system, better able to cope with changing economic and labor conditions while reducing the burden on companies and the market distortion associated with employer mandates. These costs and complications can be considerable: for companies that span multiple states, it is necessary to set up employer-based health insurance in multiple jurisdictions through multiple insurers or subsidiaries thereof. Companies must employ health insurance experts and expend hours of labor on a task—selecting and purchasing health insurance—that is far outside most of their core competencies. Moreover, many employers have responded by employing more part-time workers or independent contractors when, ceteris paribus, full-time employees would be more efficient but for the employer mandate. Even if the full dollar value expended on mandatory benefits were instead taken in taxes, most employers would be relieved to have a simpler, more transparent system. Businesses would also likely respond by hiring more employees, since the effective penalties for doing so would be decreased. Employers are not the only ones who would benefit, however: both employees and other citizens would enjoy the advantages of a universal system over an employer-based one. 

It’s quite clear that employer-based health insurance and pensions are much less practical today than they were in the mid-20th century. Individuals change jobs much more frequently than at that time, averaging twelve job changes in a career. The time and energy expended on setting up new benefits, worrying about transferring coverage, and so on is a major liability for those who seek better jobs. There’s little doubt that some people choose to stay at worse paying or less rewarding jobs precisely because they want to avoid the insecurity or hassle of moving their benefits over, wondering if the new benefits (which are often quite difficult to understand) will be comparable, and so on. These frictional costs, multiplied across a major sector of the economy (most of the full-time workforce has some kind of employee benefits), present the possibility for very large deadweight losses in terms of positions that aren’t filled with the best candidates or employees stuck in jobs where they are not maximizing their productivity. Moreover, the prospect of losing one’s employer-based benefits discourages entrepreneurship by introducing unacceptable risks to the prospect of being unemployed, even temporarily. 

The initial implementation of employer-based benefit mandates made sense—at the time, most households included a full-time worker, most full-time workers stayed with the same employer for a significant period, and there was little appetite for more government spending. Today, these justifications are insufficient. Both individual and corporate liberty would be maximized by removing many employer-mandated benefits and replacing them with universal benefits available to any citizen. Doing so would allow for a smoother employment market, more confident hiring by companies, and an overall more productive citizenry, to say nothing of the relief it would bring to millions who feel trapped in their jobs, or in perpetual fear of a layoff, because their critical benefits are still tied to their employer. Furthermore, while often derided as ‘expanding’ government, a universal benefit system in fact makes government involvement in the economy more transparent, and distorts the rest of the economy less than the carrots and sticks used to enforce the employer-based system. Indeed, providing universal benefits grants each individual substantially more effective liberty, as it allows individuals to enter into more varied work relationships, including part-time, ‘gig’, or entrepreneurial work, or leaving the formal labor force for education, family, or other pursuits, without fearing the loss of their benefits. Demands for more comprehensive benefits, in terms of healthcare, leave, pensions, or even minimum income, ought to be met with universal services to the extent possible, rather than by placing additional mandates on employers. 

Featured image is The Factory, by Vincent Van Gogh
