Islam in the West

Earlier this month, I spent a week in Sydney. It was my first time in Australia. I did the standard tourist things—I went out on a boat in Sydney Harbor and saw the Opera House and walked along Bondi Beach and the like—but most of my time was spent in the western part of the city, where I have family. Most tourists don’t stop in western Sydney, which instead of beaches and historic buildings has a vast expanse of strip malls and gas stations and American fast food chains. In terms of the built environment, it reminded me more than anything else of suburban Florida. I think western Sydney may be the most interesting place I have ever visited. Certainly nowhere in the English-speaking world is like it.

Intellectually, I knew coming into my trip that the places where I would be going were more Muslim than any place where I’ve ever spent time in America: Greenacre (where I was staying) is 47% Muslim, Punchbowl (where family friends live) is 38% Muslim, and Lakemba (one stop over on the train) is 61% Muslim. I was entirely unprepared for how this would feel. Strolling through the Bankstown mall felt exactly like walking through an indoor shopping complex anywhere else, except that it seemed as though most of the women (both customers and workers) were wearing headscarves and every eatery prominently displayed a halal certification from the Australian Federation of Islamic Councils. It felt as though I had stepped into an alternate version of America where everything was the same except that mosques were everywhere and everyone was like me. My brother and I were at a rugby league match deep in the suburbs one evening when prior to the start of play we observed that it was time for evening prayers; walking around the stadium, we found a stadium worker just finishing his prayers, after which we borrowed the prayer mat he had laid out. I interacted with dozens of Australians of a variety of ethnicities during my trip, and (service staff excepted) only three were not Muslim.

In the Western world there are plenty of cities with areas that are heavily Muslim, but none of them are like the Islamic belt of western Sydney. Overwhelmingly Muslim areas in the United States or in Britain or in France or anywhere else tend to be poor and undesirable urban neighborhoods; when people get enough money to leave, they usually depart for more desirable areas that have smaller Muslim populations. Moreover, they are generally not really Muslim neighborhoods but ethnic neighborhoods populated by an ethnicity that happens to be Muslim. For instance, the London borough of Tower Hamlets, in addition to being one of the poorest parts of the city, is the most Muslim area in Britain at about 35% Muslim (and parts of the borough have a much higher Muslim proportion than that). This is because Tower Hamlets is 32% Bangladeshi. Tower Hamlets is not a Muslim area but a Bangladeshi area, and the numbers show that it has no particular appeal to Muslims who are not Bangladeshi; those Muslims live in their own ethnic neighborhoods. Muslim portions of New York and Paris are also like this.

Sydney is not like this. I have not been to all of Islamic western Sydney, but the areas where I did go were made up of car-dependent middle-class suburbia that felt much more like where I come from than like Jackson Heights or Tower Hamlets. Unlike in other countries, these areas are truly religious neighborhoods rather than ethnic neighborhoods. I had been told that Lakemba was the Bangladeshi neighborhood, and Bangladeshis do form the largest ethnic group there, but they make up just 14% of the population, meaning that the vast majority of Lakemba’s Muslim population belongs to other ethnic groups. The predominant ethnicity in Greenacre is Lebanese, but its proportion of Lebanese is just 32%: far less than its proportion of Muslims. The shared characteristic that brings members of these communities together and encourages them to live together is not ethnicity but religion.

Australia has a relatively large Muslim population, at least compared to America. Greater Sydney is 6% Muslim, about twice the proportion of the country as a whole. Yet there are enormous portions of the metropolitan area where Muslims are almost completely absent. In the suburbs to the east of the city and in the suburbs north of the harbor, traditionally the richer parts of town, the Census website does not even list the Muslim populations because they are so negligible. This does not mean that there are no rich Muslims in Sydney, but it does suggest that a significant number of Muslims with money prefer to live in their own Muslim communities rather than decamping for the rich neighborhoods where they would be part of a tiny minority. I doubt that people in Sydney realize how unusual this is. Certainly nobody outside the country knows how unusual Sydney is.

Naturally I was curious as to what it would be like to live in a place like this. I was brought up in a rich neighborhood where Muslims are a tiny minority, as all Muslims born to money in America are. Multiple times during my stay in Australia, my Americanness was the subject of comment, because it was remarkable to Australians that I really saw myself as being just as American as, say, a white American. Over and over again I heard Muslims born and raised in Australia refer to “Aussies” to mean white Australians; even though they spoke Australian English and Australia was the only home they had ever known, it was inconceivable that they themselves would be Aussies. It wasn’t as though they belonged to the old country—living in a multiethnic suburb populated by Muslims speaking all sorts of different languages, it would be impossible to identify an old country—but somehow they did not really feel themselves to be Australian despite belonging to a society that only exists in Australia. That was quite alien to me.

I am also curious as to how a society like this developed in Australia and nowhere else. Most likely none of the people I have mentioned would have been able to settle in Australia prior to fifty years ago, when the government abandoned the White Australia Policy. Whatever planning decisions led to the creation of an Islamic section of western Sydney must have taken place since then. When I asked around, the indications I got were that many or most of the Lebanese came during the 1980s because of Lebanon’s civil war, which drove Lebanese of all religious groups to emigrate to Australia in large numbers. Given the circumstances of that conflict, perhaps Lebanese found that in their new country they would prefer to live alongside their coreligionists than their countrymen. The presence of mosques and other Islamic infrastructure may have then drawn Muslims immigrating from Asia in the 1990s and later.

But much like Australia, western Sydney was not really terra nullius when these settlers came. The areas that are now predominantly Muslim are neither far-flung new developments nor depressed inner-city districts but suburbs relatively near the city. These suburbs were once home to Italians and Greeks, the sort of people who came to Australia back when the government only permitted white immigration. I don’t know what circumstances led Muslim Lebanese to all settle there, nor do I understand why in Sydney, unlike in other cities, that led to the area being attractive to other Muslims who looked different and spoke different languages. I have a lot of questions, but what I know is that this is a place worth asking questions about.

Medical Boards of Canada

About a week from now, I will fly from here on the West Coast to Sydney. On the way I have a layover of 15½ hours in Vancouver, which should be enough time to go out and see a city I have not visited in over a decade. Now, though, I find myself worried about whether I can get in and out of the country in a timely manner: supposedly for public health reasons, Canada retains some of the most vexatious border controls in the Western world. I have had American friends alternately barred from leaving Canada and denied entry to the country for no clear reason. The ArriveCAN app that every visitor must download before entering the country is a source of constant frustration, as one would expect from a mobile application. (At the moment I am unable to fill it out for my September 6 flight, as the “date of arrival” field in the app does not accept any date after August 31.) The justification given by the government for all of this is public health.

Mask requirements on airplanes were lifted in the United States in April and in the European Union in May, but masks are still required today on airplanes and trains in Canada. Prior to the recent announcement by Premier Doug Ford that it would be shut down, the Ontario COVID-19 Science Advisory Table was best known for producing fanciful projections of unlimited disease spread. In Ottawa, a locally notable doctor is running for school board on essentially a single-issue pro-mask platform, calling for “Safe September,” a campaign that demanded regular deep cleaning of Ontario schools for the return to classes in September 2020. To these people, it is as though the changes that everyone else has observed over the last two years never happened.

None of this thinking would be foreign to American liberals, but it is unusually strong in Canada. Proponents online of “zero covid” strategies—that is to say, policies focused on eradication of the virus rather than mitigation of its effects—tend to be disproportionately Canadian. Obvious logistical issues meant that Canadians never experienced the sort of lockdowns seen in Melbourne or Dublin, a fact these proponents bemoan as evidence of cowardice on the part of Canadian authorities. An indication as to why this might be can be found in a poll from 2020: 50% of Canadians surveyed thought that the greatest health threat to Canada was the United States, compared to 40% who answered China.

Canada is a country with little in the way of common culture aside from The Tragically Hip and the under-20 men’s national ice hockey team, but what Canadians do share is that they are not American. Americans are barbaric Neanderthals who can’t be bothered to respect the health of themselves or others by following the science and wearing a mask, which is why Canadians have to be better. (That cultured Europeans are also not following the science is of no import, since Canadians do not define their identity in opposition to Europeans.) Of course Canadians do have a great deal to be proud of with their health systems, which are far better than American health care in every way that matters. It is also true that Canadians do trust the science far more than Americans, if we take “trusting the science” to mean political institutions deferring to medical professionals. Whether this is something to be proud of is less clear.

While American politicians and governments work feverishly to ban abortion, Canadians can take comfort knowing that abortion is not an issue in their country. In fact, abortion is so thoroughly absent from Canadian political discourse that Canada is the only country in the world without any laws whatsoever on abortion. This does not mean that abortion is completely unrestricted in Canada; instead, it means that Canadian politicians, seeing how thorny an issue abortion can be (and perhaps looking to avoid becoming like America), have completely abdicated any responsibility for governing abortion.

Consequently, other than a small fringe associated with church groups in anglophone Canada (the political base of Leslyn Lewis), nobody holding or seeking elective office in Canada discusses abortion or how it should be regulated. Abortion is instead regulated by the unelected professional groups that set medical guidelines in each province. In general, no Canadian providers perform abortions after 24 weeks; late-term abortions are typically outsourced to jurisdictions in the United States where the availability of abortion is determined by the government rather than by professional organizations. Here, again, we see a divergence between the American custom of health regulations being a matter of active political debate and the Canadian custom of trusting the medical industry.

Recently, euthanasia has been in the news in Canada, as one particular incident has led the press to wonder whether the country’s regulations on euthanasia are too lax. Naturally, the Canadian approach to euthanasia operates on the same principles of self-regulation. Legalized euthanasia came about when the Supreme Court decided that the government was not permitted to ban the procedure (Canada having adopted judicial activism from its southern neighbor), and the national government has decided that it would be unwise to intervene overmuch, passing perhaps the least restrictive euthanasia law in the world. Obviously being put to death in a medical facility is not remotely the same as being told to put on a mask on the train, but the connection here is the particularly Canadian view that it would be too politically difficult to treat matters of health and medicine as public affairs to be governed by public input, and that it would be better to treat them instead as private matters of the medical industry.

When the outside world thinks positively of Canadians, it may be because they are seen as friendly and polite and apologetic, but to the extent that it is based on anything real it is because they are forward-thinking and progressive. Crucially, Canadians are forward-thinking and progressive in a way Americans are not, which is to say that those characteristics are fundamental to how Canadians can be differentiated from the people they otherwise so resemble. It’s easy to see how those perceived attributes would lead to scientism and to the idea that science sits beyond and above the realm of politics. The constant contrast between the enlightened Canadian who trusts the science and the benighted American raging against the light means that scientistic thought is embedded into what it means to be Canadian. Trusting the science is even more a part of liberal Canadian nationalism than is official bilingualism.

Structural Transformation

I have written in the past about the affection that American liberals have for the idea of electoral reform. Sometimes it can seem as though electoral reform is a panacea for every problem with America that one might imagine. From this perspective, the object of electoral reform is not just to better represent the popular will: because the people are inherently virtuous, the belief seems to be that electoral reform will allow that will to be expressed in the form of better politicians, who will thereby fix the political system. Personally I am skeptical of this.

The foremost interest of electoral reformers in this country is to fix the nomination system by reducing or removing the ability of political parties to nominate candidates for office. In other countries, nomination for office is usually an internal affair of political parties, and so the effect of these reforms would be to make American politics less like that of other countries, not more. It is difficult to find comparisons to the rest of the world for this sort of reform. However, we can see foreign parallels in other sorts of electoral reform proposed in America: for instance, ranked choice voting, enacted in many major American cities and in Maine in essentially the same form as in other countries.

Looking at the anglosphere as a whole, when electoral reform has successfully been enacted it has usually been as a result of specific failings in the electoral system. In Australia and in the Canadian provinces of Alberta and British Columbia, preferential voting was enacted to avoid splitting the non-socialist vote. (In both of the Canadian provinces, the Social Credit Party eventually consolidated the opposition to socialism and then reversed the electoral reform.) In Maine, preferential voting was enacted after Paul LePage won multiple gubernatorial elections thanks to the non-Republican vote being split. In these cases, there was a specific problem that voting reform could solve.

By contrast, when proposed electoral reform is motivated not by a specific threat but rather by a general mood of good government, the suggested reform has typically been rejected by voters: in the past fifteen years, we have seen electoral reform referendums fail in Ontario, British Columbia, the United Kingdom, British Columbia again, and Prince Edward Island. Outside the United States, English-speaking voters have not shown much interest in changing the way politics is done through changing the voting process. The exception here is New Zealand, where in 1993 voters decided to change the electoral system with the intention of permanently transforming the country’s broken political system.

In order to understand why voters in New Zealand did this, some background is necessary. In the first decades after the Second World War, New Zealand was probably the most equal society that has ever existed in the developed world. That equality was supported by general prosperity, thanks in large part to a preferential economic relationship with Britain. By the mid-1970s, however, New Zealand had entered a period of economic stagnation, as the United Kingdom’s 1973 accession to the European Economic Community led to the British government deemphasizing its economic links to its former colonies in favor of closer ties with European states.

As in many other Western countries, the postwar economic consensus fell apart in New Zealand in the 1980s. What made New Zealand unusual and led to its different political development was that there it was the ostensibly left-wing party that abandoned Keynesian orthodoxy. In the early part of the decade, a number of important figures in the New Zealand Labour Party had become enamored of monetarist economic policy. When Labour won the 1984 election, Roger Douglas became finance minister in the new Labour government. Douglas and his associates then embarked on a dramatic program of deregulation, selling off state assets, and lowering taxes. While it could be justified on the grounds that it was necessary to avoid economic collapse, one effect of these reforms was a substantial increase in inequality. This was strange for an avowedly socialist party.

The Labour government’s economic policy was not generally popular, and Labour was defeated in a landslide at the 1990 election. Rather than reverse Roger Douglas’s policies, the new National Party government under finance minister Ruth Richardson opted to double down on them by slashing public benefits and making large cuts to funding for social services. The effect was, again, to benefit the very rich at the expense of the remainder of society.

This led to a general perception in New Zealand that the political class was unaccountable. It was not just that the government had pursued unpopular policies, as has happened in many countries: because one party was voted out for its unpopular policies and the other party continued with those same policies, it was clear that no actual choice was possible in the two-party system. In an unusually literal sense, both sides stood for the same thing.

In 1993, New Zealand voters chose in a binding referendum to replace the existing electoral system with a German-style system of proportional representation. The object was not just to avoid situations where multiple candidates split the vote, but to change the country’s two-party system to a multi-party system requiring government by coalition. In this way, the thinking went, policy could not be made by a dictatorial finance minister but would require cooperation and consensus between representatives of the whole country. I can think of no other instance in the West where constitutional changes were made to facilitate the transformation of a democracy’s political system in this way. The effect was immediate: Roger Douglas and his supporters formed a new party (later joined by Ruth Richardson) that soon found itself in alignment with the National Party. Right-wing opponents of the new economic policy formed their own party, as did those to the left of the Labour Party. Starting in 1996, New Zealand was governed by coalitions.

What has the effect of all of this been? It’s impossible to answer counterfactuals, and so it’s possible that had New Zealand continued down its pre-1984 path, things would have been much worse. However, young New Zealanders continue to emigrate to Australia in droves because wages in their home country have not kept up. New Zealand does not tax capital gains at all, and it is probably no coincidence that it has one of the world’s greatest housing bubbles, concentrating wealth in the hands of those who already own or are in line to inherit property. In 1983, New Zealand’s GDP per capita was 93% of Australia’s; by 2018, that figure had fallen to 71%. It would be hard to argue with a straight face that economic liberalization in New Zealand has been an unmitigated success or that New Zealand has realized the dream of becoming a land of opportunity.

Politically, where before New Zealand had two parties, now it has two political blocs. Of the five parties in Parliament, the Labour Party and the Green Party are on one side and the National Party and ACT (Roger Douglas’s party) are on the other. Only the small Māori Party, a splinter of the Labour Party meant to serve Māori interests, has the ability to negotiate with either bloc. All other political groups that tried to straddle the line or represent other interests without being identified with either of the main parties have fallen out of Parliament and into irrelevance.

Looking at election campaigns in New Zealand, it would appear that they are now more personalized than ever before. Even though legislative candidates still contest the election in individual constituencies, the nature of proportional representation means that those constituencies are fundamentally irrelevant. The results of the election (and whether individual legislators are returned to Parliament) depend on the number of votes received by each party list, all of which are identified with their #1 candidate: the party leader. The entire political life of the country thus revolves around a tiny number of party leaders. Labour Party leader Jacinda Ardern became prime minister in 2017 and immediately became the darling of the international media: she was as young and charming and photogenic as Justin Trudeau but without the nagging sense of nepotism. Between her celebrity status and her good fortune in governing an island state in the first year of the coronavirus pandemic, she won a majority at the 2020 election, making Labour the first party since the introduction of the new electoral system to be able to govern without agreements with other parties.

Despite her enormous popular mandate, Jacinda Ardern has not been able to fix her country’s structural economic issues, and consequently her party is now trailing in the polls to the National Party under the leadership of Christopher Luxon. Luxon has had a strange career for a high-profile politician: he was CEO of Air New Zealand from 2012 to 2019 before resigning to be elected to Parliament at the 2020 election. A year later, it was decided that he would become leader of the National Party, and so he took that position without facing any opposition and became opposition leader and the presumptive next prime minister. At no point in this process was there any real sense of Christopher Luxon having to prove himself in any way by winning himself a mandate; essentially, he was simply handed a high position and he took it. It’s not necessary to be a conspiracy theorist to think that the selection of an obscure businessman (he even lacked a Wikipedia article until he decided to join politics in 2019) and his presentation as the next prime minister were the product of Luxon being chosen by a shadowy elite rather than of his having any popular support.

This isn’t the first time that this has happened. In 2002, the governor of the Reserve Bank of New Zealand, Don Brash, resigned his position to join politics. The National Party obliged by placing Brash so high on their party list that he was guaranteed to be given a seat in Parliament, whether any voters wanted him there or not. A little over a year later, he took over the leadership of the National Party and the parliamentary opposition. Shortly after assuming the party leadership, he made a speech criticizing Māori politics that sent the National Party promptly shooting to the top of the polls. While Brash never became prime minister, eventually resigning from Parliament and the National Party as a result of personal scandal, it remains striking how little his propulsion into the leadership involved anything even vaguely resembling democracy. Electoral reform has not increased democratic transparency in this sense. If anything, it has encouraged this sort of backroom dealing, as the extreme importance attached to the position of party leader by the electoral system encourages the elites who actually run politics to switch out leaders until they find one who serves their purposes.

So, electoral reform has solved neither New Zealand’s economic issues nor its issues with political transparency. The point here is not that electoral reform invariably leads to oligarchy or that elections should always be kept the way they are, but that electoral reform cannot by itself change political culture. While electoral reform did not fix New Zealand’s issues with internal party democracy, those issues are a reflection of its own political culture and existed prior to electoral reform: indeed, they were the impetus for that electoral reform. Similarly, it is ridiculous to suggest in an American context that changing the way candidates are elected would solve the issue of extreme right-wing politics and lead to a new era of cooperation and good government. Positive political change requires commitment to a positive agenda, not merely shifting around the rules in hopes of manipulating the electorate into choosing decent people.

Metamorphoses

Fannin County, Georgia (population 25,319) is up in the Appalachians on the North Carolina border an hour and a half north of Atlanta. If you live in or around Atlanta, you probably know its largest city, Blue Ridge, as a vacation destination for Atlantans: a place with all the lifestyle amenities urbane city-dwellers might need, complete with new mixed-use development. Thinking about all the farm-to-table restaurants and the like in Blue Ridge, and knowing the small population of the county, one might make some assumptions about Fannin County as a liberal oasis in the wilderness of rural Georgia.

In fact, the tourism industry seems to have had no effect: Fannin County is one of the most Republican counties in Georgia (82% Republican in the last presidential election), and the Republican share of the vote there has actually increased in every presidential election in my lifetime. Anyone who paid attention in their American history classes might then make a different set of assumptions: back before the civil rights movement, when all white Southerners were Democrats, surely this was a one-party county. That is the case for all the extremely Republican counties on the plains of south Georgia, but in fact for Fannin County this too is false, because the mountain counties in far northern Georgia have always been Republican. Fannin County has voted for a Democratic presidential candidate just once in the last century. Even in 1936, when Franklin D. Roosevelt won 46 out of 48 states and took 87% of the vote in Georgia, he lost there. This was never the Solid South.

To understand what made this area different, we have to go back to the Civil War. Like much of the mountainous Upper South, this part of Georgia was historically unionist because it was disconnected from the slave economy and the politics of slavery. The most prominent such areas were the western part of Virginia (which formed its own state) and the eastern part of Tennessee. Because of their Civil War affiliation, many of these places became extremely Republican after the Civil War: the congressional district containing the city of Knoxville last elected a Democrat in 1853.

Nowhere in this country (and maybe nowhere in the world) has inspired more turgid prose than the American South. You read that people here don’t forget the past, that they still live in the past, that the past and the present are the same, that the history hangs in the air like the moisture on a humid Southern night. Maybe there’s something about the cotton or the magnolia trees or Flannery O’Connor, and of course everything is about the Civil War unless it’s older than that, in which case it’s about plantation slavery. Mostly this is well-meaning gibberish from writers anxious to be the next Faulkner. The South changes just like everywhere else, and (more to the point) what it means to be Southern changes, too. I may have just talked about the Civil War, but the phenomena that really interest me here are shifts in Southern identity that are still within living memory.

Primarily because of the historic hostility of the unionist far north to the rest of the state, only two presidential candidates in history have won every county in the state of Georgia, one Republican and one Democrat. The Republican was Richard Nixon in 1972, who won everywhere in Georgia just as he won everywhere in America; his case is not interesting. By contrast, the Democrat who won every county in Georgia never won a large nationwide victory: it was Jimmy Carter in 1976. It certainly helped Carter that he was a native Georgian, but people like him, lowland farmers, had been the recipients of Appalachians’ political antipathy since time immemorial. Here they put aside that antipathy and voted for him.

In many ways, the Jimmy Carter campaign was the last vestige of the old Solid South at the presidential level. He was the last Democrat to win every state in the Deep South. He thought of himself as a Southerner and spoke with a Southern accent and white Southerners voted for him, just as they had for nearly every Democrat up through John F. Kennedy in 1960. And yet in Fannin County, where Kennedy had lost in 1960 by over thirty percentage points, Carter won in 1976 by twelve and a half. The residents of Fannin County and the rest of Georgian Appalachia had roundly rejected the Southerners’ candidate in 1960, but they gave resounding margins to the Southerners’ candidate in 1976. Something had changed between 1960 and 1976 to make the hill folk into Southerners.

To examine further what might have led this to be the case, we can consider linguistics. The notions of identity and language are basically inseparable. A magazine writer crafting a feature story about the South might understand that to mean that the unique Southern dialect is as old as the South itself, but in fact Southern English (like all varieties of English) has undergone great change over the past century. The most significant trend has been homogenization, as the many English varieties of the South have gradually converged toward a white Southern standard.

Walt Wolfram’s essay “Enclave dialect communities in the South” sets out the conditions for the formation of what are termed “dialect enclaves”: isolated areas whose inhabitants use a dialect distinct from their neighbors. One can easily see why mountain and island communities in the South would have become dialect enclaves. Given that the primary criterion is geographic isolation, one can also imagine why speech in the South rapidly changed after World War II, as improved transportation infrastructure and mass communications meant that rural southrons were exposed to the speech of outsiders as never before.

An illustrative example of the sort of linguistic homogenization that took place in the American South in the second half of the twentieth century is the topic of rhoticity: whether the letter “r” is pronounced. Rhotic dialects, like standard American English, pronounce the letter “r” in all cases; non-rhotic dialects, like standard Australian English, do not pronounce the letter “r” after a vowel unless another vowel follows. Prior to the advent of television, the South had a mixture of rhotic and non-rhotic accents. When non-Southerners imagine a stereotypical Southern accent, it is probably non-rhotic. Jimmy Carter spoke with a non-rhotic Southern accent. Yet the white Southern politicians of today who were children during the Carter presidency uniformly use rhotic accents. Non-rhoticity is effectively dead among white Southerners. Was it just exposure to standard American accents that caused this shift?

The most provocative hypothesis comes in A Handbook of the Varieties of English from Erik R. Thomas, who suggests that this change has to do with the American civil rights movement. During the peak of that movement, much of which coincided with the 1960 to 1976 period being discussed in this post, white Southerners were constantly exposed to black people on TV, most of whom spoke with non-rhotic accents. The particularly visible racial tensions of the period meant that white Southerners had an obvious reason to avoid linguistic features that seemed black, and so non-rhotic accents rapidly died out among white speakers. One way to conceptualize this change is to think of it as a shift from class-based or local identity, where rhoticity or the lack thereof served to distinguish poorer or inland speakers from richer or coastal speakers, to general white Southern identity, where rhoticity served to distinguish white speakers from black speakers and to mark them as white Southerners foremost.

It doesn’t take any great deductive leap to imagine how this might have affected voting patterns for white Southerners in historically Republican areas. Whereas before they had identified with their ancestors’ unionism in opposition to other white Southerners, perhaps the advent of television bringing the whole world to their homes made some people think of their identity in broader terms. When Jimmy Carter was first nominated for governor, he lost Fannin County by thirty-five points even as he won the statewide election by nearly twenty points. But when he was running in a nationwide election in 1976, perhaps many voters in Fannin County looked at Jimmy Carter, a Southerner and a Georgian, and decided that their own identities as Southerners and Georgians were now important enough to override their old identities as unionists and Republicans, and so Carter won Fannin County just as he won the other 158 counties in the state.

What can we take away from this? My conclusion would be that identities are not static. Sometimes identities create circumstances, but they can respond to circumstances, too. They persist because they offer something useful to the people who hold them. When communications improved through the radio and television, circumstances changed, and identities broadened with them. With the Internet continuing to remove geographic barriers to communication, we should expect regional identities to be further homogenized.

A buzzword you see all the time now is “Southernization,” which might mean that American politics has become more like Southern politics, but which in its more vulgar form seems to be used to mean that all rural areas are the South now. In either case, it’s not a term I like, because it unnecessarily exoticizes the South, but it’s not hard to see why people feel the need to use it. There’s a sense that something has changed. There’s less regional diversity than ever, as every place seems to fold into one of a few pre-set stereotypes. This process isn’t new, and there’s no reason to think that it will slow down.

All that being said, none of this is predetermined. It’s a lot less interesting now that you can no longer look at an election map of the South and pick out which places supported the United States in the Civil War, but it’s very interesting that that process took far northern Georgia from Republican to Democratic for a moment and back to Republican again. Everyone in the world will have to keep adapting to the world getting smaller, but none of us can say how it’ll happen.

The Silent Majority

Satyajit Ray made dozens of movies in Bengali, his (and my) native language, but he made only one movie in Hindustani: The Chess Players, a story of the waning Muslim elite in Oudh in north India on the eve of 1857. Not only was The Chess Players the only film that Ray made in Hindustani, but it was also the only film that Ray made about Muslims. In all his Bengali movies, Hindus are the main characters, and this even though during his life most Bengali people were Muslim (as they are now). Much of The World of Opu, one of his most famous pictures, is set in modern-day Bangladesh in the hinterland of Khulna, which is mostly Muslim now as it was then, but the world depicted is entirely Hindu.

My point here is not that Satyajit Ray was a bigot or myopic or anything like that. Anyone who learns anything about Bengali high culture is struck by its overwhelmingly Hindu character. The famous Bengal Renaissance of the nineteenth century was a basically Hindu phenomenon: the well-known writers of that time (Chatterjee, Dutt, Tagore, and so on) were nearly all the products of Hindu families. Similarly, of the influential Bengali-language filmmakers of the twentieth century (Ray, Sen, Ghatak, Sinha), not one was Muslim, and only Ghatak made a notable film about the indigenous Muslim population of Bengal. By contrast, for as long as there has been a Hindustani film industry in Bombay, it has been filled with Muslims. In the rest of north India, not just political power but also the arts have historically been associated with Muslims, the class ruling over the Hindu peasantry. In Bengal, the situation is reversed: culture has always been the province of Hindus and not the rustic Muslims. Small wonder, then, that when Satyajit Ray decided to make a Hindustani story it was a Muslim story.

The culturally marginal position of Bengali Muslims warrants further exploration. I have heard numerous stories from Bengali Muslims in this country about talking to an Indian who expresses surprise that the Bengali does not speak Urdu, since that is supposedly the Muslim language. That is true in much of India (the most prominent example here is Hyderabad, which is surrounded by Telugu-speaking country but where the large Muslim population uses Urdu), but it is certainly not true for Bengalis, who make up the second-largest ethnic group of Muslims in the world behind only Arabs. Perhaps as evidence of the extent to which Bengali letters were historically dominated by the Hindu minority, the Muslims of Bengal are essentially the only group of Muslims in the world never to have used the Arabic script to write their language. Clearly in many ways this is an unusual group.

The question of why Bengali Muslims occupy the cultural position that they do is inextricable from the question of why the population of eastern Bengal, so far from other Muslim lands, came to have an overwhelming Muslim majority. The best treatment of this that I have found is in The Rise of Islam and the Bengal Frontier, 1204–1760 by Richard M. Eaton. I first read this book years ago, and it was certainly the best book that I read for pleasure in college. Essentially, the book’s thesis is that the eastern part of Bengal was economically and politically marginal uncleared jungle until the late sixteenth century, when the main outflow of the Ganges River shifted from the Hooghly in the west to the Padma in the east at about the same time that the Mughal state in Delhi conquered Bengal and ended the independence of its local Muslim rulers, who had governed a non-Muslim peasantry much as elsewhere in north India. The effect of these two changes was to suddenly make eastern Bengal an area of great importance to a large and economically powerful state. (The more I read, the more it seems that everything in the world is about hydrography.) This in turn led to a vast campaign of clearing the forests and introducing settled agriculture to eastern Bengal; for the inhabitants of the country, the dramatic lifestyle changes that followed included acceptance of the Islamic God. In western Bengal, where agriculture was not new, the Hindu peasantry was generally untouched.

In the book’s third chapter, Eaton relates a telling anecdote:

With British activity centered on Calcutta, in the predominantly Hindu southwest, colonial officials throughout most of the nineteenth century perceived Bengal’s eastern districts as a vast and rather remote hinterland, with whose cultural profile they were largely unfamiliar. They were consequently astonished when the first official census of the province, that of 1872, showed Muslims totaling 70 percent and more in the Chittagong, Noakhali, Pabna, and Rajshahi districts, and over 80 percent in Bogra.

The religious structure of Bengal, with Islam concentrated in the peasantry rather than in the cities, was so different from the rest of India that the governing authorities had no reason to expect it until they actually counted the population and found out. It is difficult to avoid characterizing Islam as an urban religion: it was first revealed neither to peasants nor to landholders but rather to the mercantile cities of Mecca and Medina, and the pattern in the Islamic world has generally been for Islam to take hold first in cities (often cities created by the new Muslim rulers) and only gradually radiate out to the countryside. This was generally the pattern in India, and prior to the Mughal conquest this was the case in the independent Bengali sultanate, whose Muslim population was concentrated in its cities. Eastern Bengal does not follow that model, its Islamic identity being fundamentally rural in character and having been established through a connection with agriculture. Eaton points out that in at least one other place, Islam was disseminated in much the same way as in eastern Bengal: Java, whose indigenous Islam is best known to me through its depiction by Clifford Geertz in Islam Observed.

In both Java and Bengal, we see first the Islamization of existing indigenous religious practices, followed by a very long process of religious “purification” driven by the twin forces of education and urbanization: the newly educated discover the ways that their religious practices differ from Islamic orthodoxy, while those who remove to the city have no means to employ their traditional religious practices and have nothing to replace them with but orthodox Islam. This is the process by which the rural Muslim populace of eastern Bengal, so foreign to other Indians, transitions from syncretism to good Islam.

Ray’s film Devi considers an equivalent issue in a Hindu context: the conflict between the indigenous Hinduism of the village and the rationalistic cosmology of Calcutta. I am not aware of any Bengali fiction exploring this from a Muslim perspective, which is indicative of the paucity of Muslim voices in Bengali high culture (or at least of my ignorance of those that do exist). I imagine that it is not a topic about which Calcutta Hindus would have much reason to think. Perhaps a suitably creative Bengali Muslim will tackle this someday.

Welcome to the Historically Black Parade

The first home football game during my time as an undergraduate at the University of Maryland came against the College of William & Mary on September 1, 2012. The weather was 90 degrees and humid and the football on display was awful. Maryland finally scored in the fourth quarter to defeat the overmatched William & Mary squad by a score of seven to six and the game ended with the entire freshman class turned off college football forever.

In a literal sense, this was an unfair matchup. Maryland, as a Division I FBS school, had 85 players on scholarship on its roster. William & Mary, a member of Division I FCS, was only allowed to issue 63 scholarships. When Maryland scheduled a lower-level team, the best-case scenario for the game was an uncompetitive blowout that denied fans the chance to see much meaningful action. That the game against that lower-level team was actually competitive turned out to be an embarrassment. It was impossible for the game between Maryland and William & Mary to turn out to be a good game that fans would enjoy.

These games—“buy games,” where a university with a prominent football program pays a school with a lower-level football program six or seven figures to come play a game with the understanding that the visiting team will suffer a lopsided defeat—are widely detested by fans because they are boring, but nevertheless they persist. The advantage to the contracting schools is that buy games provide additional inventory to sell to season ticket-holders and (more importantly) to sell to television networks. A little over fifteen years ago, the number of football games that schools were allowed to schedule each year was raised from eleven to twelve. In general, this has been taken by schools as an additional opportunity to schedule a bad game.

Buy games have some proponents: their primary argument is that they provide much-needed funding to poorer schools, although it remains an open question why there’s no other way to fund non-flagship schools than through bad football games. For the most part, though, there seems to be a general recognition that there’s something unsavory about scheduling an opponent that is prohibited by rule from offering the same number of scholarships. The Big Ten Conference acted on this in 2013 when it banned its members from scheduling lower-level football opponents in the future, but that prohibition never meaningfully took effect and was rescinded in 2017, the allure of a seventy-point victory over Idaho having proven too much to resist for the conference’s leaders.

At the time of writing, there remain three major college football programs that have never played a game against a lower-level opponent: UCLA, Southern California, and Notre Dame. This fact is a point of pride for fans of all three schools, and that pride in turn is an inconvenience for administrators who would like very much to schedule a free win against an undermanned opponent. Fortunately, administrators at two of these three schools have seized upon an ingenious solution, and so UCLA and Notre Dame will both soon remove themselves from this list: UCLA faces Alabama State in September of this year and Notre Dame will play against Tennessee State in September 2023. Both Alabama State and Tennessee State are historically black.

In recent years, historically black schools have become an emotional touchstone for Democrats. Contemporary liberals are often uncomfortable engaging with issues that are not framed in racial terms (this is why you see climate change framed as a racial justice issue on the grounds that it disproportionately affects “people of color”), and so it is convenient to use historically black schools synecdochally to stand in for the education delivered to black students as a whole, even though less than ten percent of black students are enrolled at historically black schools. The political utility of these schools can be seen in the American Jobs Plan that the Biden administration announced in 2021: among its proposals was a commitment of $40 billion “to upgrade research infrastructure in laboratories across the country,” of which half would be allocated to designated minority-serving institutions, referring primarily to historically black colleges and universities.

For UCLA and Notre Dame, scheduling a historically black university is a masterstroke. Administrators get the buy game they want to schedule, and critics have to think twice lest their opposition to playing against Alabama State or Tennessee State be seen as racially insensitive. After all, it would surely be inappropriate for any blanket prohibition to result in a ban on historically black schools. Presumably, the deed having been done, UCLA and Notre Dame will no longer have to worry about being on that list with Southern California and will be free to schedule the likes of Sacramento State and Eastern Kentucky to their hearts’ content.

The term that comes to mind here is pinkwashing: the process by which a corporation shields itself from criticism by identifying itself either with breast cancer awareness (and, by extension, with respecting women) or with the gay community. I haven’t yet encountered “blackwashing,” and maybe that’s too politically dangerous a term to use, but what these universities are doing is an exceptionally cynical version of that. If it is really important on an institutional level for the University of California, Los Angeles or the University of Notre Dame to be seen uplifting the black communities of Alabama and Tennessee, there is any number of things that those institutions could be doing. Scheduling a terrible football game is not one of them.

Perhaps it’s not surprising that the athletic program of a state university in California would use emotional messaging targeted at liberals to launder its reputation, but it’s somewhat more surprising in the case of Notre Dame. During his time as president of the university, Fr. John I. Jenkins has worked to keep the University of Notre Dame open to right-wing politics, most prominently by his attendance a year and a half ago at the White House superspreader event celebrating the nomination of Notre Dame alumna Amy Coney Barrett to the Supreme Court. It’s not obvious that what is emotionally appealing to partisan Democrats would be emotionally appealing to Notre Dame.

The underlying principles here seem to be the same that I’ve discussed before: messaging that is meant to evoke liberal values plays well for corporations. It’s all the better when it’s a topic like historically black colleges and universities that generates a strong positive response in liberals but no corresponding negative response in conservatives. The games will go ahead, the universities will get some points for their commitment to racial justice and black education, and everyone will win except for fans of good football.

A New New England

I was raised in the West, which meant that my conception of physical geography was always oriented around the idea of wilderness. I grew up in the shadow of the Santa Cruz Mountains; the elementary school next door was named after John Muir. In fifth and again in eighth grade, my whole class decamped to the Sierra Nevada for a week to be in nature. The nature presented to us may have been a sanitized and friendly version of the real thing, but we were always made to know that the wilderness was out there, just beyond the lights of our camp.

More generally, the national park system is integral to Americans’ conception of themselves. The outdoors are perhaps the least politically controversial marker of America today, and Theodore Roosevelt is still a national hero for having preserved them. We are led to believe that our wild places are untouched by man, and that this distinguishes us in the New World from old Europe, and particularly from England, the source of most of our cultural inheritance.

When I first came across descriptions of the geography of England when I was very young, I assumed that the ancient hedges being described were some sort of natural feature that happened to share a name with the hedges I would see in the courtyards of apartment complexes. It was startling to discover subsequently that in fact these were the same sort of hedges, but they had been planted by people the better part of a millennium prior. In that way I came to understand what seemed to me to be the defining characteristic of the physical geography of England, which is that practically every inch of land had been directly altered by human activity.

Contemplating a few months ago the way that physical geography affected the life of Joseph Smith reminded me of the one great exception to this opposition between England and America. The great forests of northern New England, where the prophet was born, are nearly all second-growth forests. The land was logged and in many cases cleared for agriculture, and families like the Smiths moved in; then the people were gone, and by the middle of the twentieth century the fields of crops and pasture had become forest again. The woods the original white settlers of New England encountered were hardly devoid of human interaction, but now the forest is practically wholly manmade. The old stone walls, the sort of examples of centuries-past human manipulation of the environment that would seem so normal in England but so alien in most of the United States, stand in the forest marking where farms used to be.

This story is thoroughly conventional, but it feels curiously untold. As of the 1950 census, the county where Joseph Smith was born (Windsor County, Vermont) had almost exactly the same population as it did in 1830. Other examples of mass depopulation in American history linger in the popular imagination: the removal of the Indians, the Great Migration, the Dust Bowl. The depopulation of rural New England took place in the context of a highly literate society, but it has left no real traces in our cultural memory. It doesn’t even have a name.

One ready point of comparison here is the contemporary emptying out of the old frontier. Many of the rural counties of the Great Plains have less than half of the population they supported a century ago. Unlike the Okies, but like the New Englanders who left their farms, the people who leave these places today will not be remembered as a recognizable cultural group. The difference now is that the exodus from the land is not because it is agriculturally unproductive. When you fly over New England, you see second-growth forest, just as you might flying over Europe. When you fly over the Great Plains, on the other hand, you see the most American landscape of all: industrial agriculture.

Big Characters

Last month I discussed the tendency for people unfamiliar with San Francisco to misunderstand and mischaracterize the city and its people. In the last week we got to see another example of this, as voters recalled every member of the San Francisco Unified School District board that they were legally able to.

Belatedly seizing the cultural opportunity presented by the widespread renaming of schools named for slaveowners and segregationists, the school board voted in January of last year to rename a third of the schools in the city on the grounds that their namesakes were inappropriate. In both its size and the range of names targeted, the proposed renaming went far beyond decisions to rename schools anywhere else and largely relied on ludicrous historical and etymological logic to do so. The story made the rounds of national media before the board reversed itself and fell off the front pages.

Naturally, this was the last time that most Americans thought about the San Francisco school board. When the board was recalled, the Washington Post headline read “San Francisco recalls school board members seen as too focused on racial justice.” This was certainly a convenient framing, since it appealed to a national issue of interest to a national audience. Gabriela López, one of the recalled officeholders, posted on Twitter an image of that headline with the message that “white supremacists are enjoying this.” The framing of her recall as a matter of “racial justice” was straightforwardly useful for her, and it allowed for still more coverage in non-local media about the racial issue roiling liberal San Francisco. The hazy idea of “San Francisco” provides a useful backdrop for a mutually beneficial feedback loop.

In reality the recall was not precipitated by any schools being renamed but by the board’s refusal to open schools. The recalled members of the school board actively opposed the efforts of the district superintendent to plan to reopen schools and came up with no alternative, leading to San Francisco public schools remaining closed until August 2021. When the responsible board members were questioned by the San Francisco Chronicle prior to the recall election, they were impenitent regarding delays in school opening: López cited fear in “harder hit communities” to justify her opposition while Alison Collins attacked unnamed “community members who wanted us to open without safety measures in place.” As the Chronicle editorial board noted in its endorsement of the recall effort, Collins had previously accused the superintendent’s proposal for a reopening plan of “recreating white supremacy” and expressed opposition to any reopening plan that was not “authentic” in terms of its engagement with “Black and brown people.”

These quotes reminded me of an article last summer in Los Angeles magazine on the head of the Los Angeles teachers’ union, Cecily Myart-Cruz. The money quote from that article, the one that headed dozens of right-wing opinion pieces, is as follows:

“There is no such thing as learning loss,” she responds when asked how her insistence on keeping L.A.’s schools mostly locked down over the last year and a half may have impacted the city’s 600,000 kindergarten through 12th-grade students. “Our kids didn’t lose anything. It’s OK that our babies may not have learned all their times tables. They learned resilience. They learned survival. They learned critical-thinking skills. They know the difference between a riot and a protest. They know the words insurrection and coup.”

These lines inevitably call to mind the Cultural Revolution. The elevation of making revolution over making multiplication is Mao Tse-tung Thought without the communism. It would be a mistake to equate the relatively measured words of the San Francisco school board members too closely with this rhetoric, and the point is not that there are Maoist cadres lurking everywhere, but it is nonetheless striking to see in multiple places this apparent notion that conventional education perpetuates racial inequities. In her day job, Alison Collins works in real estate and is married to a prominent developer. In her political role it seems as though she and her ilk have embraced a sort of vulgar Third Worldism, with racial determinism replacing economic determinism.

If these are serious attacks on education, they are fundamentally different from Republican attacks on education. Republicans may be ideologically opposed to public education as it exists today, but they make no attempt to hide that. Republicans announce that they will gut public education, and when they are elected they proceed to do just that. There is nothing particularly underhanded or secretive about the Republican agenda.

By contrast, none of the members of the San Francisco school board were elected promising that they would keep schools closed. For that matter, none of them intended when they were elected in 2018 to close schools; it was simply a matter of opportunity. Similarly, it is inconceivable that a majority of Los Angeles teachers would rather their students learn revolutionary political vocabulary than mathematics. The Los Angeles magazine article noted that turnout in the last union leadership election was 16 percent.

Ideologically committed minorities do well in elections that attract minimal interest and minimal turnout, especially now that local news and the concept of local community have both died, and for better or for worse in this country we elect nearly everything. For every San Francisco there are a hundred stories of rural or suburban school boards that fail to understand freedom of speech, freedom of religion, or both. If an organized ideological concern were to embark on a serious entryist campaign, it would have all the youth of America for the taking.

SF and Fantasy

For many years I’ve had the vague intention to someday see Los Angeles Plays Itself. I should be interested in Los Angeles Plays Itself, because I am interested in mass culture and I am interested in Los Angeles (a city I like more the older I get) and I am interested in geography and the ways people relate to the world around them. I’ve just never blocked out three hours of my life to watch a documentary, and so I’ve never watched it. Instead I’m left with something I picked up the first time I read a review of the film a decade ago: an argument from the film about how the term “LA” for the city was invented by Hollywood. The contention, at least as I understand it, is that “Los Angeles” is the real city where real people live and die, while “LA” is the abstraction depicted in the movies. Maybe the way the film makes the point is more nuanced and maybe I’m mischaracterizing it entirely, but in any case it’s an appealing notion that the names of things could organize our thoughts so cleanly. One doesn’t have to believe in the Sapir–Whorf hypothesis to find it a useful means of categorization.

If you had asked me ten years ago whether anyone said “SF” for San Francisco in the same way that people say “LA” for Los Angeles, I would probably have responded that “SF” is the monogram that the Giants and the 49ers use, not a name. Nowadays, though, I sometimes do hear or see “SF” for San Francisco. To me it sounds like nails on a chalkboard. In fact it has turned out to be a great shibboleth, because invariably the culprits are the same sort of people: tech industry workers who grew up somewhere else, live in preposterously overpriced apartments in the city, and used to take the corporate bus every day to their jobs in the Valley. If they work at Facebook (as many of them do), they say “Menlo” for Menlo Park just like they say “SF” for San Francisco, and the result is equally strange to the ears of any local.

I’ve spent the vast majority of my life in the suburbs of either San Francisco or Washington, and I like to tell people that the two cities are unpleasant in exactly the same way: whatever local culture they might have, each city has exactly one industry and both are full of young people who moved to the city for that industry and talk about nothing but their jobs. It’s not exactly a fair charge in either case, but people enjoy the line because it reflects an underlying reality. If you want, you can find a whole industry of thinkpieces about how tech money has driven out all of San Francisco’s artistic character. One could argue that nearly all of the characters from Slouching Towards Bethlehem would be priced out of the San Francisco of today. The SF of today is peopled by an army of identical full stack developers.

None of this really feels quite right. If the tech industry destroyed the Bay Area, it did so long ago. I grew up in a largely Asian suburb whose unique character was the product of the fact that nearly everyone I knew growing up had parents who worked in the tech industry. That San Francisco is expensive and not bohemian is, more than anything else, because there was never any meaningful flight of capital out of the city in the ’60s and ’70s; San Francisco didn’t have much of an inner city to gentrify. So much of what people believe about San Francisco isn’t true. From people who don’t know the area well, I often hear a fiction perpetuated by shows like Silicon Valley: that the city or the region is overwhelmingly white. In fact the Bay Area is just 36% non-Hispanic white. Perhaps no metropolitan area in the country is less understood.

Still, there really are a lot of tech workers in San Francisco, and they love talking about the SF that they inhabit, and their presence has me thinking about what “SF” might be. If “LA” is the image of Los Angeles that the city’s hegemonic industry broadcasts to the world, then perhaps “SF” is the same for San Francisco. When Republicans inveigh against Big Tech, as they have started to do over the past few years, they are assailing SF just as they have been launching polemics against LA for a century. SF is one of the greatest concentrations of wealth in the world, and it seems likely that that concentration will only grow because SF controls so much of the quaternary sector of the economy. The products that SF sells are everywhere and in everything. Maybe the term “SF” will never spread beyond those who are enmeshed in its machinery, but it’s fitting that they have their own term for the world they inhabit.

Whose Merchants Are Prophets

Recently I took part in setting up a book group dedicated to discussing The Venture of Islam by Marshall Hodgson. Islam is of course a subject of the utmost importance and interest to me, and Hodgson characterizes Islam as a project imbued with moral force and seeks to contextualize that project within world history. It’s an exciting book, and I look forward to discussing a new chapter each week.

Yesterday we discussed the first chapter of the book, which deals in large part with the conditions that made the Hejaz suitable for the coming of Islam. Hodgson’s thesis here (which he suggests is drawn heavily from the work of Max Weber, although I cannot confirm this as I have not read any of Weber’s work) is based on geographical determinism. Hodgson draws a distinction between well-watered areas producing abundant agricultural surplus and arid regions where agriculture is more marginal; the former create great wealth that accrues to the holders of the land, while in the latter the land yields less surplus and aristocratic landholders are consequently less rich relative to merchants.

Consequently, while well-watered regions tend to foster religion that emphasizes hierarchy and aristocratic manly virtues like strength and courage, religion in arid regions instead focuses on universal values like honesty and fairness, values that are conducive to the efficient functioning of the marketplace. The area of the world that produced the Abrahamic religions is unusual within the populated Old World in terms of the vast size of its arid portions, and Hodgson contrasts the arid Abrahamic religions with aristocratic Zoroastrianism, the product of an agricultural land.

This thesis also fits with an observation I found striking in Chris Wickham’s Framing the Early Middle Ages: of all the states covered in the vast geographic and temporal range of that book, only in the first centuries of the caliphate was there a state whose army (and, by extension, whose government) was paid entirely in money and not at all in land. Islamic society is a commercial society. This observation would not be surprising to any contemporary Muslim, since everyone knows that the Koran was revealed to a member of a merchant tribe.

Another point in the book that attracted comment during our discussion was the appearance, in a timeline of the region’s religious development, of the collapse of the Marib dam in the Yemen. The pre-Islamic Yemen occupies a central place in the Islamic historical imagination, since it was from the Yemen that the would-be conqueror Abraha sent the invading force that was repelled in the Year of the Elephant, the year the Prophet was born. Hodgson brings up the collapse of the Marib dam because it resulted in the end of a settled agricultural society in southern Arabia and the triumph of mercantile life. It also resulted in large-scale migration out of southern Arabia, and many of those migrants must have passed through Mecca while bound for the Mediterranean.

So, then, we have an agriculturally marginal but commercially central area adjacent to recent large-scale migration. The natural comparison made in our discussion was to the burned-over district of upstate New York in the first half of the nineteenth century. That district was so named because of its propensity for religious enthusiasm, but it is best known for giving rise to the second-most famous man to be known to his followers simply as “the Prophet”: Joseph Smith.

As fortune would have it, a few months ago I read a biography of Joseph Smith: Rough Stone Rolling by Richard Bushman. The book is worth reading; the author is a Mormon and is well aware of the suspicion that a Mormon writing Mormon history might attract. The product is a treatment that seems fair, although it is perhaps less skeptical of Joseph Smith’s personal experiences than a non-Mormon might be.

In any case, one characteristic of the world from which Joseph Smith came, the northeastern United States in the early nineteenth century, was a constant shortage of money. Because there was not enough physical currency to meet the commercial needs of the public, whenever a family moved to a new place (as the Smith family did frequently), they would need to untangle a web of IOUs before they were financially free to leave. Even though they were primarily farmers and not merchants, the Smiths were nonetheless constantly involved in commercial schemes. This is perhaps evident in Joseph Smith’s later life, as he established his city of Nauvoo at a point on the Mississippi particularly conducive to trade and constantly endeavored to turn his Nauvoo House into a commercially viable establishment.

In Hodgson’s typology, the life of Joseph Smith comes at or slightly after the end of the Agrarian Age, when agriculture was the primary source of wealth in society. In the subsequent period, the contemporary Technical Age, conditions changed completely. It seems clear, though, that agriculture remained preeminent at least at the time and place of Joseph Smith’s birth. During his childhood, agricultural difficulties forced his family westward as part of the great depopulation of northern New England that forever changed not just the social character but the physical geography of that part of the country. They ended up in a part of New York that was a central connection point for North America, particularly after the construction of the Erie Canal.

The similarities here are obvious, which is why people have been comparing Mormonism and Islam for as long as Mormonism has existed. More to the point, though, it is hardly possible to imagine a Joseph Smith arising from a landed society like that of Mississippi. Mormonism, as previously mentioned, was hardly unique in the land where it arose; there were myriad new religious movements, all responding in their own way to the conditions of their common society. Clearly some societies are more conducive to religious development than others. It seems strange to think that religion could be conditioned by geography in this way, and that it might be is a testament to the power of physical conditions in organizing human lives.
