Monday, February 29, 2016

Day 196: Gideon's Spies: The Secret History of the Mossad



Matters had looked very different on that morning in late March 1985 when Ari Ben-Menashe had caught the early-morning British Airways flight from Tel Aviv to London. Eating his kosher airline breakfast, he reflected that life had never been so good. He was not only making “real money,” but had learned a great deal at the elbow of David Kimche as they trawled through the Byzantine world of selling arms to Iran. Along the way, he had also furthered his education in the continuous interplay between Israel’s politicians and its intelligence chiefs. For Ben-Menashe, “compared to my former colleagues, the average arms dealer was a choirboy.” He had identified the problem: the aftereffects of Israel’s Lebanon adventure, from which it had finally withdrawn, battered and demoralized. Anxious to regain prestige, the politicians gave the intelligence community an even freer hand in how it waged pitiless war against the PLO, whom they saw as the cause of all Israel’s problems. The result was a succession of scandals where suspected terrorists and even their families were brutalized and murdered in cold blood. Yitzhak Hofi, the former head of Mossad, had sat on a government commission, set up after intense public pressure, to investigate the brutality. It concluded that intelligence agents had consistently lied to the court about how they obtained confessions: the methods used had too often been gross. The committee had called for “proper procedures” to be followed.

But Ben-Menashe knew the torture had continued: “It was good to be away from such awful matters.” He regarded what he was doing, providing arms for Iranians to kill untold numbers of Iraqis, as “different.” Nor did the plight of the Beirut hostages, the very reason for his wheeling and dealing, unduly concern him. The bottom line was the money he was making. Even with Kimche’s departure, Ben-Menashe still believed the merry-go-round he was riding would only stop when he decided—and he would step off a multimillionaire. By his count, ORA’s business was now worth “hundreds of millions”—most of it being generated through the house in the London suburb from where Nicholas Davies ran ORA’s international operations.

Ben-Menashe knew Davies had continued to amass his own fortune, far in excess of the sixty-five-thousand-pound yearly salary he was paid as foreign editor of the Daily Mirror; Davies’s commission from ORA was almost always as much in a month. Ben-Menashe didn’t mind if the newspaperman took “an extra slice of the cake; it left plenty to go around. It was still champagne time.”
...
Admoni’s first call was to Prime Minister Shimon Peres, who ordered every step be taken to “secure the situation.” With those words Peres authorized an operation that once more demonstrated the ruthless efficiency of Mossad.

Admoni’s staff quickly confirmed Vanunu had worked at Dimona from February 1977 until November 1985. He had been assigned to Machon-Two, one of the most secret of all the plant’s ten production units. The windowless concrete building externally resembled a warehouse. But its walls were thick enough to block the most powerful of satellite camera lenses from penetrating. Inside the bunkerlike structure, a system of false walls led to the elevators that descended through six levels to where the nuclear weapons were manufactured.

Vanunu’s security clearance was sufficient to gain unchallenged access to every corner of Machon-Two. His special security pass—number 520—coupled with his signature on an Israeli Official Secrets Act document ensured no one ever challenged him as he went about his duties as a menahil, a controller on the night shift.

A stunned Admoni was told that almost certainly for some months, Vanunu somehow had secretly photographed the layout of Machon-Two: the control panels, the glove boxes, the nuclear bomb-building machinery. Evidence suggested he had stored his films in his clothes locker, and smuggled them out of what was supposedly the most secure place in Israel.

Admoni demanded to know how Vanunu had achieved all this—and perhaps more. Supposing he had already shown his material to the CIA? Or the Russians? The British or even the Chinese? The damage would be incalculable. Israel would be exposed as a liar before the world—a liar with the capability of destroying a very large part of it. Who was Vanunu? Whom could he be working for?

Answers were soon forthcoming. Vanunu was a Moroccan Jew, born on October 13, 1954, in Marrakech, where his parents were modest shopkeepers. In 1963, when anti-Semitism, never far from the surface in Morocco, spilled once more into open violence, the family emigrated to Israel, settling in the Negev Desert town of Beersheba. Mordechai led an uneventful life as a teenager. Along with every other young person, when his time came he was conscripted into the Israeli army. He was already beginning to lose his hair, making him appear older than his nineteen years. He reached the rank of first sergeant in a minesweeping unit stationed on the Golan Heights. After military service he entered Ramat Aviv University in Tel Aviv. Having failed two exams at the end of his first year in a physics-degree course, he left the campus.

In the summer of 1976 he replied to an advertisement for trainee technicians to work at Dimona. After a lengthy interview with the plant’s security officer he was accepted for training and sent on an intensive course in physics, chemistry, math, and English. He did sufficiently well to finally enter Dimona as a technician in February 1977.

Vanunu had been made redundant in November 1985. In his security file at Dimona it was noted that he had displayed “left-wing and pro-Arab beliefs.” Vanunu left Israel for Australia, arriving in Sydney in May of the following year. Somewhere along his journey, which had followed a well-trodden path by young Israelis through the Far East, Vanunu had renounced his once-strong Jewish faith to become a Christian. The picture emerging from a dozen sources for Admoni to consider was of a physically unprepossessing young man who appeared to be the classic loner: he had made no real friends at Dimona; he had no girlfriends; he spent his time at home reading books on philosophy and politics. Mossad psychologists told Admoni a man like that could be foolhardy, with a warped sense of values and often disillusioned. That kind of personality could be dangerously unpredictable.

~~Gideon's Spies: The Secret History of the Mossad -by- Gordon Thomas

Sunday, February 28, 2016

Day 195: Cosmos



The conventional bombs of World War II were called blockbusters. Filled with twenty tons of TNT, they could destroy a city block. All the bombs dropped on all the cities in World War II amounted to some two million tons, two megatons, of TNT—Coventry and Rotterdam, Dresden and Tokyo, all the death that rained from the skies between 1939 and 1945: a hundred thousand blockbusters, two megatons. By the late twentieth century, two megatons was the energy released in the explosion of a single more or less humdrum thermonuclear bomb: one bomb with the destructive force of the Second World War. But there are tens of thousands of nuclear weapons. By the ninth decade of the twentieth century the strategic missile and bomber forces of the Soviet Union and the United States were aiming warheads at over 15,000 designated targets. No place on the planet was safe. The energy contained in these weapons, genies of death patiently awaiting the rubbing of the lamps, was far more than 10,000 megatons—but with the destruction concentrated efficiently, not over six years but over a few hours, a blockbuster for every family on the planet, a World War II every second for the length of a lazy afternoon.

The immediate causes of death from nuclear attack are the blast wave, which can flatten heavily reinforced buildings many kilometers away, the firestorm, the gamma rays and the neutrons, which effectively fry the insides of passersby. A school girl who survived the American nuclear attack on Hiroshima, the event that ended the Second World War, wrote this first-hand account:

Through a darkness like the bottom of hell, I could hear the voices of the other students calling for their mothers. And at the base of the bridge, inside a big cistern that had been dug out there, was a mother weeping, holding above her head a naked baby that was burned bright red all over its body. And another mother was crying and sobbing as she gave her burned breast to her baby. In the cistern the students stood with only their heads above the water, and their two hands, which they clasped as they imploringly cried and screamed, calling for their parents. But every single person who passed was wounded, all of them, and there was no one, there was no one to turn to for help. And the singed hair on the heads of the people was frizzled and whitish and covered with dust. They did not appear to be human, not creatures of this world.

The Hiroshima explosion, unlike the subsequent Nagasaki explosion, was an air burst high above the surface, so the fallout was insignificant. But on March 1, 1954, a thermonuclear weapons test at Bikini in the Marshall Islands detonated at higher yield than expected. A great radioactive cloud was deposited on the tiny atoll of Rongelap, 150 kilometers away, where the inhabitants likened the explosion to the Sun rising in the West. A few hours later, radioactive ash fell on Rongelap like snow. The average dose received was only about 175 rads, a little less than half the dose needed to kill an average person. Being far from the explosion, not many people died. Of course, the radioactive strontium they ate was concentrated in their bones, and the radioactive iodine was concentrated in their thyroids. Two-thirds of the children and one-third of the adults later developed thyroid abnormalities, growth retardation or malignant tumors. In compensation, the Marshall Islanders received expert medical care.

The yield of the Hiroshima bomb was only thirteen kilotons, the equivalent of thirteen thousand tons of TNT. The Bikini test yield was fifteen megatons. In a full nuclear exchange, in the paroxysm of thermonuclear war, the equivalent of a million Hiroshima bombs would be dropped all over the world. At the Hiroshima death rate of some hundred thousand people killed per equivalent thirteen-kiloton weapon, this would be enough to kill a hundred billion people. But there were less than five billion people on the planet in the late twentieth century. Of course, in such an exchange, not everyone would be killed by the blast and the firestorm, the radiation and the fallout—although fallout does last for a longish time: 90 percent of the strontium 90 will decay in 96 years; 90 percent of the cesium 137, in 100 years; 90 percent of the iodine 131 in only a month.
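
(An aside from the editor of this journal, not from Sagan's text: the arithmetic in the paragraph above can be checked in a few lines. The sketch below uses standard published half-lives for strontium-90, cesium-137, and iodine-131, which are my assumption rather than figures quoted in the excerpt.)

```python
import math

# A hundred thousand deaths per Hiroshima-sized bomb, times a million
# such bombs in a full exchange, gives the "hundred billion" above.
deaths_per_bomb = 100_000
hiroshima_equivalents = 1_000_000
print(f"{deaths_per_bomb * hiroshima_equivalents:,}")  # 100,000,000,000

def time_for_90_percent_decay(half_life):
    """Time for 90% of a radioisotope to decay: half-life * log2(10), about 3.32 half-lives."""
    return half_life * math.log2(10)

# Half-lives are standard reference values (assumed, not taken from the text).
print(round(time_for_90_percent_decay(28.8)))   # strontium-90, in years -> ~96
print(round(time_for_90_percent_decay(30.2)))   # cesium-137, in years  -> ~100
print(round(time_for_90_percent_decay(8.02)))   # iodine-131, in days   -> ~27 ("only a month")
```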

The survivors would witness more subtle consequences of the war. A full nuclear exchange would burn the nitrogen in the upper air, converting it to oxides of nitrogen, which would in turn destroy a significant amount of the ozone in the high atmosphere, admitting an intense dose of solar ultraviolet radiation.* The increased ultraviolet flux would last for years. It would produce skin cancer preferentially in light-skinned people. Much more important, it would affect the ecology of our planet in an unknown way. Ultraviolet light destroys crops. Many microorganisms would be killed; we do not know which ones or how many, or what the consequences might be. The organisms killed might, for all we know, be at the base of a vast ecological pyramid at the top of which totter we.

The dust put into the air in a full nuclear exchange would reflect sunlight and cool the Earth a little. Even a little cooling can have disastrous agricultural consequences. Birds are more easily killed by radiation than insects. Plagues of insects and consequent further agricultural disorders are a likely consequence of nuclear war. There is also another kind of plague to worry about: the plague bacillus is endemic all over the Earth. In the late twentieth century humans did not much die of plague—not because it was absent, but because resistance was high. However, the radiation produced in a nuclear war, among its many other effects, debilitates the body’s immunological system, causing a deterioration of our ability to resist disease. In the longer term, there are mutations, new varieties of microbes and insects, that might cause still further problems for any human survivors of a nuclear holocaust; and perhaps after a while, when there has been enough time for the recessive mutations to recombine and be expressed, new and horrifying varieties of humans. Most of these mutations, when expressed, would be lethal. A few would not. And then there would be other agonies: the loss of loved ones; the legions of the burned, the blind and the mutilated; disease, plague, long-lived radioactive poisons in the air and water; the threat of tumors and stillbirths and malformed children; the absence of medical care; the hopeless sense of a civilization destroyed for nothing; the knowledge that we could have prevented it and did not.

~~Cosmos -by- Carl Sagan

Saturday, February 27, 2016

Day 194: Mahabharata: A Modern Retelling



THE Mahabharata is a text of about 75,000 verses—sometimes rounded off to 100,000—or three million words, some fifteen times the combined length of the Hebrew Bible and the New Testament, or seven times the Iliad and the Odyssey combined, and a hundred times more interesting. More interesting both because its attitude to war is more conflicted and complex than that of the Greek epics and because its attitude to divinity is more conflicted and complex than that of the Jewish and Christian scriptures. It resembles the Homeric epics in many ways (such as the theme of the great war, the style of its poetry, and its heroic characters, several of them fathered by gods), but unlike the Homeric gods, many of the Mahabharata gods were then, and still are, worshipped and revered in holy texts, including parts of the Mahabharata itself. It has remained central to Hindu culture since it was first composed. It is thus “great” (Maha), as its name claims, not only in size but in scope. Hindus from the time of the composition of the Mahabharata to the present moment know the characters in the texts just as Christians and Jews and Muslims, even if they are not religious, know Adam and Eve. To this day, India is called the land of Bharata, and the Mahabharata functions much like a national epic.

The story may have been told in some form as early as 900 BCE; its resemblance to Persian, Scandinavian, Greek, and other Indo-European epic traditions suggests that the core of the tale may reach back to the time when these cultures had not yet dispersed, well before 2000 BCE. But the Mahabharata did not reach its present form until the period from about 300 BCE to 300 CE—or half a millennium; it takes a long time to compose three million words.

The Mahabharata marks the transition from the corpus of Sanskrit texts known as shruti, the unalterable Vedic canon of texts (dated to perhaps 1500 BCE) that the seers “heard” from divine sources, to those known as smriti, the human tradition, constantly revised, the “remembered texts” of human authorship, texts that could be altered. It calls itself “the fifth Veda” (though so do several other texts) and dresses its story in Vedic trappings (such as ostentatious Vedic sacrifices). It looks back to the Vedic age, and may well preserve many memories of that period, and that place, up in the Punjab. The Painted Gray Ware artifacts discovered at sites identified with locations in the Mahabharata may be evidence of the reality of the great Mahabharata war, which is usually supposed to have occurred around 950 BCE. But the text is very much the product of its times, the centuries before and after the turn of the first millennium.

The Mahabharata was retold very differently by all of its many authors in the long line of literary descent. It is so extremely fluid that there is no single Mahabharata; there are hundreds of Mahabharatas, hundreds of different manuscripts and innumerable oral versions (one reason why it is impossible to make an accurate calculation of the number of its verses). The Mahabharata is not confined to a text; the story is there to be picked up and found, salvaged as anonymous treasure from the ocean of story. It has been called “a work in progress,” a literature that “does not belong in a book.” The Mahabharata (1.1.23) describes itself as unlimited in both time and space—eternal and infinite: “Poets have told it before, and are telling it now, and will tell it again. What is here is also found elsewhere, but what is not here is found nowhere else.” And in case you missed that, it is repeated elsewhere and then said yet again in slightly different words toward the end of the epic: “Whatever is here about dharma, profit, pleasure, and release [from the cycle of death and rebirth] is also found elsewhere, but what is not here is found nowhere else . . .” (18.5.38).

The Mahabharata grew and changed in numerous parallel traditions spread over the entire subcontinent of India, constantly retold and rewritten, both in Sanskrit and in vernacular dialects. It grows out of the oral tradition and then grows back into the oral tradition; it flickers back and forth between Sanskrit manuscripts and village storytellers, each adding new gemstones to the old mosaic, constantly reinterpreting it. The loose construction of the text gives it a quasi-novelistic quality, open to new forms as well as new ideas, inviting different ideas to contest one another, to come to blows, in the pages of the text. It seems to me highly unlikely that any single author could have lived long enough to put it all together, but that does not mean that it is a miscellaneous mess with no unified point of view, let alone “the most monstrous chaos,” “the huge and motley pile,” or “gargantuan hodge-podge” and “literary pile-up” that some scholars have accused it of being. European approaches to the Mahabharata often assumed that collators did not know what they were doing and, blindly cutting and pasting, accidentally created a monstrosity.

But the Mahabharata is not the head of a brahmin philosophy accidentally stuck onto a body of non-brahmin folklore, like the heads and bodies of people in several Indian myths, or the mythical beast invoked by Woody Allen, which has the body of a lion and the head of a lion, but not the same lion. True, it was somewhat like an ancient Wikipedia, to which anyone who knew Sanskrit, or who knew someone who knew Sanskrit, could add a bit here, a bit there. But the powerful intertextuality of Hinduism ensured that anyone who added anything to the Mahabharata was well aware of the whole textual tradition behind it and fitted his or her own insight, or story, thoughtfully into the ongoing conversation. However diverse its sources, for several thousand years the tradition has regarded it as a conversation among people who know one another’s views and argue with silent partners. It is a contested text, a brilliantly orchestrated hybrid narrative with no single party line on any subject. It was contested not only within the Hindu tradition, where concepts of dharma were much debated, but also by the rising rival traditions of Buddhism and Jainism. These challenges to the brahmin narrators are reflected in the text at such places as Bhishma’s teachings in Books 12 and 13. But the text has an integrity that the culture supports (in part by attributing it to a single author) and that it is our duty to acknowledge. The contradictions at its heart are not the mistakes of a sloppy editor but enduring cultural dilemmas that no author could ever have resolved.

The great scholar and poet A. K. Ramanujan used to say that no Indian ever hears the Mahabharata for the first time. For centuries Indians heard it in the form of public recitations, or performances of dramatized episodes, or in the explanations of scenes depicted in stone or paint on the sides of temples. More recently, they read it in India’s version of Classic Comics (the Amar Chitra Katha series) or saw it in the hugely successful televised version, based largely on the comic book; the streets of India were empty (or as empty as any street ever is in India) during the broadcast hours on Sunday mornings, from 1988 to 1990. Or they saw various Bollywood versions, or the six-hour film version (1989) of Peter Brook’s nine-hour theatrical adaptation (1985).

~~Mahabharata: A Modern Retelling -by- Carole Satyamurti

Friday, February 26, 2016

Day 193: Hidden Agendas



There is something in journalism called a slow news day. This usually falls on a Sunday or during the holiday period when the authorised sources of information are at rest. Nothing happens then, apart from acts of God and disorder in far-away places. It is generally agreed that the media show cannot go on while the cast is away.

This book is devoted to slow news. In each chapter, the setting changes, from Iraq to the East End of London, from Burma to the docks of Liverpool and the West of Ireland, from Vietnam to Australia and the 'new' South Africa. In all these places, events have occurred that qualify as slow news. Some have been reported, even glimpsed on the evening news, where they are unremembered as part of a moving belt of images 'shot and edited to the rhythms of a Coca-Cola advertisement', wrote one media onlooker, pointing out that the average length of the TV news 'soundbite' in the United States had gone from 42.3 seconds in 1968 to 9.9 seconds.

That is the trend. In American television, a one percentage point fall in the ratings can represent a loss of $100 million a year in advertising. The result is not just 'infotainment', but 'infoadvertising': programmes that 'flow seamlessly into commercials'. This is how commercial television works in Australia, Japan, Italy and many other countries. Britain is not far behind; the ever-diminishing circle of multinational companies that control the media, especially television, take their cue from the brand leader, Rupert Murdoch, who says his role in the 'communications revolution' is that of a 'battering ram'.
...
Diego Garcia is a British colony in the Indian Ocean, from which American bombers patrol the Middle East. There are few places as important to American military planners as this refuelling base between two continents. Who lives there? During President Clinton's attack on Iraq in 1996 a BBC commentator referred to the island as 'uninhabited' and gave no hint of its past. This was understandable, as the true story of Diego Garcia is instructive of times past and of the times we now live in.

Diego Garcia is part of the Chagos Archipelago, which ought to have been granted independence from Britain in 1965 along with Mauritius. However, at the insistence of the United States, the Government of Harold Wilson told the Mauritians they could have their freedom only if they gave up the island. Ignoring a United Nations resolution that called on the British 'to take no action which would dismember the territory of Mauritius and violate its territorial integrity', the British Government did just that, and in the process formed a new colony, the British Indian Ocean Territory. The reason, and its hidden agenda, soon became clear.

In high secrecy, the Foreign Office leased the island to Washington for fifty years, with the option of a twenty-year extension. The British prefer to deny this now, referring to a 'joint defence arrangement'. This is sophistry; today Diego Garcia serves as an American refuelling base and an American nuclear weapons dump. In 1991, President Bush used the island as a base from which to carpet-bomb Iraq. In the same year the Foreign Office told an aggrieved Mauritian government that the island's sovereignty was 'no longer negotiable'.

The Ilois people were the indigenous inhabitants of Diego Garcia. With the militarisation of their island they were given a status rather like that of Australia's Aborigines in the nineteenth century: they were deemed not to exist. Between 1965 and 1973 they were 'removed' from their homes, loaded on to ships and planes and dumped in Mauritius. In 1972, the American Defense Department assured Congress that 'the islands are virtually uninhabited and the erection of the base will cause no indigenous political problems'. When asked about the whereabouts of the native population, a British Ministry of Defence official lied, 'There is nothing in our files about inhabitants or about an evacuation.'

A Minority Rights Group study, which received almost no publicity when it was published in 1985, concluded that Britain expelled the native population 'without any workable re-settlement scheme; left them in poverty; gave them a tiny amount of compensation and later offered more on condition that the islanders renounced their rights ever to return home'. The Ilois were allowed to take with them 'minimum personal possessions, packed into a small crate'. Most ended up in the slums of the Mauritian capital, leading wretched, disaffected lives; the number who have since died from starvation and disease is unknown.

~~Hidden Agendas -by- John Pilger

Thursday, February 25, 2016

Day 192: Muslims and Jews in America



A Few Good Men could well describe more than a famous movie starring Tom Cruise, Demi Moore, and Jack Nicholson about U.S. Marines and their struggle to balance what they experience as the competing demands of fealty to “the Corps” with their obligations as human beings separate from the military. It could also describe, with the addition of a few good women, the struggle of interreligious relations, which all too often is a struggle to balance these same competing demands. The best-known line in the script is voiced when a young attorney demands the truth from Nicholson’s character, Colonel Nathan R. Jessup. Believing that his experience commanding troops in Guantanamo Bay is too complex to be properly understood by his examiner (an irony I will not examine here), Jessup angrily screams back, “You can’t handle the truth!”

In this instant, two things become clear to the viewer. First, however painful it may be, the truth must always come out. The inability to shine a light on the dark corners of the communities we love most ultimately weakens and even destroys them. This is what happened with Colonel Jessup and the Marines under his command. Second, it becomes clear that what one understands as “the truth” is very much rooted in personal experience, something that demands respect. And yet, respect for the personal context that shapes the understanding of truth must not be used as an excuse for less-than-forthright discussions of even the most difficult issues.

Colonel Jessup, fully consumed by his own personal experience and the narrative that both shapes and is shaped by it, is not so different from any one of us. Yes, he is an extreme example of what can happen when our guiding narratives become entirely self-referential, lacking a mechanism to be kept in check. But more importantly, he represents the impulse to cover up those things we do not want to confront. Jessup also embodies the belief that the tough stuff cannot be fully explored with those outside one’s community because not only will such outsiders fail to appreciate “the truth,” they will use any newly acquired information shared by insiders to harm those who were brave enough to risk divulging it.

With only a few names changed, this entire discussion could easily describe the state of interreligious relations in much of the United States, especially between Jews and Muslims. This essay explores the current state of affairs between these two communities—our two communities— and how we might improve them. I should point out that to even speak of the Jewish or Muslim community, especially in contemporary America, is a misnomer. There is no single Jewish or Muslim community in the United States or elsewhere. But there are organizations representing large segments of each community, as well as media, advocacy, and educational institutions that are seen as indicative, if not fully representative, of Jews and Muslims in America.

While I believe that there are as many ways to be either Jewish or Muslim as there are people who identify as such, these institutions and organizations often present themselves and/or are perceived of as being synonymous with Jews, Judaism, Muslims, and Islam. Since, as is often remarked, the biggest difference between perception and reality is that perception is more difficult to change, it remains useful to use the language of Jews, Muslims, and communities herein. Yet I do so with the acute awareness that these are terms of convenience and not in any way exhaustive of the ideological and spiritual range claimed by all Jews or Muslims.

The bulk of this essay explores the ethical obligations that devolve upon both individuals and the institutions that claim either to represent them or speak in the voice of their traditions. How do we keep from becoming Nathan Jessup? How do we build our capacity to “handle the truth?” How do we nurture this capacity in others? How do we become more honest about the challenges within our own communities and make it safe for members of other communities to do the same? Only when these questions are fully addressed will we find ourselves on the path of maximizing the relationships between America’s Jewish and Muslim communities.

We must begin by raising the bar on our expectations regarding interreligious encounters. We must not confuse feel-good moments of mutual affirmation, which too often pass for genuine interreligious dialogue, with accomplishing the harder work of recognizing genuine differences between communities. It’s not that the so-called kumbaya experiences do not have their place, but they are not enough. Such experiences bring people together without actually being inter-anything. They are simply opportunities for mutual self-congratulation about that which is already shared. Yes, the identification and celebration of shared values remains a critical component of peace-building work. But time and again we have seen that without a healthy capacity to examine and address the moments of breakdown between groups, things quickly devolve into ugly rhetoric and even into violence.

~~Muslims and Jews in America: Commonalities, Contentions, and Complexities -ed- Reza Aslan & Aaron J. Hahn Tapper

Wednesday, February 24, 2016

Day 191: Ace of Spies



If Reilly did marry bigamously after the Russo-Japanese War, the question arises as to how he managed to conceal his second wife’s existence for so long. The most likely explanation is that she was secreted away in ‘backwater’ locations where he had contacts and connections who would ensure she was well taken care of. Odessa and Port Arthur are two such possibilities. After Russia’s defeat in the war of 1904/05, the Liaotung peninsula became a Japanese possession, eventually becoming part of the Japanese puppet state of Manchukuo. Whatever the reality of Reilly’s connections with the Japanese during the war, it is evident that he had, and continued to have, very close business connections with a number of businesses in Japan and her occupied territories. As someone known to the Japanese authorities, Reilly would have had no trouble in accommodating his new spouse in Port Arthur, which after the war was rebuilt and restored by the Japanese. His representative and principal agent in Japan was William Gill, in Marunouchi, Tokyo. Again, Gill would have been well placed to act as conduit and to ensure that Reilly’s wife was well provided for.

Likewise, Alexandre Weinstein became a trusted lieutenant of Reilly’s before the Russo-Japanese War, and remained such for over a quarter of a century. If Reilly did take a second wife, then Weinstein above all would not only have been aware of her, but would more than likely have played a pivotal role in liaising between ‘husband and wife’. When a decade later Reilly joined the Royal Flying Corps, he named his next of kin as his wife, ‘Mrs A. Reilly’, who in the event of his death could be contacted at 120 Broadway, New York City, a business address being run on his behalf at the time by Alexandre Weinstein. Further evidence concerning a possible second marriage is examined in later chapters.

In contrast to the comings and goings of wives, ex-wives and mistresses, one female relationship that survived the test of time was that with his first cousin Felitsia. Born in the Grodno gubernia of Russian Poland, she later moved to Vienna during the closing years of the 1890s. The city’s large Jewish population lived principally in the old quarter, and it was here that Reilly visited Felitsia whenever he could. She was the only member of his immediate family that he kept in touch with after leaving Odessa in 1893, and her existence was kept a closely guarded secret from all who knew him. It was through these visits to Vienna that he made the brief acquaintance of an influential businessman whose precise role in Reilly’s story has since become a source of some controversy.

Josef Mendrochowitz, an Austrian Jew, was born in 1863 and came to St Petersburg in 1904. In partnership with Count Thaddaeus Lubiensky he founded a firm of brokers, Mendrochowitz and Lubiensky, who successfully secured the right to represent Blohm & Voss shortly thereafter. Under the representation contract, Blohm & Voss undertook to pay Mendrochowitz and Lubiensky a commission of 5% on each successful business deal. In Ace of Spies, Robin Bruce Lockhart argues that ‘Mendrochovitch and Lubensky’ were awarded the rights of representation in relation to Blohm & Voss in 1911, as a result of Reilly’s chicanery with the Russian Admiralty. Blohm & Voss archives and Mendrochowitz and Lubiensky’s own business records demonstrate quite clearly that this was not the case. At the time the contract was awarded, Reilly was not even in Russia. According to the St Petersburg Police Department, Reilly first arrived in the city en route from Brussels on 28 January 1905, where he seems to have stayed for a comparatively short period of time before moving on to Vienna. By the summer of 1905 he was back in St Petersburg, this time with the intention of staying on a more or less permanent basis.

Thanks to a chance meeting with George Walford, a British-born lawyer, whom he accompanied to St Petersburg’s Warsaw Railway Station on 10 September 1905, an account of his activities at this time has found its way into Ochrana records. Walford was under Ochrana surveillance, and Reilly was watched and followed from 11–29 September as a result of his being seen with him. Why Walford was under surveillance is unclear, although it was routine practice for the Ochrana to keep a watchful eye on foreign citizens, a task they took even more seriously in the wake of the Russo-Japanese War. The surveillance on Reilly yielded nothing of value for the Ochrana, although it is most helpful to us in confirming that on arrival in St Petersburg, Reilly made contact with Mendrochowitz and Lubiensky, and actually lived in their apartment building at 2 Kazanskaya. According to the surveillance report, Reilly also visited the offices of the China Eastern Railway and introduced himself as a telephone supplier. Whether or not he succeeded in making a sale is unknown. Bearing in mind that Ochrana ‘tailers’ often gave their targets nicknames in written reports, Reilly was appropriately referred to as ‘The Broker’.

If Reilly had nothing to do with Mendrochowitz and Lubiensky securing the right to represent Blohm & Voss, did he have any connection or dealings at all with the firm? Details of the firm’s dealings are contained in six volumes of files containing over 1,000 pages of correspondence and records now held by the Hamburg State Archive. In addition to the two partners, there appear to have been four other employees, including deputy manager Jachimowitz, who ran the office in the absence of the partners and was particularly well connected with influential Russian politicians. Reilly’s name is not among those employed by the firm, but is mentioned in letters and invoices concerning his work on behalf of Blohm & Voss, as a freelance broker during the winter of 1908 and the spring of 1909. During this time he was working with Mendrochowitz and Lubiensky, assisting them in marketing a new Blohm & Voss boiler system. Company records show that agents or brokers like Reilly were often used to ‘influence’ people in favour of the company.

~~Ace of Spies: The True Story of Sidney Reilly -by- Andrew Cook

Tuesday, February 23, 2016

Day 190: The System Worked: How the World Stopped Another Great Depression



Uncertainty is a function of three factors: the depth of the shock, the linkage between the idea and the adverse outcome, and the depth of the preexisting consensus among policy elites for the privileged set of ideas. The bigger the shock, the bigger the uncertainty. A market crash or sustained recession clearly suggests that the status quo is not working, which creates a powerful incentive to search for new ideas. As the biggest global downturn since the Great Depression, the 2008 financial crisis definitely met this threshold criterion.

The connection between privileged ideas and bad outcomes also matters, however. If the link between an economic idea and an outcome is clear, simple, and direct, then it is easier for elites and publics to make the cognitive connection. For example, there has been considerable debate about the underlying causes of the 2008 financial crisis. The misplaced faith in the efficient-market hypothesis seems to be directly related to the subprime mortgage crisis: deregulation permitted the creation of an asset bubble that, once popped, triggered the crisis. In finance, the causal logic connecting the privileged idea to negative outcomes was tightly coupled. On the other hand, few analysts blamed the Great Recession on low trade barriers. In trade, therefore, the causal logic was more loosely linked.

The strength of the expert consensus also affects uncertainty. One can argue that confidence in policy ideas mirrors the way Bayesian statisticians predict how people update their beliefs. In Bayesian theory, expectations about the future are based on the strength of prior beliefs and the extent to which new data contradict those beliefs. In the world of economic ideas, the strength of that prior distribution is a function of the historical depth and current breadth of the consensus view among policy elites. The longer an ideational consensus has taken root, the more it takes on a “technical” rather than “ideological” cast—thereby making it harder to challenge. Deeply privileged ideas often possess an array of auxiliary arguments that can explain anomalous effects, making it less likely that these ideas will be usurped. So in areas in which experts have been in agreement for quite some time, even severe shocks might not trigger a substantive reevaluation of beliefs. In public policy areas in which the consensus is shallower, however, such shocks might lead to large changes in policy attitudes.
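
(An illustrative aside, not from Drezner's text: the Bayesian analogy can be made concrete with a toy update. The probabilities below are invented purely for illustration; the point is that a long-standing, deep consensus behaves like a strong prior that a single shock barely dents, while a shallow consensus collapses under the same evidence.)

```python
def updated_belief(prior, p_shock_if_idea_sound, p_shock_if_idea_flawed):
    """Bayes' rule: posterior probability that the privileged idea is sound,
    after observing a shock such as a major financial crisis."""
    numerator = p_shock_if_idea_sound * prior
    evidence = numerator + p_shock_if_idea_flawed * (1 - prior)
    return numerator / evidence

# Invented numbers: a crisis is assumed five times more likely if the idea is flawed.
P_SHOCK_SOUND, P_SHOCK_FLAWED = 0.1, 0.5

print(round(updated_belief(0.95, P_SHOCK_SOUND, P_SHOCK_FLAWED), 2))  # deep consensus: 0.95 -> ~0.79
print(round(updated_belief(0.60, P_SHOCK_SOUND, P_SHOCK_FLAWED), 2))  # shallow consensus: 0.60 -> ~0.23
```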

Even during periods of high uncertainty, it is not enough for the privileged set of ideas to be discredited. There must also be a viable alternative. It is quite easy for discontented elites to criticize the privileged set of ideas; it is quite another for them to agree on another idea. For that to happen, there must be a substitute paradigm that provides a compelling explanation for the current negative outcome, offers a policy that reverses the status quo, and coalesces strong interests around the idea to supplant the existing ideational order. This is a daunting intellectual task, particularly if there is no “off the shelf” idea available that can explain current events. It is also a daunting political task: the proposed alternative needs to be simple and clear, and compelling enough to serve as a focal point for a heterogeneous group of individuals opposed to the status quo.

Given this checklist, the failure to dislodge the Washington Consensus begins to make more sense. To be sure, the Great Recession triggered genuine uncertainty, but that uncertainty varied across different areas of global public policy. Post-crisis surveys of leading economists suggest that a powerful consensus persisted on several key international policy dimensions. For example, the University of Chicago’s business school has run surveys of the world’s leading economists since the crisis started. On the one hand, the surveys show a strong consensus on the virtues of freer trade, as well as a rejection of returning to the gold standard to regulate international exchange rates. On the other hand, there is less consensus on monetary policy and the benefits of continued quantitative easing.

To demonstrate the ways in which the relative strength of economic ideas affected the willingness of states to cooperate with global economic governance, the next two sections look at two areas where the outcomes differed. First, we examine why a Beijing Consensus failed to take root to challenge the Washington Consensus. In this case, the necessary conditions to displace the ordering principles of neoliberalism were never in place. The depth of belief in the privileged set of ideas was strong, and the alternative set of ideas was too inchoate to be a plausible substitute. This explains, in part, why China proved to be a supporter rather than a spoiler after 2008.

Then we examine the more contested debate about macroeconomic policy coordination. In this case, the possibility of ideational change found more fertile ground. The depth of the consensus on macroeconomic policy was newer and weaker. The existence of Keynesian ideas enabled a global policy shift. Nevertheless, the shift turned out to be only a transient deviation from the neoliberal paradigm. The tightly coupled relationship between macroeconomic policy and the sovereign debt crisis enabled advocates of austerity to push back against Keynesian ordering principles.
...
As previously noted, many Chinese officials and commentators took great delight in criticizing the United States for some of the neoliberal policies it espoused during and after the Great Recession. Numerous Western commentators began to embrace China’s development model as a genuine challenger to the neoliberal model. There were certainly some policy steps that could be equated with growing Chinese assertiveness in the global political economy. Chinese policymakers embraced the openness of the WTO trading system while simultaneously arguing in the G20 and the IMF that exchange-rate questions were matters of domestic sovereignty and should not be discussed. China created or joined new institutional structures that were outside America’s reach, including the Forum on China-Africa Cooperation, Asian Bond Markets Initiative, and Chiang Mai Initiative. China’s response to the 2008 financial crisis was to double down on its investment-and-export growth model. Massive fiscal and monetary stimulus benefited state-owned sectors far more than it did private firms over the next few years. China’s robust rate of economic growth during the Great Recession seemed to vindicate its development path yet again.

~~The System Worked: How the World Stopped Another Great Depression -by-  Daniel W. Drezner

Monday, February 22, 2016

Day 189: Pakistan: Between Mosque and Military



Within three months of taking power, General Zia coerced Pakistan’s judiciary into approving his extra-constitutional coup d’état and his decision to hold the constitution in abeyance. Basing its judgment on the doctrine of necessity, the court gave Zia broad powers to make new laws and even to amend the constitution. A military regime lacking a constitutional basis had succeeded in creating the legal fiction of constitutionality. Jamaat-e-Islami and others working with Zia ul-Haq could now argue that they were still operating under a constitutional framework.

During his first two years in power Zia ul-Haq publicly maintained the image of his regime as an interim arrangement pending elections. During his first weeks in power, however, Zia promulgated military rules for civil conduct “more thorough and comprehensive than those issued by previous martial law governments.” In September 1977, in the middle of the campaign for the election scheduled by Zia for October, Zulfikar Ali Bhutto was arrested on the charge of conspiring to murder a political opponent. The charges stemmed from an assassination bid three years earlier that had resulted in the death of the father of a PPP dissident member of Parliament. Religious parties and the Muslim League celebrated Bhutto’s arrest and at their political rallies started demanding his execution.

Bhutto’s trial was dragged through the courts for more than eighteen months, but Zia ul-Haq had already decided to portray the man he had overthrown as an evil genius. Islamist media joined Zia in a propaganda campaign similar to that unleashed against Bhutto during the 1970 elections by Major General Sher Ali Khan. Zia ul-Haq’s friend, Abdul Qayyum, has since written that Zia asked him to start preparing a white paper on Bhutto’s “misdeeds” in October 1977, within days of Bhutto’s arrest and well before he had been convicted. Although Abdul Qayyum did not write the white paper, a four-volume white paper was published before Bhutto’s execution in April 1979. The volume on alleged election irregularities alone comprised 405 pages, with 1,044 pages of appendix. During the run-up to Bhutto’s execution, state-run radio and television ran a series titled Zulm ki Dastanein (Tales of Oppression). Islamist newspapers and magazines ran excerpts from the white paper, subsidized by generous advertisements from public sector enterprises.

Zulfikar Ali Bhutto was convicted of murder by the Lahore High Court in a trial of dubious legality. After confirmation of the conviction by the reconstituted Supreme Court, Bhutto was executed in April 1979. The Jamaat-e-Islami was part of Zia ul-Haq’s cabinet during the crucial period of Bhutto’s trial and execution, and the party’s nominee held the crucial portfolio of information minister. Jamaat-e-Islami joined Zia’s cabinet when Zia, claiming that political participation in the government was necessary to pave the way for general elections, included members of the PNA in government one year after the coup d’état. In fact, the inclusion of the PNA in the cabinet was designed to deflect the blame for Bhutto’s execution from the military and to share it with Bhutto’s opponents.

The PNA remained in government for almost a year. During this period, the Jamaat-e-Islami controlled ministries that allowed it to expand its influence through patronage and provide employment to its younger cadres. In addition to information and broadcasting, Jamaat-e-Islami ministers were in charge of the ministries for production, and water, power, and natural resources. Zia ul-Haq also appointed a Jamaat-e-Islami ideologue, Professor Khurshid Ahmad, to head Pakistan’s Planning Commission and draw up plans for Islamizing the economy.

At the end of their year-long association with the government, Jamaat-e-Islami ministers complained that the entrenched bureaucracy wielded greater influence than they did. Zia ul-Haq realized that he had overestimated the Jamaat-e-Islami’s ability to run a modern Islamic state. After that year, in an effort to create his own hybrid Islamic system for Pakistan, Zia decided to cast a wider net to find Islamists of different persuasions. This opened the way for many clerics and Islamic spiritual leaders from all over the world to advise Zia ul-Haq. The general held dozens of conferences and seminars of Islamic scholars and spiritualists (mashaikh). He issued numerous decrees, some as banal as prohibiting urinals in public places (because the Prophet Muhammad advised against urinating while standing) and others with significant consequences, such as liberalizing visas for Muslim ulema and students from all over the world. The liberalization of visas for Muslim activists enabled Islamists from several countries to set up headquarters in Pakistan, circumventing restrictions on Islamist political activities in their own countries.

In 1979, Jamaat-e-Islami’s support for Bhutto’s execution was central to Zia ul-Haq’s plan to suppress any resistance from PPP supporters to Bhutto’s elimination. Zia ul-Haq met the Jamaat-e-Islami chief, Mian Tufail Muhammad, for ninety minutes the night before Bhutto was hanged. Jamaat-e-Islami members took to the streets to celebrate Bhutto’s death, which countered international criticism and domestic disapproval of the ruthless execution of the ruling general’s main political rival.

The Jamaat-e-Islami’s founder and spiritual leader, Maulana Abul Ala Maududi, set the tone for his party’s relationship with Zia ul-Haq’s military regime by endorsing Zia’s initiatives for Islamization. Maulana Maududi described these steps as “the renewal of the covenant” between the government of Pakistan and Islam and also endorsed Zia’s demonization of Bhutto and the PPP by arguing if the PPP were allowed to run in a general election again, the country would face a debacle similar to the one witnessed when East Pakistan separated from West Pakistan. When Maulana Maududi died in September 1979, Zia ul-Haq expressed his admiration for him by attending his funeral.

Although Zia ul-Haq and the Jamaat-e-Islami clearly had a soft spot for each other and enjoyed a close relationship, their ambitions did not always converge. Zia recognized that the Jamaat-e-Islami’s base of support was relatively narrow, notwithstanding its impressive organization and its ability to mobilize its cadres. Moreover, the Jamaat-e-Islami was not the only religious political force in the country, and Zia ul-Haq wanted the support of other Islamic groups as well. Once the president declared his intention to Islamize Pakistan, he was confronted with several visions of what an Islamic state should look like. Zia ul-Haq also had to juggle the conflicts of interest between his parent institution, the military, and the various religious parties.

~~Pakistan: Between Mosque and Military -by- Husain Haqqani

Sunday, February 21, 2016

Day 188: The Savage Wars of Peace



Between January 1899 and May 1902 the U.S. Army ruled Cuba, first under stolid Major General John R. Brooke, then under the more dashing Major General Leonard Wood, erstwhile commander of the Rough Riders. Unlike the Filipinos, the Cubans did not fight U.S. rule, in large measure because they were confident that it would be temporary; with the passage of the Teller Amendment in April 1898, Congress had eschewed any desire to annex the “pearl of the Antilles.” The Cuban insurrectos took a bonus of $75 a man and duly disbanded. The U.S. Army picked up the pieces after the Spanish left: Soldiers went around distributing food to a hungry populace, staging a sanitary campaign, erecting thousands of public schools (modeled on those of Ohio), rooting out corrupt officials, building roads and bridges, dredging Havana harbor, and generally attempting “to recast Cuban society, such as it was, in the mould of North America.”

The occupation’s most spectacular achievement occurred when Walter Reed, a U.S. Army doctor, confirmed the intuition of a Cuban physician that yellow fever was not transmitted by general filth or other factors but by one particular variety of mosquito, the silvery Stegomyia fasciata. A mosquito-eradication campaign undertaken by the Army Medical Corps produced immediate results: In 1902, for the first time in centuries, there was not a single case of yellow fever in Havana; just two years before there had been 1,400 known cases. Incidence of malaria, another scourge of the city, of all tropical cities, plummeted nearly as much. Such dramatic improvements in public health were to become a commonplace feature of American colonial administration.

Knowing that the U.S. military occupation would be brief, and determined to safeguard American interests in Cuba after the troops went home, Congress passed the Platt Amendment in 1901. Under its terms Cuba would be obligated to obtain Uncle Sam’s approval before signing any foreign treaty; maintain low foreign debt; ratify all acts of the U.S. military government; and give the American armed forces the right to intervene at any time to protect life, liberty, and property. In addition, Cuba would have to provide the U.S. long-term leases on naval bases; it was this provision that would lead to the creation of a naval station at Guantánamo Bay in 1903. In short, the Platt Amendment represented a considerable abridgment of Cuban sovereignty.

Havana went along because it had no choice. It was only when the Cubans pledged to honor the Platt Amendment that the U.S. Army left the island, though the U.S. Navy remained a looming presence offshore. Cuba now had its own government headed by Tomás Estrada Palma, a former schoolmaster who had spent years living in New York State, but it was in effect an American protectorate.

Panama
The Platt Amendment, following the outright annexation of Puerto Rico, signaled that the U.S. was intent on turning the Caribbean into an “American lake.” This desire grew stronger once an isthmian canal was under way. The story of how the republic of Panama was created is well known and need not be recounted in much detail here. As we have seen, U.S. troops had been frequent visitors to Panama, landing there thirteen times between 1856 and 1902 to guarantee freedom of transit for Americans. Although most of Panama’s population had long chafed under the distant rule of Bogotá, in the past U.S. forces had always preserved Colombia’s sovereignty over the isthmus. That would change in 1903.

At the time, Colombia’s congress and president were balking at the proposed U.S. terms for a canal treaty, demanding more money. The U.S., with its burgeoning power in the Pacific, considered a canal a strategic necessity, and President Theodore Roosevelt was furious at the “homicidal corruptionists” in Bogotá for reneging on their commitments to allow the project to proceed. Although they did not instigate a Panamanian revolution, the president and his secretary of state, John Hay, knew about it beforehand and tacitly encouraged the plotters. The revolutionaries counted on American military intervention and were not disappointed. On November 2, the gunboat Nashville, which had just arrived at Colón on Panama’s Caribbean coast, received secret orders from the Navy Department to “prevent landing of any armed force with hostile intent, either government or insurgent.” The remarkable thing about this telegram is that, when Commander John Hubbard of the Nashville received it, the revolution had not yet broken out. It would start the next day.

By the time Hubbard received his orders, 500 Colombian soldiers had already landed at Colón, but they still had to traverse the isthmus to reach Panama City, capital of the province. The American railroad superintendent dissembled and told them there were not enough rail cars available to take all of them, but he allowed the Colombian general and his staff to go by themselves. They arrived just in time to be captured by the rebels who took over Panama City on November 3. The success of the revolution was sealed when the USS Dixie appeared off Colón on the evening of November 5 and disembarked 400 marines under Major John A. Lejeune. The Colombian army detachment left at Colón decided not to tangle with the marines; instead their colonel accepted an $8,000 bribe from the Panamanian plotters to sail back to Cartagena with his men. Within the next week, eight more U.S. warships arrived at Panama, effectively foreclosing any possibility that Colombia would take the isthmus back by force.

On November 6, 1903, the U.S. government formally recognized the Republic of Panama. One of the new government’s first acts was to sign a treaty giving the U.S. permission to build a canal under extremely generous terms that turned the Canal Zone—a 10-mile-wide strip on either side of the waterway—into U.S. territory. It was as brazen—and successful—an example of gunboat diplomacy as the world has ever seen. When Teddy Roosevelt was subsequently accused of having committed (as the New York Times termed it) an “act of sordid conquest,” he asked Attorney General Philander Knox to construct a defense. “Oh, Mr. President,” Knox is said to have replied, “do not let so great an achievement suffer from any taint of legality.”

War Plan Black
The isthmian canal, completed in 1914, gave the U.S. an invaluable strategic advantage—the ability to move its fleet quickly between the West and East Coasts, the Pacific and the Atlantic—but also a major headache: protecting the precious waterway. It was the same challenge Britain faced with the Suez Canal. London’s response was to assert control over Egypt, the Sudan, and virtually the entire Mediterranean. The U.S. took a similarly sweeping approach with Central America and the Caribbean.

Naval planners looked at the map and realized that there were only a handful of main channels into the Caribbean, the most important being the Windward Passage between Cuba and Hispaniola. Control of Puerto Rico and Cuba gave the U.S. a chokehold over this “strategic center of interest” (to use Alfred Thayer Mahan’s words), though the navy remained interested in acquiring bases in Hispaniola as an insurance policy. By 1906 the U.S. Navy was big enough to ensure that no other power would contest control of its own backyard: It deployed 20 battleships, roughly the same number as Germany, and second only to Britain’s 49. The extent of American naval might was trumpeted by the cruise of the Great White Fleet around the world in 1907–1909.

Military planners are paid to be paranoid, and despite America’s growing power, they constantly saw threats looming, principally from Berlin. The leaders of the War and Navy departments lived in constant fear that Germany would establish bases in the West Indies or South America and then use them to attack U.S. shipping, the Panama Canal or—worst-case scenario—the American mainland itself. This was the basis of War Plan Black, completed in 1914 by the General Board of the U.S. Navy.

These worries were not entirely farfetched. The German navy, under the command of Admiral Alfred von Tirpitz, was constantly, if unsuccessfully, scheming to acquire a base in the West Indies. The German admiralty staff also drew up war plans between 1897 and 1905 for seizing either Puerto Rico or Cuba as a staging area for an attack on the East Coast of the United States. Operations Plan III, as the German Caribbean strategy was called, was mothballed after 1906, when rising tensions in Europe forced the German navy to focus its attention closer to home. But after World War I broke out, German U-boats did attack some shipping close to the American mainland (though in the Atlantic, not the Caribbean) and the German foreign office did concoct a wild plot for an alliance with Mexico, which would supposedly receive in compensation the return of the southwestern United States. (The Zimmermann Telegram, laying out this plan, was intercepted by British intelligence and helped draw the U.S. into the war.)

The modern-day reader is certainly entitled to doubt in retrospect how much of a threat Germany ever posed to U.S. control of the Caribbean, but the danger loomed large enough at the time and helps to explain American willingness to intervene in the region. These fears were crystallized in the Venezuela crisis of 1902–03.

~~The Savage Wars of Peace -by- Max Boot

Saturday, February 20, 2016

Day 187: Prophets of War



The military buildup for World War II resulted in an aircraft industry that was on a whole different scale from the on-again-off-again business that companies like Lockheed had struggled with in the early to mid-1930s. Demand for supplies for the war predated U.S. entry into the conflict, most notably in the case of Lockheed’s provision of Hudson bombers to Britain. But it was the war itself that transformed the industry. Output increased by an astounding 13,500 percent during the war: The U.S. aviation industry produced more than 300,000 aircraft for the military services. It was hard to imagine how a peacetime economy could sustain anything approaching those production levels, and initially it didn’t.

As Lockheed President Robert Gross put it in his reflection on the immediate postwar situation, “As long as I live, I will never forget those short, appalling weeks.” Whatever his personal feelings may have been about the conflict, from his perspective as a businessman it was not the war itself but the drop-off in business that followed that appalled him. As he put it in a 1946 letter, “After the end of the Japanese war we had what looked like a very healthy production program,” but difficulties with key programs, like the Constellation airliner, required that operations be “cut to the very bone.” By the following March, Gross was pining for the good old days of World War II: “We had one underlying element of comfort and reassurance during the war—we knew we would get paid for whatever we built. Today we are almost entirely on our own, the business is extremely speculative, and with a narrowed market, the competition is very keen.”

As the end of the war neared, Gross had hopes that his firm would have a leg up by virtue of the fact that it had built a substantial number of military transports that could be readily adapted to serve as long-range airliners, but he feared that the timing was wrong: “If the war had ended six months ago, our development position would have been so favorable compared to anybody else’s except Douglas that we would undoubtedly have been guaranteed a leading position in the market.... Now, however, the war has dragged on and every month that it lasts gives other companies an opportunity to get development work going.”

The shifts in the business took a personal toll on Gross, who wrote to his associate Henry F. Atkinson that “life is really hectic these days, what with the airplane business nearly flat on its face and me having my 50th birthday and feeling old age as a real flat tire. Seriously, we have lived a lifetime these past few years, but I am fundamentally a man of hope and faith and I believe in the end things will come out.”

Part of Gross’s “hope and faith” stemmed from his sense that he and his colleagues could successfully lobby for a policy of peacetime government subsidies for the aerospace industry, even if it did not compare to the levels of government business achieved during World War II. In August 1945, just a few months after the end of the war, Gross testified on the topic of “aircraft reconversion and America’s airpower policy” at hearings held by the Aviation Subcommittee of the Senate Committee Investigating National Defense Programs. The theme of Gross’s testimony was that just as the aircraft industry had answered the nation’s call during wartime—providing America with the “greatest air force in the world, and a production capacity of 50,000 planes per year”—the U.S. government had an obligation to sustain the industry in peacetime. And while Gross acknowledged that “peacetime aviation will not be able to immediately [emphasis added] support this war-expanded industry,” he had a number of suggestions on how to start that process. First, he wanted government to give the production equipment it had paid for during the war to industry on a free or low-cost basis. Gross argued that it would otherwise be sold for scrap with little benefit to the government. He also wanted to avoid having the government dump military transport planes onto the commercial market, a move that would deprive Lockheed and its cohorts of potential business. And he wanted the development of a peacetime aviation policy that would provide subsidies in areas such as support for civilian transport planes that could be converted to military use in time of war.

Gross was far from shy in making his case. Without “steady encouragement and financial backing” from the taxpayers, the technical marvel that was the modern aircraft industry would atrophy, he suggested. “One road leads to retrogression and mediocrity,” Gross said. “The other leads to progress and continued world leadership in the science of flight. The choice is one which the public must make, and the hour of decision is here.”

Gross’s ultimate argument, however, had little to do with science and technology for their own sake, or even the economic benefits of a thriving aircraft industry. It had to do with national security: “I find it very difficult to talk about the airplane as a weapon of war ... the prospect of an airplane maker pleading the case for air security is somewhat tragic. It is a cause I would not be selfish enough to plead as a businessman, but it is my duty to plead for it as a citizen.”

To Gross’s mind, the case was clear: “Having made these new discoveries, we have to decide whether we will advance them as a means of security for our country or abandon them only to have other countries use them against us.” Gross’s case appeared to offer no middle ground, no policy that would provide modest support to the industry without being viewed as “abandoning” it and its technological capabilities. His reflections sounded suspiciously like a recipe for a new arms race.

In spite of his fears—and his special pleading on behalf of his industry—once the initial shock wore off, Gross managed to regain his emotional footing and come out on the other side more bullish on his company’s prospects than ever. In an extraordinary address to the Southern California Council of State Chambers of Commerce, Gross plugged both the military and the civilian sides of the business. First, he suggested that the technological gains in military uses of aircraft had to be sustained through ample ongoing investments during peacetime. Then he forecast “extraordinary advances in transport of passengers and mail all over the world,” to the point where flying would become a regular part of everyday life, not a luxury for a relatively few well-heeled customers. He even predicted that private flying would increase to such a degree that it might eventually be possible to have “an air buggy [helicopter] for everyone.”

Gross’s rivals were not so upbeat. Jack Northrop suggested that there would not be adequate orders to hold together the talent and facilities that had been built up in the industry during the war. And Donald Douglas seemed more angry than hopeful. He sent a letter to Congress arguing that “after telling industry to drop everything and concentrate on war production ... Government should not, now that the war is over, say to industry ... you’re on your own.”

In the early months after the war, Wall Street seemed to agree with Gross’s rosy outlook, and the company enjoyed a surge of investment in 1946. But by 1947 Lockheed’s share price had dropped by two-thirds. Hopes had been boosted in early 1946 when the company delivered the first postwar copy of its Constellation airliner—a four-engine transport that had been in the works before U.S. entry into World War II—but it was not enough to stop the slide in its share price. Matters got worse when the Constellation suffered numerous mechanical failures, including a crash over Bozeman, Montana, that forced the Civil Aeronautics Board (CAB) to ground the plane in the summer of 1946. Despite this setback, the various versions of the “Connie”—the nickname for the Constellation—proved to be good business for the company. As early as 1941, the year America entered World War II, Lockheed had already taken orders for 80 Constellations, 40 each from TWA and Pan American. In the 1950s, the next-generation Super Constellation sold over 160 copies at roughly $1.7 million per plane.

In the end, the company held on, not by finding commercial business but by selling fighter planes and patrol aircraft to the Air Force and the Navy. The postwar increase in commercial airliner sales had been expected to amount to only $400 million in business over two to three years, versus projections of $1.2 billion per year in military aircraft sales. On an annual basis, military sales to the government would average about ten times the amount of sales of commercial planes to the airlines. As the Wall Street Journal put it in August 1945, “Continuing military contracts are expected to keep the plane makers eating regularly, but airline business may well prove to be the butter on the bread.”
~~Prophets of War -by- William D. Hartung

Friday, February 19, 2016

Day 186: Railroads in the Heartland



During the formative years of the railroad industry, the desire for the iron horse and a grasp of the realities of finance and politics prompted states in the Old Northwest to launch ambitious systems of public works (canals and railroads in particular), funded and built by "the people." But the crippling economic impact of the Panic of 1837, which devastated the region, eventually prompted the sale of state-sponsored railroads to risk-taking capitalists. The time seemed auspicious for private enterprise to open this transportation frontier.

It would be a combination of individual investors, syndicates, and the continued financial backing from units of government (federal, state, and local) that made possible the completion of the rail network in the Midwest. And it was an impressive accomplishment. At midcentury, Ohio counted 575 miles of railroad; Michigan, 342; Indiana, 228; Illinois, 111; and Wisconsin, 20. Iowa, Minnesota, and Missouri, however, had none. On the eve of the Civil War, the Ohio mileage, largest in the nation, had soared to 2,946; Illinois, 2,790; Indiana, 2,163; Wisconsin, 905; and Michigan, 779. Even though Minnesota continued to await the iron horse, Iowa's mileage stood at 655, and Missouri's was 817. Expansion continued before and after the Panic of 1873 triggered several years of severe depression and again after the hard times of the 1890s interrupted track laying.   

Sections of the Midwest saw some construction following the return of prosperity with the Spanish-American War and up until World War I. Unlike railroad building on the Great Plains and in parts of the West, which was considerable during the 1920s, by 1917 the railroad map of the Midwest had jelled. Most of the new lines between 1898 and 1917 were feeders, branches primarily created to haul agricultural, lumber, or mineral traffic, although some cut-offs, designed to speed the flow of goods and people, were installed. In 1911 the Interstate Commerce Commission, which Congress created twenty-four years earlier to regulate the rail enterprise in the public interest, made its annual enumeration of the mileage in the Midwest. Illinois ranked in the top position regionally with 11,980 miles (only Texas, with 14,777 miles, could claim a greater intrastate network); Iowa, 9,855; Ohio, 9,128; Michigan, 8,943; Minnesota, 8,931; Missouri, 8,108; Indiana, 7,447; and Wisconsin, 7,399. Such impressive figures were hardly surprising considering these states' economic strengths and population levels.   

The vast natural and human wealth and potential of the Midwest caused it to attract "smart money." Investments poured into farms, businesses, and factories and continued for decades, interrupted only by the occasional depression and eventually by the "rust bowl" years of the 1970s and 1980s. This growth prompted railroad promoters in the antebellum years to recognize that the iron horse must link more than "inland" communities to a waterway: lake, river, or canal. It did not take long for a broken and scattered pattern of rail lines to give way to what economic historian Alfred D. Chandler has correctly labeled "system building." The thrust of railroading after the Civil War was to fuse together smaller, independent roads into larger, unified ones. A combination of mergers, leases, and stock controls made possible a sophisticated railroad structure. These consolidated roads, flying their own corporate banners, undertook their own programs of line construction.
   
By the dawn of the twentieth century the Midwest could not claim to be the sole place served by affiliated rail systems, often allied with the nation's premier banking and investment houses. Yet a regional review suggests that the East and especially the South still had substantial numbers of small and medium-sized carriers, although even there the loss of corporate independence was accelerating. On the other hand, great railways already dominated the West. The Southern Pacific, for one, had gained "octopus" status in the minds of Californians, owing to its size, economic prowess, and the literary skills of novelist Frank Norris. Again, the Midwest took on more of a balanced, mixed position. Here a combination of systems, notably the Chicago, Burlington & Quincy; Chicago, Milwaukee & St. Paul; Chicago & North Western; Chicago, Rock Island & Pacific; and Illinois Central; lesser carriers like the Chicago Great Western; Detroit, Toledo & Ironton; Minneapolis & St. Louis; Monon; Pere Marquette; and Wisconsin Central; and a number of autonomous short lines, including the Kalamazoo, Lake Shore & Chicago; La Crosse & Southeastern; and Muscatine North & South, predominated.   

Regional differences did not go unnoticed by contemporaries. In a 1903 interview for a Twin Cities newspaper, A.B. Stickney, the driving force behind the Chicago Great Western, which recently had completed a strategic extension into the Omaha Gateway from Fort Dodge, Iowa, told the reporter that "we in the richest farm and factory belt of our country have created a mixture of large and not so large railroads. . . . The Maple Leaf Route [Chicago Great Western] faces the task of continuing to provide the best service to its many loyal patrons in Minnesota, Iowa, Illinois and Missouri where there are so many David and Goliath contests."   

By the turn of the twentieth century the railroad map of the Midwest was unequaled. Anyone who examined it would quickly sense that this was the vital center of America's massive and far-flung network of steel rails. Most striking were those principal east-west arteries: the "high-iron" speedways from eastern metropolises to Chicago and St. Louis, and the equally significant transcontinental roads or their connectors to western destinations. Moreover, there were other important lines and the many miles of branches and twigs that sprang from the sturdy stems.

~~Railroads in the Heartland: Steam and Traction in the Golden Age of Postcards -by- H. Roger Grant

Thursday, February 18, 2016

Day 185: Stuffed and Starved



The concerns of food production companies have ramifications far beyond what appears on supermarket shelves. Their concerns are the rot at the core of the modern food system. To show the systemic ability of a few to impact the health of the many demands a global investigation, travelling from the ‘green deserts’ of Brazil to the architecture of the modern city, and moving through history from the time of the first domesticated plants to the Battle of Seattle. It’s an enquiry that uncovers the real reasons for famine in Asia and Africa, why there is a worldwide epidemic of farmer suicides, why we don’t know what’s in our food any more, why black people in the United States are more likely to be overweight than white, why there are cowboys in South Central Los Angeles, and how the world’s largest social movement is discovering ways, large and small, for us to think about, and live differently with, food.

The alternative to eating the way we do today promises to solve hunger and diet-related disease, by offering a way of eating and growing food that is environmentally sustainable and socially just. Understanding the ills of the way food is grown and eaten also offers the key to greater freedom, and a way of reclaiming the joy of eating. The task is as urgent as the prize is great.

In every country, the contradictions of obesity, hunger, poverty and wealth are becoming more acute. India has, for example, destroyed millions of tons of grains, permitting food to rot in silos, while the quality of food eaten by India’s poorest is getting worse for the first time since Independence in 1947. In 1992, in the same towns and villages where malnutrition had begun to grip the poorest families, the Indian government admitted foreign soft drinks manufacturers and food multinationals to its previously protected economy. Within a decade, India has become home to the world’s largest concentration of diabetics: people – often children – whose bodies have fractured under the pressure of eating too much of the wrong kinds of food.

India isn’t the only home to these contrasts. They’re global, and they’re present even in the world’s richest country. In the United States in 2005, 35.1 million people didn’t know where their next meal was coming from.1 At the same time there is more diet-related disease like diabetes, and more food, in the US than ever before.

It’s easy to become inured to this contradiction; its daily version causes only mild discomfort, walking past the ‘homeless and hungry’ signs on the way to supermarkets bursting with food. There are moral emollients to balm a troubled conscience: the poor are hungry because they’re lazy, or perhaps the wealthy are fat because they eat too richly. This vein of folk wisdom has a long pedigree. Every culture has had, in some form or other, an understanding of our bodies as public ledgers on which is written the catalogue of our private vices. The language of condemnation doesn’t, however, help us understand why hunger, abundance and obesity are more compatible on our planet than they’ve ever been.

Moral condemnation only works if the condemned could have done things differently, if they had choices. Yet the prevalence of hunger and obesity affects populations with far too much regularity, in too many different places, for it to be the result of some personal failing. Part of the reason our judgement is so out of kilter is because the way we read bodies hasn’t kept up with the times. Although it may once have been true, the assumption that to be overweight is to be rich no longer holds. Obesity can no longer be explained exclusively as a curse of individual affluence. There are systemic features that make a difference. Here’s an example: many teenagers in Mexico, a developing country with an average income of US$6,000, are bloated as never before, even as the ranks of the Mexican poor swell. Individual wealth doesn’t explain why the children of some families are more obese than others: the crucial factor turns out not to be income, but proximity to the US border. The closer a Mexican family lives to its northern neighbours and to their sugar- and fat-rich processed food habits, the more overweight the family’s children are likely to be. That geography matters so much rather overturns the idea that personal choice is the key to preventing obesity or, by the same token, preventing hunger. And it helps to renew the lament of Porfirio Diaz, one of Mexico’s late-nineteenth-century presidents and autocrats: ‘¡Pobre Mexico! Tan lejos de Dios; y tan cerca de los Estados Unidos’ (Poor Mexico: so far from God, so close to the United States).

A perversity of the way our food comes to us is that it’s now possible for people who can’t afford enough to eat to be obese. Children growing up malnourished in the favelas of São Paulo, for instance, are at greater risk from obesity when they become adults. Their bodies, broken by childhood poverty, metabolize and store food poorly. As a result, they’re at greater risk of storing as fat the (poor-quality) food that they can access. Across the planet, the poor can’t afford to eat well. Again, this is true even in the world’s richest country; and in the US, it’s children who will pay the price. One research team recently suggested that if consumption patterns stay the way they are, today’s US children will live five fewer years, because of the diet-related diseases to which they will be exposed in their lifetimes.

As consumers, we’re encouraged to think that an economic system based on individual choice will save us from the collective ills of hunger and obesity. Yet it is precisely ‘freedom of choice’ that has incubated these ills. Those of us able to head to the supermarket can boggle at the possibility of choosing from fifty brands of sugared cereals, from half a dozen kinds of milk that all taste like chalk, from shelves of bread so sopped in chemicals that they will never go off, from aisles of products in which the principal ingredient is sugar. British children are, for instance, able to select from twenty-eight branded breakfast cereals the marketing of which is aimed directly at them. The sugar content of twenty-seven of these exceeds the government’s recommendations. Nine of these children’s cereals are 40 per cent sugar. It’s hardly surprising, then, that 8.5 per cent of six-year-olds and more than one in ten fifteen-year-olds in the UK are obese. And the levels are increasing. The breakfast cereal story is a sign of a wider systemic feature: there’s every incentive for food producing corporations to sell food that has undergone processing which renders it more profitable, if less nutritious. Incidentally, this explains why there are so many more varieties of breakfast cereals on sale than varieties of apples.

~~Stuffed and Starved -by- Raj Patel

Wednesday, February 17, 2016

Day 184: Book Excerpt: The Horse, The Wheel, And Language


The Indo-European problem was formulated in one famous sentence by Sir William Jones, a British judge in India, in 1786. Jones was already widely known before he made his discovery. Fifteen years earlier, in 1771, his Grammar of the Persian Language was the first English guide to the language of the Persian kings, and it earned him, at the age of twenty-five, the reputation as one of the most respected linguists in Europe. His translations of medieval Persian poems inspired Byron, Shelley, and the European Romantic movement. He rose from a respected barrister in Wales to a correspondent, tutor, and friend of some of the leading men of the kingdom. At age thirty-seven he was appointed one of the three justices of the first Supreme Court of Bengal. His arrival in Calcutta, a mythically alien place for an Englishman of his age, was the opening move in the imposition of royal government over a vital yet irresponsible merchant’s colony. Jones was to regulate both the excesses of the English merchants and the rights and duties of the Indians. But although the English merchants at least recognized his legal authority, the Indians obeyed an already functioning and ancient system of Hindu law, which was regularly cited in court by Hindu legal scholars, or pandits (the source of our term pundit). English judges could not determine if the laws the pandits cited really existed. Sanskrit was the ancient language of the Hindu legal texts, like Latin was for English law. If the two legal systems were to be integrated, one of the new Supreme Court justices had to learn Sanskrit. That was Jones.

He went to the ancient Hindu university at Nadiya, bought a vacation cottage, found a respected and willing pandit (Rāmalocana) on the faculty, and immersed himself in Hindu texts. Among these were the Vedas, the ancient religious compositions that lay at the root of Hindu religion. The Rig Veda, the oldest of the Vedic texts, had been composed long before the Buddha’s lifetime and was more than two thousand years old, but no one knew its age exactly. As Jones pored over Sanskrit texts his mind made comparisons not just with Persian and English but also with Latin and Greek, the mainstays of an eighteenth-century university education; with Gothic, the oldest literary form of German, which he had also learned; and with Welsh, a Celtic tongue and his boyhood language which he had not forgotten. In 1786, three years after his arrival in Calcutta, Jones came to a startling conclusion, announced in his third annual discourse to the Asiatic Society of Bengal, which he had founded when he first arrived. The key sentence is now quoted in every introductory textbook of historical linguistics (punctuation mine):

The Sanskrit language, whatever be its antiquity, is of a wonderful structure: more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either; yet bearing to both of them a stronger affinity, both in the roots of verbs and in the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists.

Jones had concluded that the Sanskrit language originated from the same source as Greek and Latin, the classical languages of European civilization. He added that Persian, Celtic, and German probably belonged to the same family. European scholars were astounded. The occupants of India, long regarded as the epitome of Asian exotics, turned out to be long-lost cousins. If Greek, Latin, and Sanskrit were relatives, descended from the same ancient parent language, what was that language? Where had it been spoken? And by whom? By what historical circumstances did it generate daughter tongues that became the dominant languages spoken from Scotland to India?

These questions resonated particularly deeply in Germany, where popular interest in the history of the German language and the roots of German traditions was growing into the Romantic movement. The Romantics wanted to discard the cold, artificial logic of the Enlightenment to return to the roots of a simple and authentic life based in direct experience and community. Thomas Mann once said of a Romantic philosopher (Schlegel) that his thought was contaminated too much by reason, and that he was therefore a poor Romantic. It was ironic that William Jones helped to inspire this movement, because his own philosophy was quite different: “The race of man… cannot long be happy without virtue, nor actively virtuous without freedom, nor securely free without rational knowledge.”3 But Jones had energized the study of ancient languages, and ancient language played a central role in Romantic theories of authentic experience. In the 1780s J. G. Herder proposed a theory later developed by von Humboldt and elaborated in the twentieth century by Wittgenstein, that language creates the categories and distinctions through which humans give meaning to the world. Each particular language, therefore, generates and is enmeshed in a closed social community, or “folk,” that is at its core meaningless to an outsider. Language was seen by Herder and von Humboldt as a vessel that molded community and national identities. The brothers Grimm went out to collect “authentic” German folk tales while at the same time studying the German language, pursuing the Romantic conviction that language and folk culture were deeply related. In this setting the mysterious mother tongue, Proto-Indo-European, was regarded not just as a language but as a crucible in which Western civilization had its earliest beginnings.
~~The Horse, The Wheel, And Language -by- David W. Anthony

Tuesday, February 16, 2016

Day 183: Book Excerpt: The Swerve: How The World Became Modern



APART FROM THE charred papyrus fragments recovered in Herculaneum, there are no surviving contemporary manuscripts from the ancient Greek and Roman world. Everything that has reached us is a copy, most often very far removed in time, place, and culture from the original. And these copies represent only a small portion of the works even of the most celebrated writers of antiquity. Of Aeschylus’ eighty or ninety plays and the roughly one hundred twenty by Sophocles, only seven each have survived; Euripides and Aristophanes did slightly better: eighteen of ninety-two plays by the former have come down to us; eleven of forty-three by the latter.

These are the great success stories. Virtually the entire output of many other writers, famous in antiquity, has disappeared without a trace. Scientists, historians, mathematicians, philosophers, and statesmen have left behind some of their achievements—the invention of trigonometry, for example, or the calculation of position by reference to latitude and longitude, or the rational analysis of political power—but their books are gone. The indefatigable scholar Didymus of Alexandria earned the nickname Bronze-Ass (literally, “Brazen-Bowelled”) for having what it took to write more than 3,500 books; apart from a few fragments, all have vanished. At the end of the fifth century ce an ambitious literary editor known as Stobaeus compiled an anthology of prose and poetry by the ancient world’s best authors: out of 1,430 quotations, 1,115 are from works that are now lost.

In this general vanishing, all the works of the brilliant founders of atomism, Leucippus and Democritus, and most of the works of their intellectual heir Epicurus, disappeared. Epicurus had been extraordinarily prolific. He and his principal philosophical opponent, the Stoic Chrysippus, wrote between them, it was said, more than a thousand books. Even if this figure is exaggerated or if it counts as books what we would regard as essays and letters, the written record was clearly massive. That record no longer exists. Apart from three letters quoted by an ancient historian of philosophy, Diogenes Laertius, along with a list of forty maxims, almost nothing by Epicurus has survived. Modern scholarship, since the nineteenth century, has only been able to add some fragments. Some of these were culled from the blackened papyrus rolls found at Herculaneum; others were painstakingly recovered from the broken pieces of an ancient wall. On that wall, discovered in the town of Oenoanda, in the rugged mountains in southwest Turkey, an old man, in the early years of the second century ce, had had his distinctly Epicurean philosophy of life—“a fine anthem to celebrate the fullness of pleasure”—chiseled in stone. But where did all the books go?

The actual material disappearance of the books was largely the effect of climate and pests. Though papyrus and parchment were impressively long-lived (far more so than either our cheap paper or computerized data), books inevitably deteriorated over the centuries, even if they managed to escape the ravages of fire and flood. The ink was a mixture of soot (from burnt lamp wicks), water, and tree gum: that made it cheap and agreeably easy to read, but also water-soluble. (A scribe who made a mistake could erase it with a sponge.) A spilled glass of wine or a heavy downpour, and the text disappeared. And that was only the most common threat. Rolling and unrolling the scrolls or poring over the codices, touching them, dropping them, coughing on them, allowing them to be scorched by fire from the candles, or simply reading them over and over eventually destroyed them.

Carefully sequestering books from excessive use was of little help, for they then became the objects not of intellectual hunger but of a more literal appetite. Tiny animals, Aristotle noted, may be detected in such things as clothes, woolen blankets, and cream cheese. “Others are found,” he observed, “in books, some of them similar to those found in clothes, others like tailless scorpions, very small indeed.” Almost two thousand years later, in Micrographia (1665), the scientist Robert Hooke reported with fascination what he saw when he examined one of these creatures under that remarkable new invention, the microscope:

a small white silver-shining Worm or Moth, which I found much conversant among books and papers, and is supposed to be that which corrodes and eats holes through the leaves and covers. Its head appears big and blunt, and its body tapers from it towards the tail, smaller and smaller, being shaped almost like a carrot. . . . It has two long horns before, which are straight, and tapering towards the top, curiously ringed or knobbed. . . . The hinder part is terminated with three tails, in every particular resembling the two longer horns that grow out of the head. The legs are scaled and haired. This animal probably feeds upon the paper and covers of books, and perforates in them several small round holes.

The bookworm—“one of the teeth of time,” as Hooke put it—is no longer familiar to ordinary readers, but the ancients knew it very well. In exile, the Roman poet Ovid likened the “constant gnawing of sorrow” at his heart to the gnawing of the bookworm—“as the book when laid away is nibbled by the worm’s teeth.” His contemporary Horace feared that his book would eventually become “food for vandal moths.” And for the Greek poet Evenus, the bookworm was the symbolic enemy of human culture: “Page-eater, the Muses’ bitterest foe, lurking destroyer, ever feeding on thy thefts from learning, why, black bookworm, dost thou lie concealed among the sacred utterances, producing the image of envy?” Some protective measures, such as sprinkling cedar oil on the pages, were discovered to be effective in warding off damage, but it was widely recognized that the best way to preserve books from being eaten into oblivion was simply to use them and, when they finally wore out, to make more copies.

Though the book trade in the ancient world was entirely about copying, little information has survived about how the enterprise was organized. There were scribes in Athens, as in other cities of the Greek and Hellenistic world, but it is not clear whether they received training in special schools or were apprenticed to master scribes or simply set up on their own. Some were evidently paid for the beauty of their calligraphy; others were paid by the total number of lines written (there are line numbers recorded at the end of some surviving manuscripts). In neither case is the payment likely to have gone directly to the scribe: many, perhaps most, Greek scribes must have been slaves working for a publisher who owned or rented them. (An inventory of the property of a wealthy Roman citizen with an estate in Egypt lists, among his fifty-nine slaves, five notaries, two amanuenses, one scribe, and a book repairer, along with a cook and a barber.) But we do not know whether these scribes generally sat in large groups, writing from dictation, or worked individually from a master copy. And if the author of the work was alive, we do not know if he was involved in checking or correcting the finished copy.

~~The Swerve: How The World Became Modern -by- Stephen Greenblatt