Saturday, April 30, 2016

Day 258: Blood Work



On December 14, 1799, America’s first president awoke with a sore throat, which was soon accompanied by a fever. At six that morning, George Washington’s doctors agreed it was time for a bloodletting. Eighteen ounces of blood later, the patient’s condition had not improved, and he was bled twice more. Not long after, Washington was unable to breathe—medical historians believe that he suffered from an infection of the epiglottis—and a tracheotomy was performed. A fourth round of bloodletting followed, to no avail. Washington gasped for breath like a drowning man and died late that evening, around ten o’clock.

Though we will never know whether Washington died of his illness or of the severe bloodletting he suffered during his “treatment,” many historians would bet on the latter. His body was laid out in the family’s formal parlor so that prominent visitors could pay their respects. Yet as the nation prepared to mourn its first president, others wondered if there was a way to bring him back to life.

When Washington’s granddaughter, Mrs. Thomas Law, arrived the next morning, she brought with her a man who suggested the unthinkable. Dr. William Thornton, best known as architect of the U.S. Capitol, speculated that the president could be revived if both blood and air were returned to his corpse. Dr. Thornton suggested that Washington be warmed up “by degrees and by friction” so his blood might be coaxed to move once again through his body. Then Thornton proposed to “open a passage to the lungs by the trachea, and to inflate them with air, to produce artificial respiration, and to transfuse blood into him from a lamb.”

Thornton’s idea of transfusing the dead president was swiftly rejected by Washington’s family. They did not quibble with the doctor as to whether resurrection by transfusion could be possible. Instead they declined on the grounds that it was better to leave the memory of George Washington’s legacy intact as “one who had departed full of honor and renown; free from the frailties of age, in full enjoyment of every faculty, and prepared for eternity.” Death was preferable to any extraordinary attempt to resurrect the president using animal blood.

Thornton was not the first to propose blood transfusion as a miraculous cure, nor was he the first to consider animals as donors. More than 130 years earlier, between 1665 and 1668, all of Europe was abuzz with excitement over the possibility of blood transfusion. French and English scientists were locked in an intense battle to master blood’s secrets and to perform the first successful transfusion in humans. Members of the British Royal Society began by injecting any number of fluids into the veins of animals: wine, beer, opium, milk, and mercury. Then they turned their sights on transfusions between dogs—large ones to small ones, old ones to young ones, one breed to another. The French Academy of Science followed suit with its own canine transfusion experiments but to its dismay was unable to replicate English successes.

Then, seemingly out of nowhere, a young physician named Jean-Baptiste Denis surprised the scientific world when he performed the first animal-to-human blood transfusion to great acclaim—and even greater controversy. On a cold day in December 1667, Denis transfused lamb’s blood into the veins of a fifteen-year-old boy. The result was stunning: The boy survived. But fate would not be kind to Denis for long. Flushed with success, Denis tried his next, and last, round of transfusions—this time on a mentally ill, thirty-four-year-old man named Antoine Mauroy. The doctor cut open the vein of a calf and rigged a rudimentary system of goose quills tied together with string. He then transfused just over ten ounces of calf’s blood into Mauroy’s arm. By the next morning signs looked promising that the experiment was going to work—or, at the very least, not be fatal. Several days and several transfusions later, however, Mauroy was dead. And Denis was soon accused of murder.

In a dramatic turn of events, a Paris judge cleared Denis of all accusations on April 17, 1668. Still, the madman’s death signaled an end not only to Denis’ career as a transfusionist but also to transfusion entirely. In its judgment the French court mandated that no future human transfusion could be performed without prior authorization from the Paris Faculty of Medicine. And this was very unlikely to happen, given that the medical school had made no secret of its hostility toward the procedure. Two years later, in 1670, the French parliament banned transfusions altogether; transfusion experiments were also stopped in England, Italy, and throughout Europe, not to be taken up again for 150 years.
...
This book views the story of the Denis trial through two different lenses. First, it is a microhistory that traces the little-known and captivating tale of the rise and fall of the transfusionist Denis, and blood transfusion more generally, over a period of about five years during the seventeenth century. But, perhaps more important, it is also a macrohistory that traces the confluence of ideas, discoveries, and cultural, political, and religious forces that made blood transfusion even thinkable in this era before anesthesia, antisepsis, and knowledge of blood groups. This story is, then, as much about the scientific revolution—its greatest minds and most calculating monarchs—as it is about blood transfusion itself.

~~Blood Work: A Tale of Medicine and Murder -by- Holly Tucker

Friday, April 29, 2016

Day 257: The Garden of the Eight Paradises



After Babur retreated from northern Afghanistan back to Kabul sometime in 1514, almost nothing is known of his life until five years later when his narrative resumes in January 1519 as he and his men begin an attack on the fortress of Bajaur, over a hundred miles east-north-east of Kabul. By his own evidence Babur started 1519 still trying to pacify his petty Afghan kingdom, a “state” whose territories were then primarily concentrated along the section of the road that ran northeast from Ghazni through Kabul and then east to the Khyber pass, extending perhaps to the banks of the Indus. His direct control of this modest region may have extended in 1519 fifty to sixty miles north of Kabul to the Hindu Kush passes, while he exercised a loose suzerainty over the Qunduz region in northeastern Afghanistan in the person of his cousin, Ways Mirza Miranshahi. The richer plains of Balkh along the Amu Darya immediately west of Qunduz were in 1519 probably divided between the Uzbeks, the Safavids and to some degree Babur himself. East of Kabul his authority was restricted in some areas to no more than a few miles to the north and south of the road.

Yet, little more than a month after subduing Bajaur, Babur crossed the Indus on a raid that led to the first, albeit temporary, capture of Bherah, situated on the bank of the Jhelum river in the northwestern Panjab. The occupation of Bherah, an outlying district of the wealthy Lahore province, apparently led Babur to think seriously for the first time of taking Delhi, for according to his own testimony, after taking the town he sent a message to the newly enthroned ruler of Delhi, Sultan Ibrahim Lodi, claiming the territories Timur had conquered in 1398. By 1523 Babur had established his authority in Lahore, the capital of the Panjab, little more than a hundred miles southeast of Bherah, and in November 1525 he led an army out of Kabul that in April 1526 defeated Ibrahim and founded Timurid rule in northern India. His conquest was an act of military imperialism legitimized by Timur’s brief invasion. It represented a dynastic conquest in the mold of the Ottomans and Uzbeks. Unlike the Safavid founder, Shah Ismail, Babur was not an ideologue and did not enter India on a religious crusade. Having failed to restore his own rule and Timurid fortunes in Samarqand, he invaded India for simple mulkgirliq reasons, to ensure the power and prosperity of his paternal and maternal relations, the Miranshahi Timurids and the Chaghatay Chingizids. Those practical political and material goals remained typical of the dynasty throughout most of its history, whatever the legitimizing affectations of later monarchs.

From Babur’s description it is apparent that the attack on Bajaur in January 1519 was no simple chapqun or raid, although it may have been partly that, but a pacification campaign by which he directly extended his power in the region. This can be inferred from his remark that after the successful assault and massacre of more than three thousand Dilahzak Afghan inhabitants of the fort, he sent heads of some of the captured and executed defenders as proof of his victory not only to Kabul but also north to Badakhshan, Qunduz and Balkh, perhaps to Ways Mirza in Badakhshan and Muhammad Zaman Mirza in Balkh. His purpose in capturing the fort and slaughtering so many Bajauris also seems to have been to deliver a pointed lesson to other Afghans, for one of Babur’s new-found allies, the Yusufzai Afghan chief Shah Mansur, who was present during these events, was sent back east to his Yusufzai kin with “chastising orders.” The Bajaur massacre seems to have had an immediate effect, for one of the competing rulers of Swat, Sultan ‘Ala al-Din, offered his homage to Babur a short time later, evidently hoping to exploit the new regional power to counter a rival Afghan chief. Then, following a typical post-battle interval of drinking, hunting and eating the local narcotic sweet kamali, Babur continued his campaign by marching due east into the Yusufzai homeland in Swat to attack tribesmen who had not yet submitted to him. While in the vicinity he consolidated his territorial gains by marrying Shah Mansur Yusufzai’s daughter. Babur’s alliance may have been strengthened with another Yusufzai marriage to one of his men, for he mentions that shortly after his marriage Shah Mansur’s younger brother “brought his niece to this yurt,” this “campsite.”


Babur justified what he terms the “general killing” or massacre of the Dilahzak Afghans of Bajaur by claiming that they were not only “rebels” but had adopted “the customs of unbelievers.” By this he does not seem to mean that they imitated their Kafiri neighbors, who never had been and as late as 1890 still were not Muslims. Rather he reports that some thirty or forty years earlier some of the Dilahzak Afghans and the Yusufzais had become heretics by joining a darvish or wandering ascetic or sufi by the name of Shahbaz Qalandar. Having said this without elaborating on the nature of this man’s “heresy,” he reports that after the brief siege of Bajaur, while visiting a nearby hill for the view of the countryside, he happened on the tomb of the qalandar and had it destroyed, since the tomb of a heretic sufi was not a fitting monument for such a lovely spot, where he then sat and enjoyed some ma'jun.

Heresy offended his aesthetic as well as his religious sensibilities, and like many religious men Babur was more offended by heretics than by unbelievers. Yet he seems to have been more than just a little bit hypocritical when he invoked religion to justify the slaughter of so many Bajauri men—their captive women and children were soon released with the surviving male prisoners. From the time he arrived in Kabul Babur treated Afghans far more ruthlessly than he did his Turkic and Mongol enemies in Ferghanah. He regularly slaughtered Afghans who either attacked or resisted him, memorializing his hostility with the minarets of skulls that dotted the countryside. When he describes the Bajauris as “ignorant, wretched people,” for refusing his first demand to surrender their fort, he expresses himself with the same visceral contempt used to characterize other Afghans, some of whom he later ridicules for their lack of knowledge of etiquette, as “rustic and stupid.”

~~The Garden of the Eight Paradises: Babur and the Culture of Empire in Central Asia, Afghanistan and India (1483-1530) -by- Stephen F. Dale

Thursday, April 28, 2016

Day 256: Alone in The Universe



Our Sun is often described as being an average star. This is only true in a very narrow sense. Stars that, like the Sun, are maintaining their output of energy by converting hydrogen into helium deep in their interiors are said to lie on the ‘main sequence’ of a kind of graph, called the Hertzsprung-Russell diagram, which astronomers use to relate the temperature of a star to its brightness. Because the Sun is on the Main Sequence of this diagram, it is regarded as an ordinary star. But ordinary does not mean average. Some 95 per cent of all stars are less massive than the Sun, and because the brightness of a star is related to its mass, this means that they are dimmer than the Sun. In that respect, the Sun is far from being average, and stars that are bigger and brighter than the Sun are even more rare than stars with the same mass as the Sun, even though massive stars are quite normal.

The Sun may not be ‘average’ in another way. There is some evidence that the brightness of the Sun varies by less than the variation in brightness of other stars with similar masses and chemical compositions. This is very hard to quantify, and we cannot be sure whether this has always been the case or is just a phase the Sun is going through today (or for the past few million years). But it does at least hint that the Sun may be an unusually stable star, with obvious benefits for the evolution of life on Earth.

On a larger scale, being either brighter or dimmer than the Sun has dramatic implications for the CHZ around a star. Most of the stars in the Galaxy – 95 per cent – are smaller and fainter than the Sun. Three quarters of all the stars in our neighbourhood are so-called red dwarfs, a category also known as M-type stars, which have only about a tenth as much mass as our Sun. Red dwarfs live for much longer than stars like the Sun (which is a yellow-orange G-type star; the initials are a historical accident and have no significance except as labels). This would be a good thing in terms of allowing time for intelligence to evolve. Unfortunately, though, the conditions on any planet orbiting a red dwarf are likely to be unsuitable for the emergence of a technological civilization.

The first problem is that the life zone around a red dwarf is very narrow, and very close to the parent star. In order to have liquid water on its surface, a planet would have to orbit within 5 million km of the star, at a distance only one thirtieth of the distance of the Earth from the Sun. Even at its closest, Mercury, the innermost planet in our Solar System, never gets within 46 million km of the Sun. It isn’t clear that planets could even form, or occupy stable orbits, within 5 million km of a star, but even if they could there would be complications. Just as tidal forces have locked the Moon into a rotation which keeps one face always turned towards the Earth, so planets in the life zone around a red dwarf would be locked into a rotation with one side always facing the star. So one side would be in eternal darkness, and the other in eternal light. Except, possibly, for a narrow twilight zone, the conditions would be either uncomfortably hot or uncomfortably cold. The most likely consequence of this is that convection would carry gases from the hot side of the planet to the cold side, where they would cool and freeze. Any atmosphere the planet originally possessed before the tidal locking was completed would freeze out on the dark side.

Another problem – as if that weren’t enough – is that red dwarf stars are much more active than the Sun. They produce frequent flares of activity which release large amounts of ultraviolet radiation, X-rays and particles. This would be particularly damaging because of the proximity of the planet to the star. Apart from the direct consequences for life, these outbursts would strip away any atmosphere that started to form around the planet. Overall, it seems we can rule out red dwarf systems as likely homes for other civilizations. We have already found that the Galactic Habitable Zone only includes 10 per cent of the stars in the Milky Way, and now we are ruling out 75 per cent of that 10 per cent. That leaves us with only 2.5 per cent of all the stars to consider, and we have barely started identifying all the reasons why we are here on Earth.

Bigger, brighter stars than the Sun form only a small part of that 2.5 per cent, and in terms of habitable zones alone are no better than red dwarf stars as possible places to find planets harbouring technological civilizations. A brighter star has a larger habitable zone, but it doesn’t live as long as the Sun, and the habitable zone moves out more rapidly than the Sun’s habitable zone as the star ages. A star with 30 times as much mass as our Sun would have to burn its nuclear fuel so fast that the rate at which it pours out energy is 10,000 times that of the Sun, and it will live for only a few tens of millions of years on the stable Main Sequence. Such stars also emit large amounts of ultraviolet radiation, damaging both to life and to the atmospheres of prospective Earth-like planets. The brightest stars on the Main Sequence, known as O and B stars, together make up less than one tenth of 1 per cent of all stars, though, so taking them out of the equation hardly makes much difference.

Slightly smaller, cooler A-type stars could provide any planets in their life zones with a stable environment for about a billion years, which is certainly long enough for life to get started, judging by the example of the rapid establishment of life on Earth, but may not be enough for a civilization like ours to develop. Even a star with just 1.5 times the mass of our Sun would leave the Main Sequence after only a couple of billion years. But there are some stars, the F-types, which are a little more massive than our Sun, have Main Sequence lifetimes of about 4 billion years, and which don’t seem to produce excessive amounts of ultraviolet radiation.

Putting everything together, reasonably large, reasonably long-lasting life zones may exist around stars which are in the Galactic Habitable Zone and are like the Sun (G-type), or stars a little more massive (the F-types) or a little less massive (known as K-types). A generous assessment would make that no more than 2 per cent of the stars in the Galaxy. In that sense, we can already see that the Sun is special. But even within that 2 per cent, the Sun is not an average star, because most stars have companions – they live in binary or even triple star systems.

It is actually very difficult to make stars. The large clouds of gas and dust in the thin disc of the Milky Way (known as giant molecular clouds, because they are big and contain molecules) rotate, which tends to stop them collapsing, and are threaded by magnetic fields which also help to hold them up against the inward tug of gravity. If a star with the same mass as the Sun formed from a cloud spread out to the density of a slowly rotating interstellar cloud, by the time this had shrunk to the size of the Sun it would be spinning so fast that its surface would be moving at 80 per cent of the speed of light. This is because a property known as angular momentum is conserved when a cloud shrinks – or, indeed, when it expands. In order to have the same angular momentum, provided it has the same mass a small object has to spin faster than a large object. This is exactly why a spinning ice skater can spin faster or slower by pulling their arms in or out. In order to shrink, a collapsing cloud of gas has to get rid of angular momentum. If two or more stars form from the same collapsing cloud, a lot of the angular momentum goes into the orbital motion of the stars around each other, rather than into the spin of the stars themselves.
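
A rough back-of-the-envelope sketch (an editorial illustration, not part of Gribbin's text; the cloud radius and rotation period below are assumed round numbers) shows where that startling spin-up comes from: for a uniform sphere the angular momentum is L = (2/5)MR^2ω, so if mass and angular momentum are both conserved the spin rate scales as 1/R^2 and the surface speed as 1/R.

    import math

    # Assumed, illustrative inputs (round numbers, not values taken from the book)
    cloud_radius_m = 1.0e15    # starting radius, roughly a tenth of a light year
    cloud_period_s = 3.15e13   # one rotation per ~million years
    sun_radius_m = 6.96e8      # radius of the Sun
    c = 3.0e8                  # speed of light, m/s

    # For a uniform sphere, L = (2/5) * M * R**2 * omega.
    # With M and L both fixed, omega scales as (R_cloud / R_sun)**2.
    omega_cloud = 2 * math.pi / cloud_period_s
    omega_sun = omega_cloud * (cloud_radius_m / sun_radius_m) ** 2

    surface_speed = omega_sun * sun_radius_m
    print(f"surface speed ~ {surface_speed / c:.2f} of the speed of light")

With these assumed inputs the surface speed comes out near the speed of light, the same order of magnitude as the 80 per cent figure quoted above.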

An average giant molecular cloud is about 65 light years across and contains about a third of a million solar masses of material. When a cloud passes through the density jump of a spiral arm, it gets squeezed, and if a supernova explodes nearby shock waves from the blast will go rolling through it. Under these conditions, turbulence stirring up the cloud can produce regions of greater density where gravity can take over and cause some of those local regions to collapse to form stars. Stellar ‘nurseries’ where this process is going on have been photographed in the infrared part of the spectrum, where radiation penetrates the dust in the clouds, from unmanned space observatories such as Herschel, confirming astronomers’ understanding of what goes on based on their knowledge of the laws of physics.

Turbulence seems to produce ‘pre-stellar cores’ on which stars grow as gravity tugs more matter towards them. A typical core would be about a fifth of a light year across, and contain about 70 per cent as much mass as the Sun. Only the very centre of such a core collapses and heats up to the point where it begins to generate energy by nuclear fusion, becoming initially a tiny proto-star with less than a hundredth (perhaps as little as a thousandth) of the mass of our Sun; the nuclear reactions begin when it has grown to about a fifth of the mass of the Sun. The final size of the star that grows onto this core doesn’t depend on the size of the core – all such cores start out with roughly the same mass. What matters is the amount of matter close enough to be captured by the gravity of the young star, before the radiation from the star and any companions forming nearby disperses the clump in the giant molecular cloud from which they have formed. For a star like the Sun, 99 per cent of its mass is gathered in this way by accretion. But this is a very inefficient process. Although roughly half of the mass in a clump gets turned into stars, only a few per cent of the material in the whole cloud is converted into stars as it makes the passage through a spiral arm.

Because of the angular momentum problem, it is hard to see how a star could form in isolation, and observations of our stellar neighbourhood show that at least 70 per cent of Sun-like stars have at least one companion, although systems with more than three stars bound together by gravity are extremely rare. Computer simulations of the way stars in multiple systems interact with one another and with nearby systems explain how this proportion has arisen, and why there are at least some stars which, like our Sun, do not have a stellar companion.

When three stars are orbiting around one another, they follow a complicated dance in which it is quite easy for one of the stars to gain a lot of energy and be ejected from the system, carrying angular momentum off with it, while the other two move closer together in a tighter embrace. Binary pairs are more stable, unless they pass close by another star (or a binary or a triple), in which case gravitational interactions can break up the pair and leave at least one isolated star, although its companion can sometimes be captured by the other system. Computer simulations suggest that if out of every 100 new star systems 40 are triple and 60 are binaries (making a total of 240 stars) then, allowing for how close these systems are in the star-forming regions of the Milky Way, by the time the star systems have moved apart into the Galaxy at large and things have settled down there will be 25 triples, 65 binaries and just 35 single stars. The same 240 stars are now shared out in such a way that just under 20 per cent are unaccompanied, roughly matching our observations of the stars in our neighbourhood.

Binary and triple star systems are bad news for life – certainly for the prospects of a technological civilization arising on any planet in such a system. Stable orbits can exist, either if the two stars in a binary are very close together (within about a fifth of the distance from the Earth to the Sun) and the planets orbit around both stars, or if the two stars are far apart (at least 50 times the distance from the Earth to the Sun) and the planets orbit one of the stars. But although the orbits may be stable, they will not be as beautifully circular as the Earth’s orbit around the Sun, and the planets will be affected by the heat and light from two stars, making it difficult to establish a long-lasting habitable zone. Judging by the evidence of the geological record of the evolution of life on Earth, even a change in the amount of heat reaching a planet from its star or stars of 10 per cent could cause severe problems. A rough rule of thumb is that a 1 per cent change in the output of the Sun causes a 1 °C change in the average temperature at the surface of the Earth, and there is serious concern today that a global warming of 4–5 °C could cause the collapse of civilization.

~~Alone in The Universe: Why Our Planet is Unique -by- John Gribbin

Wednesday, April 27, 2016

Day 255: Dying to Win



Dhanu, the single name of a young woman from Jaffna, is the most famous Tamil Tiger suicide bomber. On May 21, 1991, she hid a girdle of grenades beneath her gown, presented a garland to Rajiv Gandhi, India’s top political figure, and exploded, instantly killing them both. Dhanu has become a heroine to the women of Sri Lanka’s Hindu Tamil minority. The Tigers targeted Gandhi because they feared that, if the Congress under Gandhi were to win the upcoming election, the new government would order the recently withdrawn Indian Peacekeeping Force to return to Sri Lanka to suppress the Tigers’ insurgency. For the Tigers, the assassination was a strategic victory. For Dhanu, a remarkably beautiful woman in her late twenties, motivation probably came directly from revenge: reportedly her home in Jaffna was looted by Indian soldiers, she was gang-raped, and her four brothers were killed.

Dhanu was the first attacker to use a “suicide belt,” and this novelty determined the operational plan of attack. It is not known how the Tigers hit upon the idea. A suicide belt is an undergarment with specially made pockets to hold explosives and triggering devices so that they closely conform to the contours of the human bomb’s body. However, there is a close match between Dhanu’s suicide belt and one described in a dramatic scene in a Frederick Forsyth best-seller published in 1989, The Negotiator. In the novel, kidnappers use a belt bomb to kill the son of the U.S. president. The fictitious belt bomb is virtually identical to the belt worn by Dhanu, which investigators pieced together after the attack. Both belts are three inches wide, made from leather and denim, with a Velcro closure, and with explosives inserted to lie across the backbone. The main difference is the detonation mechanism. The belt in the novel is set off by a remote-control device hidden in the buckle, while the woman assassin had no such device and triggered the bomb herself with a manual switch.

The plan was simple. According to accomplices and messages captured after the attack, the LTTE sent a squad of four assassins to Madras, the largest city in the southern Tamil Nadu region of India, about three weeks before Rajiv Gandhi was scheduled to speak at a major political rally. Dhanu was the designated assassin. It was her job to wear the belt bomb, carry a garland for Gandhi, “accidentally” drop it at his feet, bend over to pick it up, and explode the bomb at the precise moment when Gandhi (and she) would receive its full force. Two members of the squad were to ensure that Dhanu would reach her target. The last served as a cameraman, taking live footage of the attack so that LTTE leaders, cadre, and future recruits could view the mission as it actually happened.

The assassination went off according to plan. However, the cameraman was too close. He died in the blast, and the tape fell into the hands of the Indian police, providing an unusually vivid account that helped elucidate the assassination plot.

On page 228 are two of the ten surviving still frames of the actual attack. The first shows Dhanu at the far left smiling, garland in hand, waiting for the approaching Gandhi. The second shows the last moments of Gandhi’s life. [pics not posted]

Dhanu belonged to the female suicide bomber unit of the Liberation Tigers of Tamil Eelam that goes by the name Black Tigresses. Since the early 1980s, the Tamil population has fought a civil war for independence from the Sinhalese Buddhist majority of Sri Lanka. The Tamil leader Velupillai Prabhakaran formed the LTTE with support and arms from India, and began a terror campaign against the Sinhalese government in which more than 60,000 people have died. Although precise numbers are hard to come by, the LTTE is estimated to number well over 10,000 guerrillas and has had as many as 14,000 during the 1990s. Of these, as many as 4,000 are women.

LTTE guerrillas all manifest a high degree of personal commitment to the cause of independence for their Tamil homeland. The most evident sign is the small cyanide capsule that hangs around the neck of each guerrilla, and that puts him or her only seconds from death. Literally hundreds have died at their own hands, biting through their capsules and consuming the deadly contents rather than accepting capture by the Sinhalese authorities.

The Black Tigresses (and Black Tigers) are different. These units of the LTTE are trained especially for suicide terrorist operations. For them, it is not a matter of committing suicide rather than accepting the humiliation and possible torture that comes with capture. Rather, suicide is an inextricable part of their mission. They are trained to kill others while killing themselves in order to maximize the chances of a successful mission—typically, the assassination of a prominent political leader or the infliction of the most possible casualties on Sinhalese civilians or unsuspecting soldiers.

Members of the LTTE’s suicide squads perform only one mission. Their selection and training are dedicated to ensuring that this single mission achieves results—not simply their own death, but the deaths of others.

Dhanu, alias Anbu alias Kalaivani, was from Jaffna, the principal town in the Tamil region of Sri Lanka. She appears to have been a member of the LTTE since the mid-1980s and to have gone through the typical process of becoming a Black Tigress in the late 1980s, possibly after her personal trauma at the hands of Indian troops.

Joining the LTTE’s suicide squads involves a number of steps. First, the suicide attackers are carefully selected. Although every LTTE guerrilla is given the option to join these groups, many more are rejected than accepted. At any given point, of the 10,000 or so cadres, there are probably 150 to 200 who are Black Tigresses and Tigers. The main selection criterion is a high level of motivation to complete the mission, a criterion that puts a premium on mental stability over tactical military competence.

Second, the suicide attackers are trained in special camps. They are segregated from the regulars and trained only for suicide missions. The training involves daily physical exercises, arms training, and political classes that all emphasize results. According to reliable reports, the Black Tigresses and Tigers have a simple motto: “You die only once.”
...
Although detailed information on her mental state is not available, Dhanu’s behavior during the weeks before the assassination does not display signs of depression or personal trauma. Indeed, what we know about her activities suggests a person enjoying the good things in life. For Dhanu, her trip to Madras was the first time she had traveled beyond the Tamil areas of Sri Lanka. Even though much of the three weeks prior to the attack was devoted to preparations and rehearsals for the mission, she took advantage of her new surroundings. With money and encouragement from the LTTE, she went to the market, the beach, and restaurants every day, enjoying many luxuries rarely found in the jungles of Jaffna. She bought dresses, jewelry, cosmetics, and even her first pair of glasses. In the last twenty days of her life, she took in six movies at a local cinema.

Dhanu clearly had nerves of steel. She clearly understood the consequences of her actions and worked hard to ensure that her mission would surely succeed. Some of the female suicide bombers in Sri Lanka are believed to be victims of rape at the hands of Sinhalese or Indian soldiers, a stigma that destroys their prospects for marriage and rules out procreation as a means of contributing to the community. “Acting as a human bomb,” a Tamil woman told Ana Cutter, the former editor of Columbia University’s Journal of International Affairs, “is an understood and accepted offering for a woman who will never be a mother.”16

~~Dying to Win: The Strategic Logic of Suicide Terrorism -by- Robert A. Pape

Tuesday, April 26, 2016

Day 254: The Flamingo's Smile



Buffalo Bill played his designated role in reducing the American bison from an estimated population of 60 million to near extinction. In 1867, under a contract to provide food for railroad crews, he and his men killed 4,280 animals in just eight months. His slaughter may have been indiscriminate, but the resulting beef was not wasted. Other despoilers of our natural heritage killed bison with even greater abandon, removed the tongue only (considered a great delicacy in some quarters), and left the rest of the carcass to rot.

Tongues have figured before in the sad annals of human rapacity. The first examples date from those infamous episodes of gastronomical gluttony—the orgies of Roman emperors. Mr. Stanley, Gilbert’s “modern major general,” could “quote in elegiacs all the crimes of Heliogabalus” (before demonstrating his mathematical skills, in order to cadge a rhyme, by mastering “peculiarities parabolous” in the study of conic sections). Among his other crimes, the licentious teen-aged emperor presided at banquets featuring plates heaped with flamingo tongues. Suetonius tells us that the emperor Vitellius served a gigantic concoction called the Shield of Minerva and made of parrot-fish livers, peacock and pheasant brains, lamprey guts and flamingo tongues, all “fetched in large ships of war, as far as from the Carpathian sea and the Spanish straits.”

Lampreys and parrot fishes (though not without beauty) have rarely evoked great sympathy. But flamingos, those elegant birds of brilliant red (as their name proclaims), have inspired passionate support from the poets of ancient Rome to the efforts of modern conservationists. In one of his most poignant couplets, Martial castigated the gluttony of his emperors (circa 80 A.D.) by speculating about different scenarios, had the flamingo’s tongue been gifted with song like the nightingale’s, rather than simple good taste:

Dat mihi penna rubens nomen; sed lingua gulosis
Nostra sapit: quid, si garrula lingua foret?

(My red wing gives me my name, but epicures regard my tongue as tasty. But what if my tongue could sing?)

Most birds have skinny pointed tongues, scarcely fit for an emperor, even in large quantities. The flamingo, much to its later and unanticipated sorrow, evolved a large, soft, fleshy tongue. Why?

Flamingos have developed a surpassingly rare mode of feeding, unique among birds and evolved by very few other vertebrates. Their bills are lined with numerous, complex rows of horny lamellae—filters that work like the whalebone plates of giant baleen whales. Flamingos are commonly misportrayed as denizens of lush tropical islands—something amusing to watch while you sip your rum and coke on the casino veranda. In fact, they dwell in one of the world’s harshest habitats—shallow hypersaline lakes. Few creatures can tolerate the unusual environments of these saline deserts. Those that thrive can, in the absence of competitors, build their populations to enormous numbers. Hypersaline lakes therefore provide predators with ideal conditions for evolving a strategy of filter feeding—few types of potential prey, available in large numbers and at essentially uniform size. Phoenicopterus ruber, the greater flamingo (and most familiar species of our zoos and conservation areas in the Bahamas and Bonaire), filters prey in the predominant range of an inch or so—small mollusks, crustacea, and insect larvae, for example. But Phoeniconaias minor, the lesser flamingo, has filters so dense and efficient that they segregate cells of blue-green algae and diatoms with diameters of 0.02 to 0.1 mm.

Flamingos pass water through their bill filters in two ways (as documented by Penelope M. Jenkin in her classic article of 1957): either by swinging their heads back and forth, permitting the water to flow passively through, or by the usual and more efficient system that inspired the Roman gluttons—an active pump maintained by a large and powerful tongue. The tongue fills a large channel in the lower beak. It moves rapidly back and forth, up to four times a second, drawing water through the filters on the backwards pull and expelling it on the forward drive. The tongue’s surface also sports numerous denticles that scrape the collected food from the filters (just as whales collect krill from their baleen plates).

The extensive literature on feeding in flamingos has highlighted the unique filters—and often neglected another, intimately related, feature equally remarkable and long appreciated by the great naturalists. Flamingos feed with their heads upside down. They stand in shallow water and swing their heads down to the level of their feet, subtly adjusting the head’s position by lengthening or shortening the s-curve of the neck. This motion naturally turns the head upside down, and the bills therefore reverse their conventional roles in feeding. The anatomical upper bill of the flamingo lies beneath and serves, functionally, as a lower jaw. The anatomical lower bill stands uppermost, in the position assumed by upper bills in nearly all other birds.

With this curious reversal, we finally reach the theme of this essay: Has this unusual behavior led to any changes of form and, if so, what and how? Darwin’s theory, as a statement about adaptation to immediate environments (not general progress or global direction), predicts that form should follow function to establish good fit for peculiar life styles. In short, we might suspect that the flamingo’s upper bill, working functionally as a lower jaw, would evolve to approximate, or even mimic, the usual form of a bird’s lower jaw (and vice versa for the anatomical lower, and functionally upper, beak). Has such a change occurred?

~~The Flamingo's Smile -by- Stephen Jay Gould

Monday, April 25, 2016

Day 253: Post-Jazz Poetics



Ron Mann’s 1982 film Poetry in Motion features 24 poets performing and discussing their work, sometimes accompanied by musicians whose compositions respond to the spoken word. One such musical-poetic performance takes center stage in the five-minute segment on Jayne Cortez’s work. Here Cortez describes the improvisatory ethic that guides both her writing and her work with the Firespitters, the band with whom she performs her pieces:

    I’m playing with the visual and the verbal connections. On paper. And then when I’m reading, then it’s, you know, the verbal and the, the music coming together. It’s sound, it’s about sound. The sound of the poetry against the sound of the music. The way I work is, improvised or invented off of the word, it’s like the call and response pattern, which is an old African pattern. I am making statements, or I’m asking questions, and the music is responding to me, and I’m responding back to them, and we’re listening to each other. Making not only comments on what you’re doing, but extending that, taking it out and exploring the possibilities of the poetry and the music together. (Poetry in Motion)

Cortez provides this analysis in voiceover as the picture cuts back and forth between a scene in which she sits on a couch with Mann and one in which she and the band react to one another midperformance.

This collage precedes the group’s rendition of Cortez’s poem “I See Chano Pozo.” Here she wears a red-and-blue-striped dashiki with dark pants and a large silver necklace in the shape of a stylized African face. Her decisions to wear traditional African clothing and to perform this particular poem with the band demonstrate that naming takes a central place in her poetics. Like Sonia Sanchez, she depicts her performance’s cultural roots as African in order to invoke a specific set of historical associations. The poem affirms Pozo, a Cuban conga player who is credited alongside trumpeter Dizzy Gillespie with introducing Afro-Cuban elements to jazz. The poem’s repeated lines assert that Pozo’s music motivates “Atamo,” “Mpebi,” “Donno,” “Obonu,” “Atumpan,” “Mpintintoa,” “Ilya Ilu,” “Ntenga,” “Siky Akkua,” “Batá,” and “Fontomfrom”: “various Africa drums” (Cortez, Coagulations 111) whose timbres resonate with music’s power to articulate human emotion. By repeating her affirmations of Chano Pozo and varying through vocal tone and inflection the printed versions of the poem, Cortez oversignifies the poetic form, defying the rhetorical boundaries of both print and previous performance. Her naming of Pozo’s accomplishments functions as a political act that positions his work in opposition to Western methods of public communication. Her commitment to social protest through linguistic alteration roots her textual improvisations in a cultural ethic that resembles Sanchez’s later strategies.

Jayne Cortez’s work explodes, through hyperbolic poetic images and innovative performance strategies, the social and rhetorical structures that enable oppressive social conditions. Her poetry’s themes, many of which are articulated via scatological imagery, align her work with contemporary artistic movements such as Black Arts and the early-century surrealists. Franklin Rosemont describes surrealism as an “unparalleled freedom” of image and a commitment to revolution “in every aspect of life” (“Introduction” 65, 67), both elements that Cortez continues to explore. She has published work in the surrealist journal Arsenal and in Franklin Rosemont’s 1980 collection Surrealism and Its Popular Accomplices (Woodson 73), though specifically African-American sources help to shape her brand of surrealism. Tony Bolden argues, for instance, that surrealism’s “radical politics . . . are compatible with the ideas of Black Arts theorists” (Afro-Blue 121). Aldon Nielsen calls Cortez’s artistic approach “a black American surrealism” drawn from “the compacted imagery of the blues” (Black Chant 225). D. H. Melhem uses Cortez’s own term “supersurrealism” to denote the associations she creates in her poetry between graphic surrealist imagery and political offenses (“Supersurrealist” 206). She employs surrealism in order to critique modern social conditions, while her poetry’s performative elements signify a rebellion against conventional methods of public expression.

Cortez crafts poetry based in conceptually difficult surrealist imagery, yet she also advocates the communal, egalitarian atmosphere of spoken-word performance. This juxtaposition, which might at first seem contradictory, illustrates her work’s political resolve. Her audience cannot participate in her poem’s affirmation of Chano Pozo as an innovative cultural icon without understanding the rationale behind the images—like “a very fine tube of frictional groans” (line 4)—that she uses to characterize him. The poetry occupies a narrow space on the edge of competing differences, often refusing reconciliation and synthesis even as it moves with a sense of deliberate uneasiness among oral and written traditions. The best theoretical framework for understanding these differences derives from the cultural traditions of the African-American music to which Cortez owes much of her source material. Some of her vocal performance strategies, for instance, originate in gospel music. The esoteric imagery of her surrealism also shares thematic material with the blues and artistic impulses with heavily theorized jazz traditions like the bebop and free jazz movements; several of her musician friends worked in these movements. The historically black musics of blues and jazz thus provide her with a flexible yet culturally specific language in which to voice her critiques of contemporary political situations.

~~Post-Jazz Poetics: A Social History -by- Jennifer D. Ryan

Sunday, April 24, 2016

Day 252: Nur Jahan



Nur Jahan's heroic role in the rebellion of Mahabat Khan was short-lived. Having come out of seclusion for the climactic episode of her political life and having maneuvered her husband out from the hands of his abductor, she had proven herself capable of an exquisitely executed victory. She had not become victorious, however, by fighting in battle but by the means she had always used best: strategy measured out from behind the palace walls. Her skills at duplicity, her easy use of charm at all levels of government, and most of all her tenacious powers of endurance had proved their mettle. But with the close of the rebellion of Mahabat Khan, Nur Jahan's role as manager of political events came to an end. She would now be forced, most reluctantly, to pass the brokering of power over to her brother, Asaf Khan, and, more particularly, to his protege the future king, Shah Jahan.

Shah Jahan had not been well-off in the last two years of Jahangir's reign. Back in the Deccan after his unsuccessful revolt against his father, he had fallen ill and had found few followers for support or security. Hearing the news of Mahabat Khan's coup, however, Shah Jahan left Ahmadnagar on June 7, 1626, and marched north through the pass of Nasik Trimbak. Although Kamgar Khan stated that Shah Jahan "resolved that he would hasten immediately to the Emperor his father" in order to save him from his abductor, most believe that the prince wanted to gain whatever advantage he could for himself out of the unsettled situation.

Although Shah Jahan had not yet chosen sides, it would eventually become clear to him that his best chances lay in an alliance with Mahabat Khan. The two were not friends—in fact, they had most recently been at opposite ends of a pursuit that had veered all over India—but Mahabat Khan was now a fugitive and his natural animosity toward the imperial court would be an especially beneficial factor to the exiled prince. Eventually, Shah Jahan would see such an alliance as eminently agreeable to both the failed minister and to himself: Mahabat Khan was an excellent soldier and an experienced courtier who, only because of circumstances, had been unable recently to exhibit the loyal qualities for which he was best known.

On the way north, Shah Jahan found it difficult to get troops together. Both Khan Jahan and Raja Nar Singh Deo made excuses when asked to join the prince, and after reaching Ajmer, where Raja Kishan Singh died, Shah Jahan saw that his men had dwindled to only about four or five hundred in number. Because "it was impossible for him to carry out his design of going to the Emperor" with so small an army, Shah Jahan resolved to go to Tatta, where he would "wait patiently for a while" in the hopes of recruiting more troops. But the route was unusually dry and barren and "his journey was attended with great hardship," and when he reached Tatta in October of 1626, he found that patient waiting was impossible. Under Sharifulmulk, the governor of the district and a devoted supporter of Shahryar's through Nur Jahan, three to four thousand cavalry and ten thousand infantry stopped Shah Jahan's progress at the gate. Though overpowering, Shahryar's forces were afraid to strike and retreated to within the city, thus encouraging some of Shah Jahan's men to attack anyway despite their prince's insistent instructions not to. Many men died in the attack, and although he knew beforehand that it was a futile siege, Shah Jahan was nevertheless "greatly affected by his ill-success."

Spurned at Tatta, Shah Jahan now thought to enlist the aid of his old friend, Shah Abbas of Persia. He wrote several letters to the shah, but none of them received a promising response; the second of the replies, in fact, made quite clear to Shah Jahan that Abbas thought the prince should lie low and submit to his father. With Persia no longer an obvious source of support, then, and still so weak and ill that "he was obliged to travel in a palki," Shah Jahan now turned around and went back through Gujarat to the Deccan. There he was warmly greeted by the son of Malik Ambar, who had taken over the government after his father died. The new ruler "received Khurram with honor and helped him with whatever he required," and in the time that followed, Shah Jahan was able to strengthen further his alliances with the noble families of the Deccan. On his way back to the Deccan, Shah Jahan received the news that his older brother Parviz had died. Suffering from intemperance, the family affliction, Parviz had succumbed "after a long illness" on October 28, 1626, at the age of thirty-eight. Jahangir's grief had been "immeasurable," for he loved deeply this son, who "was more gentle and obedient than the other sons . . . [and who had] always submissively obeyed the King's commands." Rumors persisted that Shah Jahan had had a hand in his brother's death, that "he [had] caused his second brother, Sultan Parveen, to be poisoned," but such stories by all accounts were ill-founded. Parviz's body was taken back to Agra, where it was eventually entombed in his own garden.

Parviz's death raised new questions about the future of the crown, and Nur Jahan watched as Jahangir grew increasingly "anxious as to who should succeed to the throne after his death." Shah Jahan was heartened by the news of Parviz, however, for it reduced his competition by one and left him with only Shahryar and Dawar Bakhsh, a son of Khusrau nicknamed Bulaqi, with whom to contend. About this time, as Shah Jahan proceeded toward the Deccan, Mahabat Khan began to make overtures of alliance to him. Mahabat Khan had been forced into hiding by Nur Jahan's seizure of his Bengal treasure outside of Delhi, and, as a result, had taken refuge in the forests of Mewar and had sought asylum, said De Laet, with the Rana of Udaipur. Exceedingly depressed over the death of his protege Parviz, however, and having been in the Rana's district for a while, Mahabat Khan now sought to reverse the infamy by which "his very name . . . seemed to have ceased to exist."

Furious that Nur Jahan was still harassing his nobles and taking money from them, and knowing that her pursuit of him would not abate quickly, Mahabat Khan came out of hiding and approached Shah Jahan. He sent messengers to the prince "to express his contrition," and the gamble was successful; the "Prince received his apologies kindly, called him to his presence, and treated him with great favour and kindness." After submitting to Shah Jahan—all "that I have, my treasure and my person, till I die, will be employed in your service"—Mahabat Khan was pardoned and the alliance between the two confirmed. Gifts were exchanged, and both men vowed to work companionably together from this point on to secure Shah Jahan's accession. Said Mundy: Mahabat Khan "never left him [Shah Jahan] till hee brought him to Agra where hee became King by Asaph Ckauns and this mans helpe."

~~Nur Jahan: Empress of Mughal India -by- Ellison Banks Findly

Saturday, April 23, 2016

Day 251: The Disappearing Spoon



As a child in the early 1980s, I tended to talk with things in my mouth—food, dentist’s tubes, balloons that would fly away, whatever—and if no one else was around, I’d talk anyway. This habit led to my fascination with the periodic table the first time I was left alone with a thermometer under my tongue. I came down with strep throat something like a dozen times in the second and third grades, and for days on end it would hurt to swallow. I didn’t mind staying home from school and medicating myself with vanilla ice cream and chocolate sauce. Being sick always gave me another chance to break an old-fashioned mercury thermometer, too.

Lying there with the glass stick under my tongue, I would answer an imagined question out loud, and the thermometer would slip from my mouth and shatter on the hardwood floor, the liquid mercury in the bulb scattering like ball bearings. A minute later, my mother would drop to the floor despite her arthritic hip and begin corralling the balls. Using a toothpick like a hockey stick, she’d brush the supple spheres toward one another until they almost touched. Suddenly, with a final nudge, one sphere would gulp the other. A single, seamless ball would be left quivering where there had been two. She’d repeat this magic trick over and over across the floor, one large ball swallowing the others until the entire silver lentil was reconstructed.

Once she’d gathered every bit of mercury, she’d take down the green-labeled plastic pill bottle that we kept on a knickknack shelf in the kitchen between a teddy bear with a fishing pole and a blue ceramic mug from a 1985 family reunion. After rolling the ball onto an envelope, she’d carefully pour the latest thermometer’s worth of mercury onto the pecan-sized glob in the bottle. Sometimes, before hiding the bottle away, she’d pour the quicksilver into the lid and let my siblings and me watch the futuristic metal whisk around, always splitting and healing itself flawlessly. I felt pangs for children whose mothers so feared mercury they wouldn’t even let them eat tuna. Medieval alchemists, despite their lust for gold, considered mercury the most potent and poetic substance in the universe. As a child I would have agreed with them. I would even have believed, as they did, that it transcended pedestrian categories of liquid or solid, metal or water, heaven or hell; that it housed otherworldly spirits.

Mercury acts this way, I later found out, because it is an element. Unlike water (H2O), or carbon dioxide (CO2), or almost anything else you encounter day to day, you cannot naturally separate mercury into smaller units. In fact, mercury is one of the more cultish elements: its atoms want to keep company only with other mercury atoms, and they minimize contact with the outside world by crouching into a sphere. Most liquids I spilled as a child weren’t like that. Water tumbled all over, as did oil, vinegar, and unset Jell-O. Mercury never left a speck. My parents always warned me to wear shoes whenever I dropped a thermometer, to prevent those invisible glass shards from getting into my feet. But I never recall warnings about stray mercury.

For a long time, I kept an eye out for element eighty at school and in books, as you might watch for a childhood friend’s name in the newspaper. I’m from the Great Plains and had learned in history class that Lewis and Clark had trekked through South Dakota and the rest of the Louisiana Territory with a microscope, compasses, sextants, three mercury thermometers, and other instruments. What I didn’t know at first is that they also carried with them six hundred mercury laxatives, each four times the size of an aspirin. The laxatives were called Dr. Rush’s Bilious Pills, after Benjamin Rush, a signer of the Declaration of Independence and a medical hero for bravely staying in Philadelphia during a yellow fever epidemic in 1793. His pet treatment, for any disease, was a mercury-chloride sludge administered orally. Despite the progress medicine made overall between 1400 and 1800, doctors in that era remained closer to medicine men than medical men. With a sort of sympathetic magic, they figured that beautiful, alluring mercury could cure patients by bringing them to an ugly crisis—poison fighting poison. Dr. Rush made patients ingest the solution until they drooled, and often people’s teeth and hair fell out after weeks or months of continuous treatment. His “cure” no doubt poisoned or outright killed swaths of people whom yellow fever might have spared. Even so, having perfected his treatment in Philadelphia, ten years later he sent Meriwether and William off with some prepackaged samples. As a handy side effect, Dr. Rush’s pills have enabled modern archaeologists to track down campsites used by the explorers. With the weird food and questionable water they encountered in the wild, someone in their party was always queasy, and to this day, mercury deposits dot the soil many places where the gang dug a latrine, perhaps after one of Dr. Rush’s “Thunderclappers” had worked a little too well.

Mercury also came up in science class. When first presented with the jumble of the periodic table, I scanned for mercury and couldn’t find it. It is there—between gold, which is also dense and soft, and thallium, which is also poisonous. But the symbol for mercury, Hg, consists of two letters that don’t even appear in its name. Unraveling that mystery—it’s from hydrargyrum, Latin for “water silver”—helped me understand how heavily ancient languages and mythology influenced the periodic table, something you can still see in the Latin names for the newer, superheavy elements along the bottom row.

I found mercury in literature class, too. Hat manufacturers once used a bright orange mercury wash to separate fur from pelts, and the common hatters who dredged around in the steamy vats, like the mad one in Alice in Wonderland, gradually lost their hair and wits. Eventually, I realized how poisonous mercury is. That explained why Dr. Rush’s Bilious Pills purged the bowels so well: the body will rid itself of any poison, mercury included. And as toxic as swallowing mercury is, its fumes are worse. They fray the “wires” in the central nervous system and burn holes in the brain, much as advanced Alzheimer’s disease does.

But the more I learned about the dangers of mercury, the more—like William Blake’s “Tyger! Tyger! burning bright”—its destructive beauty attracted me. Over the years, my parents redecorated their kitchen and took down the shelf with the mug and teddy bear, but they kept the knickknacks together in a cardboard box. On a recent visit, I dug out the green-labeled bottle and opened it. Tilting it back and forth, I could feel the weight inside sliding in a circle. When I peeked over the rim, my eyes fixed on the tiny bits that had splashed to the sides of the main channel. They just sat there, glistening, like beads of water so perfect you’d encounter them only in fantasies. All throughout my childhood, I associated spilled mercury with a fever. This time, knowing the fearful symmetry of those little spheres, I felt a chill.
...
From that one element, I learned history, etymology, alchemy, mythology, literature, poison forensics, and psychology. And those weren’t the only elemental stories I collected, especially after I immersed myself in scientific studies in college and found a few professors who gladly set aside their research for a little science chitchat.

~~The Disappearing Spoon: And Other True Tales of Madness... -by- Sam Kean

Friday, April 22, 2016

Day 250: Debt- The First 5000 Years



Two years ago, by a series of strange coincidences, I found myself attending a garden party at Westminster Abbey. I was a bit uncomfortable. It’s not that the other guests weren’t pleasant and amicable, and Father Graeme, who had organized the party, was nothing if not a gracious and charming host. But I felt more than a little out of place. At one point, Father Graeme intervened, saying that there was someone by a nearby fountain whom I would certainly want to meet. She turned out to be a trim, well-appointed young woman who, he explained, was an attorney—“but more of the activist kind. She works for a foundation that provides legal support for anti-poverty groups in London. You’ll probably have a lot to talk about.”

We chatted. She told me about her job. I told her I had been involved for many years with the global justice movement—“anti-globalization movement,” as it was usually called in the media. She was curious: she’d of course read a lot about Seattle, Genoa, the tear gas and street battles, but … well, had we really accomplished anything by all of that?

“Actually,” I said, “I think it’s kind of amazing how much we did manage to accomplish in those first couple of years.”

“For example?”

“Well, for example, we managed to almost completely destroy the IMF.”

As it happened, she didn’t actually know what the IMF was, so I offered that the International Monetary Fund basically acted as the world’s debt enforcers—“You might say, the high-finance equivalent of the guys who come to break your legs.” I launched into historical background, explaining how, during the ’70s oil crisis, OPEC countries ended up pouring so much of their newfound riches into Western banks that the banks couldn’t figure out where to invest the money; how Citibank and Chase therefore began sending agents around the world trying to convince Third World dictators and politicians to take out loans (at the time, this was called “go-go banking”); how they started out at extremely low rates of interest that almost immediately skyrocketed to 20 percent or so due to tight U.S. money policies in the early ’80s; how, during the ’80s and ’90s, this led to the Third World debt crisis; how the IMF then stepped in to insist that, in order to obtain refinancing, poor countries would be obliged to abandon price supports on basic foodstuffs, or even policies of keeping strategic food reserves, and abandon free health care and free education; how all of this had led to the collapse of all the most basic supports for some of the poorest and most vulnerable people on earth. I spoke of poverty, of the looting of public resources, the collapse of societies, endemic violence, malnutrition, hopelessness, and broken lives.

“But what was your position?” the lawyer asked.

“About the IMF? We wanted to abolish it.”

“No, I mean, about the Third World debt.”

“Oh, we wanted to abolish that too. The immediate demand was to stop the IMF from imposing structural adjustment policies, which were doing all the direct damage, but we managed to accomplish that surprisingly quickly. The more long-term aim was debt amnesty. Something along the lines of the biblical Jubilee. As far as we were concerned,” I told her, “thirty years of money flowing from the poorest countries to the richest was quite enough.”

“But,” she objected, as if this were self-evident, “they’d borrowed the money! Surely one has to pay one’s debts.”
...
Actually, the remarkable thing about the statement “one has to pay one’s debts” is that even according to standard economic theory, it isn’t true. A lender is supposed to accept a certain degree of risk. If all loans, no matter how idiotic, were still retrievable—if there were no bankruptcy laws, for instance—the results would be disastrous. What reason would lenders have not to make a stupid loan?

“Well, I know that sounds like common sense,” I said, “but the funny thing is, economically, that’s not how loans are actually supposed to work. Financial institutions are supposed to be ways of directing resources toward profitable investments. If a bank were guaranteed to get its money back, plus interest, no matter what it did, the whole system wouldn’t work. Say I were to walk into the nearest branch of the Royal Bank of Scotland and say ‘You know, I just got a really great tip on the horses. Think you could lend me a couple million quid?’ Obviously they’d just laugh at me. But that’s just because they know if my horse didn’t come in, there’d be no way for them to get the money back. But, imagine there was some law that said they were guaranteed to get their money back no matter what happens, even if that meant, I don’t know, selling my daughter into slavery or harvesting my organs or something. Well, in that case, why not? Why bother waiting for someone to walk in who has a viable plan to set up a laundromat or some such? Basically, that’s the situation the IMF created on a global level—which is how you could have all those banks willing to fork over billions of dollars to a bunch of obvious crooks in the first place.”
...
Still, for several days afterward, that phrase kept resonating in my head.
“Surely one has to pay one’s debts.”

The reason it’s so powerful is that it’s not actually an economic statement: it’s a moral statement. After all, isn’t paying one’s debts what morality is supposed to be all about? Giving people what is due them. Accepting one’s responsibilities. Fulfilling one’s obligations to others, just as one would expect them to fulfill their obligations to you. What could be a more obvious example of shirking one’s responsibilities than reneging on a promise, or refusing to pay a debt?

It was that very apparent self-evidence, I realized, that made the statement so insidious. This was the kind of line that could make terrible things appear utterly bland and unremarkable. This may sound strong, but it’s hard not to feel strongly about such matters once you’ve witnessed the effects. I had. For almost two years, I had lived in the highlands of Madagascar. Shortly before I arrived, there had been an outbreak of malaria. It was a particularly virulent outbreak because malaria had been wiped out in highland Madagascar many years before, so that, after a couple of generations, most people had lost their immunity. The problem was, it took money to maintain the mosquito eradication program, since there had to be periodic tests to make sure mosquitoes weren’t starting to breed again and spraying campaigns if it was discovered that they were. Not a lot of money. But owing to IMF-imposed austerity programs, the government had to cut the monitoring program. Ten thousand people died. I met young mothers grieving for lost children. One might think it would be hard to make a case that the loss of ten thousand human lives is really justified in order to ensure that Citibank wouldn’t have to cut its losses on one irresponsible loan that wasn’t particularly important to its balance sheet anyway. But here was a perfectly decent woman—one who worked for a charitable organization, no less—who took it as self-evident that it was. After all, they owed the money, and surely one has to pay one’s debts.
...
The very fact that we don’t know what debt is, the very flexibility of the concept, is the basis of its power. If history shows anything, it is that there’s no better way to justify relations founded on violence, to make such relations seem moral, than by reframing them in the language of debt—above all, because it immediately makes it seem that it’s the victim who’s doing something wrong. Mafiosi understand this. So do the commanders of conquering armies. For thousands of years, violent men have been able to tell their victims that those victims owe them something. If nothing else, they “owe them their lives” (a telling phrase) because they haven’t been killed.

Nowadays, for example, military aggression is defined as a crime against humanity, and international courts, when they are brought to bear, usually demand that aggressors pay compensation. Germany had to pay massive reparations after World War I, and Iraq is still paying Kuwait for Saddam Hussein’s invasion in 1990. Yet the Third World debt, the debt of countries like Madagascar, Bolivia, and the Philippines, seems to work precisely the other way around. Third World debtor nations are almost exclusively countries that have at one time been attacked and conquered by European countries—often, the very countries to whom they now owe money. In 1895, for example, France invaded Madagascar, disbanded the government of then–Queen Ranavalona III, and declared the country a French colony. One of the first things General Gallieni did after “pacification,” as they liked to call it then, was to impose heavy taxes on the Malagasy population, in part so they could reimburse the costs of having been invaded, but also, since French colonies were supposed to be fiscally self-supporting, to defray the costs of building the railroads, highways, bridges, plantations, and so forth that the French regime wished to build. Malagasy taxpayers were never asked whether they wanted these railroads, highways, bridges, and plantations, or allowed much input into where and how they were built. To the contrary: over the next half century, the French army and police slaughtered quite a number of Malagasy who objected too strongly to the arrangement (upwards of half a million, by some reports, during one revolt in 1947). It’s not as if Madagascar has ever done any comparable damage to France. Despite this, from the beginning, the Malagasy people were told they owed France money, and to this day, the Malagasy people are still held to owe France money, and the rest of the world accepts the justice of this arrangement. When the “international community” does perceive a moral issue, it’s usually when they feel the Malagasy government is being slow to pay their debts.

But debt is not just victor’s justice; it can also be a way of punishing winners who weren’t supposed to win. The most spectacular example of this is the history of the Republic of Haiti—the first poor country to be placed in permanent debt peonage. Haiti was a nation founded by former plantation slaves who had the temerity not only to rise up in rebellion, amidst grand declarations of universal rights and freedoms, but to defeat Napoleon’s armies sent to return them to bondage. France immediately insisted that the new republic owed it 150 million francs in damages for the expropriated plantations, as well as the expenses of outfitting the failed military expeditions, and all other nations, including the United States, agreed to impose an embargo on the country until it was paid. The sum was intentionally impossible (equivalent to about 18 billion dollars), and the resultant embargo ensured that the name “Haiti” has been a synonym for debt, poverty, and human misery ever since.

Sometimes, though, debt seems to mean the very opposite. Starting in the 1980s, the United States, which insisted on strict terms for the repayment of Third World debt, itself accrued debts that easily dwarfed those of the entire Third World combined—mainly fueled by military spending. The U.S. foreign debt, though, takes the form of treasury bonds held by institutional investors in countries (Germany, Japan, South Korea, Taiwan, Thailand, the Gulf States) that are in most cases, effectively, U.S. military protectorates, most covered in U.S. bases full of arms and equipment paid for with that very deficit spending. This has changed a little now that China has gotten in on the game (China is a special case, for reasons that will be explained later), but not very much—even China finds that the fact it holds so many U.S. treasury bonds makes it to some degree beholden to U.S. interests, rather than the other way around.

~~Debt- The First 5000 Years -by- David Graeber

Thursday, April 21, 2016

Day 249: Free Lunch



When, in July 2007, two hedge funds run by the Wall Street investment bank Bear Stearns ran into difficulty, few could have guessed at the scale of the dramatic events that would follow. The funds, which had been worth $1.5 billion at the beginning of the year, were invested in financial products linked to what quickly became the notorious American subprime market. Sub-prime loans, to US households with impaired credit histories (the joke was that they were ‘Ninja’ borrowers, with no income, no job and no assets), had been around for many years. They, however, along with adjustable rate mortgages (Arms), had expanded very rapidly from around 2003 and, more significantly, had been used as the basis for financial instruments – structured investment vehicles – sold to investors and traded between the banks. Mortgage-backed securities, as their name suggests, are financial instruments based on household mortgages. Even more sophisticated instruments, so-called credit derivatives based on those securities, ‘sliced and diced’ the original securities up even further and greatly multiplied the potential losses if there were problems with the underlying asset, the mortgage. The upshot was that if enough poor American families in Cleveland, Detroit or Fort Myers fell behind with their payments or defaulted on their mortgages the consequences would be felt by investors and banks many thousands of miles away. Think of it as an inverted pyramid resting on the unstable foundations of risky mortgages.

The Bear Stearns hedge funds were, to risk mixing metaphors, the tip of a very large iceberg, an early warning of the problems that were to follow. Even in early August 2007 after American Home Mortgage had filed for bankruptcy, most experts dismissed talk of a global financial crisis and it seemed that the problems arising from America’s subprime problems would be limited. However, it became clear that an international crisis was brewing when on 9 August the French bank BNP Paribas suspended three of its investment funds because of losses related to the US subprime market. An alarmed European Central Bank responded by pumping tens of billions of euros into Europe’s money markets.

What followed was a kind of domino effect, with banks regarded as weak or excessively dependent on wholesale money markets – rather than savers’ deposits – most heavily exposed. On 13 September 2007 it was revealed that Northern Rock, Britain’s fifth largest mortgage lender, was being supported by ‘lender of last resort’ assistance from the Bank of England. The following day saw the first run on a British bank since Overend & Gurney in 1866. (Northern Rock was eventually nationalised by Britain’s Labour government, after a five-month attempt to find a viable private-sector buyer.)

After the excitement of August and September, when money markets froze from a lack of confidence between the banks in each other, there were hopes that the worst might be over. It was, however, a vain hope. In March 2008, after months in which Wall Street investment banks and America’s other large banks had announced ever-larger write-downs and losses on their subprime-related investments, Bear Stearns was forced to sell itself at a knockdown price to competitor J. P. Morgan. The deal was only possible because it was accompanied by a $30 billion loan from the Federal Reserve, America’s central bank. Bear Stearns, founded in 1923, had been part of Wall Street’s aristocracy, surviving the infamous crash of 1929 but now unable to weather the credit crunch of 2007–8. Indeed, the problems at its hedge funds eight months earlier had first exposed the crunch; now it was a victim of it. Soon afterwards, the International Monetary Fund said that the world was facing the biggest financial shock since the Great Depression of the 1930s.

Comparisons with the Great Depression and the bank runs of the Victorian era provided confirmation that something highly unusual was happening in the global economy. Indeed, policymakers looked to Walter Bagehot, the nineteenth-century economist, social theorist and constitutional reformer, who was editor of The Economist during the run on Overend & Gurney in the 1860s. Apart from computer technology, the global nature of the crisis and the fact that every move was played out on twenty-four-hour television, very little appeared to have changed since Bagehot’s day. ‘Every great crisis reveals the excessive speculations of many houses which no one before suspected,’ he wrote in Lombard Street: A Description of the Money Market, published in 1873. And, ‘the good times too of high price almost always engender much fraud. There is a happy opportunity for ingenious mendacity. Almost everything will be believed for a little while, and long before discovery the worst and most adroit deceivers are geographically and legally beyond the reach of punishment.’ Bagehot also understood what engendered financial panics: ‘Any notion that money is not to be had, or that it may not be had at any price, only raises alarm to panic and enhances panic to madness.’ As for the way such panics could envelop even those regarding themselves as too good, or too big to fail, he comments: ‘A panic grows by what it feeds on; if it devours these second-class men shall we, the first-class, be safe?’

People turned to history for the answers because the events of 2007–8 were so unusual in the modern era. What, for example, was a credit crunch? Defined as a sudden reduction in the availability of credit and an increase in its price, this was a modern-day rarity. Recent history is littered with examples of governments or central banks deliberately restricting the flow of credit to the economy and increasing interest rates. For such a phenomenon to occur ‘naturally’ as a result of a sudden collapse of confidence in the banking and financial system was, however, different. It resulted, for example, in a 70 percent downward slide over twelve months in mortgage approvals – the number of new loans being granted – in Britain. The consequence of that extreme mortgage rationing was a dramatic drop in house prices. The discussion of Britain’s housing market and the debate over prices in Chapter Two of this book does not, you will see, even consider this possibility. While interest rates can and do rise and fall, the idea of a sudden turning off of the credit taps did not come into the debate. This was, if not uncharted territory, outside the direct experience of policymakers. The ready availability of credit had almost come to be regarded as the economic equivalent of oxygen or running water.

As comparisons with the Great Depression were made by the IMF and others, economists scurried for their reference works. J. K. Galbraith’s The Great Crash, 1929, first published in the 1950s, jumped back into the bestseller lists. Ben Bernanke, chairman of the Federal Reserve in succession to Alan Greenspan, suddenly appeared to be in the right place at the right time, as one of the foremost academic authorities on Depression-era economics. He had always argued that understanding the Depression was the most important challenge for economists, if only to prevent history from repeating itself. Mention of the Depression also brought John Maynard Keynes, who gets a chapter to himself in this book (Chapter Ten), to the fore.

~~Free Lunch: Easily Digestible Economics -by- David Smith

Wednesday, April 20, 2016

Day 248: The Bombing War



The bombing of Bulgaria recreated in microcosm the many issues that defined the wider bombing offensives during the Second World War. It was a classic example of what has come to be called ‘strategic bombing’. The definition of strategic bombing is neither neat nor precise. The term itself originated in the First World War when Allied officers sought to describe the nature of long-range air operations carried out against distant targets behind the enemy front line. These were operations organized independently of the ground campaign, even though they were intended to weaken the enemy and make success on the ground more likely. The term ‘strategic’ (or sometimes ‘strategical’) was used by British and American airmen to distinguish the strategy of attacking and wearing down the enemy home front and economy from the strategy of directly assaulting the enemy’s armed forces.

The term was also coined in order to separate independent bombing operations from bombing in direct support of the army or navy. This differentiation has its own problems, since direct support of surface forces also involves the use of bombing planes and the elaboration of target systems at or near the front whose destruction would weaken enemy resistance. In Germany and France between the wars ‘strategic’ air war meant using bombers to attack military and economic targets several hundred kilometres from the fighting front, if they directly supported the enemy’s land campaign. German and French military chiefs regarded long-range attacks against distant urban targets, with no direct bearing on the fighting on the ground, as a poor use of strategic resources. The German bombing of Warsaw, Belgrade, Rotterdam and numerous Soviet cities fits this narrower definition of strategic bombing. Over the course of the Second World War the distinction between the more limited conception of strategic air war and the conduct of long-range, independent campaigns became increasingly blurred; distant operations against enemy military, economic or general urban targets were carried out by bomber forces whose role was interchangeable with their direct support of ground operations. The aircraft of the United States Army Air Forces in Italy, for example, bombed the monastery of Monte Cassino in February 1944 in order to break the German front line, but also bombed Rome, Florence and the distant cities of northern Italy to provoke political crisis, weaken Axis economic potential and disrupt military communications. The German bombing of British targets during the summer and autumn of 1940 was designed to further the plan to invade southern Britain in September, and was thus strategic in the narrower, German sense of the term. But with the shift to the Blitz bombing from September 1940 to June 1941, the campaign took on a more genuinely ‘strategic’ character, since its purpose was to weaken British willingness and capacity to wage war and to do so without the assistance of German ground forces. For the unfortunate populations in the way of the bombing, in Italy or in Britain or elsewhere, there was never much point in trying to work out whether they had been bombed strategically or not, for the destructive effects on the ground were to all intents and purposes the same: high levels of death and serious injury, the widespread destruction of the urban landscape, the reduction of essential services and the arbitrary loss of cultural treasure. Being bombed as part of a ground campaign could, as in the case of the French port of Le Havre in September 1944 or the German city of Aachen in September and October the same year, produce an outcome considerably worse than an attack regarded as strategically independent.

In The Bombing War no sharp dividing line is drawn between these different forms of strategic air warfare, but the principal focus of the book is on bombing campaigns or operations that can be regarded as independent of immediate surface operations either on land or at sea. Such operations were distinct from the tactical assault by bombers and fighter-bombers on fleeting battlefield targets, local troop concentrations, communications, oil stores, repair depots, or merchant shipping, all of which belong more properly to the account of battlefield support aviation. This definition makes it possible to include as ‘strategic’ those operations that were designed to speed up the advance of ground forces but were carried out independently, and often at a considerable distance from the immediate battleground, such as those in Italy or the Soviet Union, or the aerial assault on Malta. However, the heart of any history of the bombing war is to be found in the major independent bombing campaigns carried out to inflict heavy damage on the enemy home front and if possible to provoke a political collapse. In all the cases where large-scale strategic campaigns were conducted – Germany against Britain in 1940–41, Britain and the United States against Germany and German-occupied Europe in 1940–45, Britain and the United States against Italian territory – there was an implicit understanding that bombing alone might unhinge the enemy war effort, demoralize the population and perhaps provoke the politicians to surrender before the need to undertake dangerous, large-scale and potentially costly amphibious operations. These political expectations from bombing are an essential element in the history of the bombing war.

The political imperatives are exemplified by the brief aerial assault on Bulgaria. The idea of what is now called a ‘political dividend’ is a dimension of the bombing war that has generally been relegated to second place behind the more strictly military analysis of what bombing did or did not do to the military capability and war economy of the enemy state. Yet it will be found that there are many examples between 1939 and 1945 of bombing campaigns or operations conducted not simply for their expected military outcome, but because they fulfilled one, or a number, of political objectives. The early bombing of Germany by the Royal Air Force in 1940 and 1941 was partly designed, for all its military ineffectiveness, to bring war back to the German people and to create a possible social and political crisis on the home front. It was also undertaken to impress the occupied states of Europe that Britain was serious about continuing the war, and to demonstrate to American opinion that democratic resistance was still alive and well. For the RAF, bombing was seen as the principal way in which the service could show its independence of the army and navy and carve out for itself a distinctive strategic niche. For the British public, during the difficult year that followed defeat in the Battle of France, bombing was one of the few visible things that could be done to the enemy. ‘Our wonderful R.A.F. is giving the Ruhr a terrific bombing,’ wrote one Midlands housewife in her diary. ‘But one thinks also of the homes from where these men come and what it means to their families.’

The political element of the bombing war was partly dictated by the direct involvement of politicians in decision-making about bombing. The bombing of Bulgaria was Churchill’s idea and he remained the driving force behind the argument that air raids would provide a quick and relatively cheap way of forcing the country to change sides. In December 1943, when the Mediterranean commanders dragged their feet over the operations because of poor weather, an irritated Churchill scribbled at the foot of the telegram, ‘I am sorry the weather is so adverse. The political moment may be fleeting.’ Three months later, while the first Bulgarian peace feelers were being put out, Churchill wrote ‘Bomb with high intensity now’, underlining the final word three times. The campaign in the Balkans also showed how casually politicians could decide on operations whose effectiveness they were scarcely in a position to judge from a strategic or operational point of view. The temptation to reach for air power when other means of exerting direct violent pressure were absent was hard to resist. Bombing had the virtues of being flexible, less expensive than other military options, and enjoying a high public visibility, rather like the gunboat in nineteenth-century diplomacy. Political intervention in bombing campaigns was a common feature during the war, culminating in the decision eventually taken to drop atomic weapons on Hiroshima and Nagasaki in August 1945. This (almost) final act in the bombing war has generated a continuing debate about the balance between political and military considerations, but it could equally be applied to other wartime contexts. Evaluating the effects of the bombing of Bulgaria and other Balkan states, it was observed that bombing possessed the common singular virtue of ‘demonstrating to their peoples that the war is being brought home to them by the United Nations’. In this sense the instrumental use of air power, recently and unambiguously expressed in the strategy of ‘Shock and Awe’, first articulated as a strategic aim at the United States National Defense University in the 1990s and applied spectacularly to Baghdad and other Iraqi cities in 2003, has its roots firmly in the pattern of ‘political’ bombing in the Second World War.

~~The Bombing War : Europe 1939–1945 -by- Richard Overy

Tuesday, April 19, 2016

Day 247: The Diversity of Life



In the Amazon basin the greatest violence sometimes begins as a flicker of light beyond the horizon. There in the perfect bowl of the night sky, untouched by light from any human source, a thunderstorm sends its premonitory signal and begins a slow journey to the observer, who thinks: the world is about to change. And so it was one night at the edge of rain forest north of Manaus, where I sat in the dark, working my mind through the labyrinths of field biology and ambition, tired, bored, and ready for any chance distraction.

Each evening after dinner I carried a chair to a nearby clearing to escape the noise and stink of the camp I shared with Brazilian forest workers, a place called Fazenda Dimona. To the south most of the forest had been cut and burned to create pastures. In the daytime cattle browsed in remorseless heat bouncing off the yellow clay and at night animals and spirits edged out onto the ruined land. To the north the virgin rain forest began, one of the great surviving wildernesses of the world, stretching 500 kilometers before it broke apart and dwindled into gallery woodland among the savannas of Roraima.

Enclosed in darkness so complete I could not see beyond my outstretched hand, I was forced to think of the rain forest as though I were seated in my library at home, with the lights turned low. The forest at night is an experience in sensory deprivation most of the time, black and silent as the midnight zone of a cave. Life is out there in expected abundance. The jungle teems, but in a manner mostly beyond the reach of the human senses. Ninety-nine percent of the animals find their way by chemical trails laid over the surface, puffs of odor released into the air or water, and scents diffused out of little hidden glands and into the air downwind. Animals are masters of this chemical channel, where we are idiots. But we are geniuses of the audiovisual channel, equaled in this modality only by a few odd groups (whales, monkeys, birds). So we wait for the dawn, while they wait for the fall of darkness; and because sight and sound are the evolutionary prerequisites of intelligence, we alone have come to reflect on such matters as Amazon nights and sensory modalities.

I swept the ground with the beam from my headlamp for signs of life, and found—diamonds! At regular intervals of several meters, intense pinpoints of white light winked on and off with each turning of the lamp. They were reflections from the eyes of wolf spiders, members of the family Lycosidae, on the prowl for insect prey. When spotlighted the spiders froze, allowing me to approach on hands and knees and study them almost at their own level. I could distinguish a wide variety of species by size, color, and hairiness. It struck me how little is known about these creatures of the rain forest, and how deeply satisfying it would be to spend months, years, the rest of my life in this place until I knew all the species by name and every detail of their lives. From specimens beautifully frozen in amber we know that the Lycosidae have survived at least since the beginning of the Oligocene epoch, forty million years ago, and probably much longer. Today a riot of diverse forms occupy the whole world, of which this was only the minutest sample, yet even these species turning about now to watch me from the bare yellow clay could give meaning to the lifetimes of many naturalists.

The moon was down, and only starlight etched the tops of the trees. It was August in the dry season. The air had cooled enough to make the humidity pleasant, in the tropical manner, as much a state of mind as a physical sensation. The storm I guessed was about an hour away. I thought of walking back into the forest with my headlamp to hunt for new treasures, but was too tired from the day's work. Anchored again to my chair, forced into myself, I welcomed a meteor's streak and the occasional courtship flash of luminescent click beetles among the nearby but unseen shrubs. Even the passage of a jetliner 10,000 meters up, a regular event each night around ten o'clock, I awaited with pleasure. A week in the rain forest had transformed its distant rumble from an urban irritant into a comforting sign of the continuance of my own species.

But I was glad to be alone. The discipline of the dark envelope summoned fresh images from the forest of how real organisms look and act. I needed to concentrate for only a second and they came alive as eidetic images, behind closed eyelids, moving across fallen leaves and decaying humus. I sorted the memories this way and that in hope of stumbling on some pattern not obedient to abstract theory of textbooks. I would have been happy with any pattern. The best of science doesn't consist of mathematical models and experiments, as textbooks make it seem. Those come later. It springs fresh from a more primitive mode of thought, wherein the hunter's mind weaves ideas from old facts and fresh metaphors and the scrambled crazy images of things recently seen. To move forward is to concoct new patterns of thought, which in turn dictate the design of the models and experiments. Easy to say, difficult to achieve.

The subject fitfully engaged that night, the reason for this research trip to the Brazilian Amazon, had in fact become an obsession and, like all obsessions, very likely a dead end. It was the kind of favorite puzzle that keeps forcing its way back because its very intractability makes it perversely pleasant, like an overly familiar melody intruding into the relaxed mind because it loves you and will not leave you. I hoped that some new image might propel me past the jaded puzzle to the other side, to ideas strange and compelling.

~~The Diversity of Life -by- Edward O. Wilson