Wednesday, September 30, 2015

Day 47 : Book Excerpt : The Silk Road In World History

From the time Eurasians started using polished stone tools to plant and harvest crops and to keep domesticated animals, they began to split into two distinct societies divided by the Tianshan, Altai, and Caucasus mountain ranges. To the fertile south, people became farmers. But on the Eurasian steppe, people continued to herd livestock such as cattle, sheep, and horses. Their herds fed in the cool mountains in summer, where the grass was lush, and were shepherded in winter to warmer valleys and plains. Each group of nomads grazed its animals according to a fixed annual pattern. However, climate changes and political conflicts with other nomads or with agricultural societies to the south often forced nomads out of their normal rounds. The movements of nomadic populations and their livestock continually threatened the settled lives of farmers, whose crops could be quickly destroyed by herds. Sometimes these displaced people and their herds moved westward in search of more fertile grasslands in western Asia and eastern Europe.

Sometime around 600 BCE, horseback riding began to spread on the Eurasian steppe, and by the 400s BCE, nomads on the northern border of the agricultural zone had learned to combine horsemanship with archery to become masters of the horse as a military machine. It is about this time, when these cavalries emerged, that our story of organized trade and communication along the steppe thoroughfares begins, for it was nomads on the Central Asian steppe who brought West and East together.

In the fifth century BCE, seven agricultural states in what is now eastern China were fighting each other for supremacy. In addition to fighting one another, three northern states, the Qin, Zhao, and Yan, also had to cope with frequent incursions of nomadic cavalry. Nomads from the steppe raided villages and towns, looting millet and wheat, the major grains of north China, and silks, which were common in China but considered rare and precious among nomads on the western steppe. Sericulture, the process of raising silkworms and extracting silk yarn, had appeared in China in the third millennium BCE; Zhou Dynasty folk songs of the early first millennium BCE frequently refer to silk weaving and textiles.

The mounted archers of the steppe had the advantage of speed and surprise. In an effort to defend themselves, the three northern states built walls along the mountain ranges to divide the agricultural and pastoral zones. Realizing the advantage of the nomads’ tactics and horsemanship, the state of Zhao, under King Wuling, reformed its army in the fourth century BCE. His troops began to master the bow and arrow and to dress in trousers and tight-sleeved robes as the nomads did. The members of his court heaped criticism on these reforms, since they considered the nomads “barbarians” and unworthy of any emulation.
...
Nevertheless, the Zhao state’s adoption of its enemies’ military practices continued and improved its defenses. Once the superiority of nomadic tactics and weaponry over the traditional horse-drawn chariots and infantry was demonstrated, other northern Chinese states followed Zhao’s example.

Such reforms increased the need for horses. The agricultural societies did not have the knowledge or the pasture to produce good horses, especially military mounts. Only the vast grassland could breed large numbers of fast, hardy horses with great endurance. Obtaining such horses was not easy. During the third century BCE, the Yuezhi, who lived in a region relatively near China, northwest of its western borders, between the northern foothills of the eastern end of the Tianshan Mountains and the Turfan Depression, had emerged as a powerful confederacy on the steppe. They maintained a friendly trading relationship with agricultural China. The minister and economist Guanzi (?–645 BCE) in his treatise on the economics of the Qi state argued that jade supplied by the Yuezhi should be the most highly valued currency of the state. “Our ancestor kings attributed the highest value to jade, as it came from a long distance. Gold is the second, and copper currency is the third.”

~~The Silk Road In World History -by- Xinru Liu

Tuesday, September 29, 2015

Day 46 : Book Excerpt : Making Starships and Stargates

Ernst Mach, an Austrian physicist of the late nineteenth and early twentieth centuries, is now chiefly known for Mach “numbers” (think the Mustang Mach 1, or the Mach 3 SR-71 Blackbird). But during his lifetime, Mach was best known for penetrating critiques of the foundations of physics. In the 1880s he published a book – The Science of Mechanics – in which he took Newton to task for a number of things that had come to be casually accepted about the foundations of mechanics – in particular, Newton’s notions of absolute space and time, and the nature of inertia, that property of real objects that causes them to resist changes in their states of motion.

Einstein, as a youngster, had read Mach’s works, and it is widely believed that Mach’s critiques of “classical,” that is, pre-quantum mechanical, physics deeply influenced him in his construction of his theories of relativity. Indeed, Einstein, before he became famous, had visited Mach in Vienna, intent on trying to convince Mach that atoms were real. (The work Einstein had done on Brownian motion, a random microscopic motion of very small particles, to get his doctoral degree had demonstrated the fact that matter was atomic). Mach had been cordial, but the young Einstein had not changed Mach’s mind.

Nonetheless, it was Mach’s critiques of space, time, and matter that had the most profound effect on Einstein. And shortly after the publication of his earliest papers on General Relativity Theory (GRT) in late 1915 and early 1916, Einstein argued that, in his words, Mach’s principle should be an explicit property of GRT. Einstein defined Mach’s principle as the “relativity of inertia,” that is, the inertial properties of material objects should depend on the presence and action of other material objects in the surrounding spacetime, and ultimately, the entire universe. Framing the principle this way, Einstein found it impossible to show that Mach’s principle was a fundamental feature of GRT. But Einstein’s insight started arguments about the “origin of inertia” that continue to this day. Those arguments can only be understood in the context of Einstein’s theories of relativity, as inertia is an implicit feature of those theories (and indeed of any theory of mechanics). Since the issue of the origin of inertia is not the customary focus of examinations of the theories of relativity, we now turn briefly to those theories with the origin of inertia as our chief concern.

Einstein had two key insights that led to his theories of relativity. The first was that if there really is no preferred reference frame – as is suggested by electrodynamics – it must be the case that when you measure the speed of light in vacuum, you always get the same number, no matter how you are moving with respect to the source of the light. When the implications of this fact for our understanding of time are appreciated, this leads to Special Relativity Theory (SRT), which, in turn, leads to a connection between energy and inertia that was hitherto unappreciated. The curious behavior of light in SRT is normally referred to as the speed of light being a “constant.” That is, whenever anyone measures the speed of light, no matter who, where, or when they are, they always get the same number – in centimeter-gram-second (cgs) units, 3 × 10^10 cm/s. Although this works for SRT, when we get to General Relativity Theory (GRT) we will find this isn’t quite right. But first we should explore some of the elementary features of SRT, as we will need them later. We leave detailed consideration of Einstein’s second key insight – the Equivalence Principle – to the following section, where we examine some of the features of general relativity theory.
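The “connection between energy and inertia” alluded to here is, in standard special relativity (my gloss, not spelled out in the excerpt), the equivalence of rest energy and inertial mass, stated compactly with the cgs value of the light speed quoted above:

    E = mc^{2}, \qquad c \approx 3 \times 10^{10}\ \mathrm{cm/s}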

Mention relativity, and the name that immediately jumps to mind is Einstein. And in your mental timescape, the turn of the twentieth century suffuses the imagery of your mind’s eye. The principle of relativity, however, is much older than Einstein. In fact, it was first articulated and argued for by Galileo Galilei in the early seventeenth century. A dedicated advocate of Copernican heliocentric astronomy, Galileo was determined to replace Aristotelian physics, which undergirded the prevailing Ptolemaic geocentric astronomy of his day, with new notions about mechanics. Galileo hoped, by showing that Aristotelian ideas on mechanics were wrong, to undercut the substructure of geocentric astronomy. Did Galileo change any of his contemporaries’ minds? Probably not. Once people think they’ve got something figured out, it’s almost impossible to get them to change their minds. As Max Planck remarked when asked if his contemporaries had adopted his ideas on quantum theory (of which Planck was the founder), people don’t change their minds – they die. But Galileo did succeed in influencing the younger generation of his day.


~~Making Starships and Stargates- The Science of Interstellar Transport and Absurdly Benign Wormholes -by- James F. Woodward

Monday, September 28, 2015

Day 45 : Book Excerpt : Intel Wars

Interviews with a dozen American and Pakistani intelligence officials have revealed that since 9/11, the CIA and the rest of the U.S. intelligence community have had what can only be described as a tempestuous, love-hate relationship with Pakistan’s powerful intelligence service, the Inter-Services Intelligence Directorate, which virtually everyone refers to by its initials, ISI.

It is not an exaggeration to say that many of the CIA’s greatest intelligence successes and failures since 9/11 stem directly from its inordinately convoluted relationship with the ISI. Even in the best of times, the U.S.-Pakistani intelligence relationship has been dogged by mutual suspicion and even open animosity, fueled to a certain degree by the strong undercurrent of anti-Americanism that pervades the ranks of the Pakistani military and intelligence services. This should come as no surprise, since the U.S. government and its policies in the Muslim world are extremely unpopular in Pakistan, with recent State Department polling data showing that 68 percent of all Pakistanis have a decidedly unfavorable view of the United States.

This has meant that every CIA chief of station in Pakistan since 9/11 has worked hard to improve his personal relationship with the director of ISI and his senior staff. The CIA chief of station spends much of his time during the workweek commuting back and forth between the fortresslike U.S. embassy in Islamabad’s diplomatic enclave (one former CIA staff officer sarcastically referred to the heavily protected area as the “ghetto of the damned”) and ISI’s headquarters, located just four miles away at the intersection of Khayaban-e-Suhrawardy Road and Service Road East.

In keeping with its penchant for secrecy, everything about the ISI is hidden from public view. The ISI’s 40-acre headquarters complex is surrounded by a ten-foot-high wall and guarded twenty-four hours a day by a contingent of elite Pakistani Army troops who not only shoo away unwanted visitors but have orders to shoot anyone foolish enough to try to enter the compound without authorization. Beyond the main gate is a circular driveway, which leads to a white multistory office building where most of ISI’s senior officials have their offices. Most of ISI’s staff work in a series of drab multistory office buildings that extend back several hundred yards from the ISI headquarters building.

But according to a Western European intelligence source, most of the ISI’s really sensitive intelligence-gathering and covert action activities, including all activities relating to Afghanistan, are run from two military bases eight miles to the south called the Hamza Camp and the Ojri Camp, both of which are hidden away behind high walls and guard towers in the city of Rawalpindi. “We’ve been trying to find out what goes on in those camps for years,” a former CIA case officer revealed in a 2010 interview, “but without much success. Everything they [the ISI] did not want us to see or hear about they hid in ’Pindi.”

The chief of ISI on 9/11, Lt. General Mahmood Ahmed, had been the bête noire of the U.S. intelligence community for years because of his overtly pro-Taliban views. At the request of the U.S. government, after 9/11 Ahmed went to Afghanistan and met with Taliban leader Mullah Mohammed Omar in Kandahar to try to stave off the U.S. invasion of Afghanistan. But evidence suggests that Ahmed instead urged the Taliban to fight. After his return from Afghanistan, CIA officials privately told Pakistani president Pervez Musharaf in no uncertain terms that they did not trust Ahmed, and that the general had to go as part of the price tag for Pakistan joining the U.S.-led war on terror. It came as no surprise to intelligence insiders when General Ahmed was abruptly and unceremoniously forced to take early retirement on October 7, 2001, only three weeks before the U.S. invasion of Afghanistan began.

His replacement, Lt. General Ehsan ul Haq, ran the ISI for three years from October 2001 until he was promoted to the position of chief of staff of the Pakistani armed forces in October 2004. General ul Haq was handpicked for the post not because he was an intelligence professional but rather because he was a close personal friend and confidant of President Musharaf.

During General ul Haq’s tenure, the top task for the CIA and ISI was to hunt down and capture or kill the remnants of al Qaeda that had fled into the wilds of northern Pakistan after the Battle of Tora Bora in December 2001. According to a half-dozen retired and current-serving CIA officials, the ISI aggressively collaborated with the CIA in going after the remnants of al Qaeda. Almost all of the senior al Qaeda officials captured since 9/11 and now biding their time behind bars at the military-run Guantánamo Bay detention facility in Cuba were captured in Pakistan during General ul Haq’s tenure in office, including Abu Zubaydah (captured March 28, 2002), Sheikh Ahmed Salim Swedan (July 11, 2002), Ramzi Bin al-Shibh (September 11, 2002), Abu Umar and Abu Hamza (January 9, 2003), and the biggest capture of them all, al Qaeda’s operations chief, Khalid Sheikh Mohammed, on March 1, 2003. “We would not have gotten any of these guys without the help of ISI,” a former senior CIA official said in an interview.

As a reward for this assistance, the CIA has secretly funneled hundreds of millions of dollars every year since 2002 to the ISI, with senior American intelligence officials confirming media reports that until 2009 the agency was directly subsidizing about one third of ISI’s annual budget, which did not include the tens of millions of dollars of training, equipment, and logistical support that the agency also provided to Pakistan. According to author Bob Woodward, in 2008 the annual CIA subsidy to the ISI amounted to a staggering $2 billion.

~~Intel Wars -by- Matthew M. Aid

Sunday, September 27, 2015

Day 44 : Representing Political Regimes in the Shrek Trilogy -Aurélie Lacassagne

The Shrek trilogy has been among the most successful animated movie series at the box office in the history of cinema. DreamWorks, the production company, decided to make the green ogre a worldwide cultural product by designing hundreds of products related to the monster. The profits of the franchise are estimated at 1.4 billion dollars (“Interview,” 2007). The first movie alone, Shrek, made a total box office of 479.2 million dollars (Hopkins, 2004: p. 33). This fact could clearly lead to insights pertaining to the political economy of film. This chapter, however, will focus on the narratives of the movies. We are interested in these narratives (in our case visual representations) and their interplay with power politics, especially race and gender conflicts. Insofar as movies partly constitute social reality, how can we interpret these visual texts? Our contention is that popular culture, including children’s movies, constitutes and represents the social world. Therefore, proposing an interpretation of these movies as texts also offers an interpretation and a representation of the world. Children (and in our case adults also) are more than just socialized by movies; the films as texts directly affect their representation of the world and participate in the constitution of the social world. As the early writers on cultural studies, such as Hall (1997), showed, popular culture is a site of struggle between the hegemonic discourse and resistance to it. The immanent divisions of our capitalist societies (in terms of class, race, and gender) are, at the same time, produced, reproduced, and contested through popular culture.
...
Because the narratives are very rich, we will focus on representations of political regimes. Indeed, the movie series depicts a number of regime types: a liberal capitalist democracy, in the form of Far Far Away; totalitarianism, as instantiated in the Kingdom of Farquaad; and finally, an individualist anarchist space— Shrek’s swamp. All of these regimes are disrupted by rebellions led by groups excluded from the established social order. The three political regimes identified are all territorially based. Space is segregated into an inside and an outside. This spatial segregation is associated with a social segregation. In the international relations literature, Andrew Linklater (1990) speaks of this “tension” between “men” and “citizens.” Citizens of a particular spatially defined political community are entitled to specific rights, while outsiders are deprived of those very rights. But even within the community of citizens there appears the logic of the “established” and the “outsiders,” to speak in Eliasian terms (Elias and Scotson, 1994). This logic often relies upon exclusion based on perceptions of gender, race, class, ethnicity, and bodies. This chapter explores how these logics of exclusion are constructed. It is divided into three sections, each describing a particular political regime.

Individual Anarchism

The first few scenes of the first movie, Shrek, open with the ogre living by himself in his swamp. The space is clearly delimited by the “décor” of the swamp; but the ogre goes further and territorially marks his space with signs to signify to others that this territory belongs to him and that no one may trespass. Two images may come to mind while watching this scene. The first is the absence of authority: Shrek lives alone in his swamp and is the sole master of his life. This points to individualist anarchism. The second, for anyone familiar with French literature and political philosophy, is the myth of le bon sauvage (the noble savage) depicted by Montaigne (1595/1960) in his Essays and by Rousseau (1754/1983) in his Discourse on the Origin of Inequality Among Men.

Individualist anarchism encompasses various conceptions. The point here is not to invoke one particular conception of this philosophy but to note that Shrek, living in his swamp, matches the spirit of individualist anarchism. There is no state, no society. Nothing seems to prevent Shrek from pursuing his self-interest. Shrek also appears very reluctant to engage in any form of social relations. One can say that he is an egoist; he represents the tradition of Max Stirner more than that of William Godwin. Shrek looks fully in control of himself—of his mind and body. Even if one can detect a sort of melancholia, he seems satisfied and happy, enjoying the calm of his swamp and the ease of his life. He eats whatever he finds around him and has arranged his shelter to his taste. He does not appear to have intellectual or spiritual concerns. As long as he can live alone in his swamp, he fully accepts the body he has; his physical appearance becomes an issue only when he enters into social transactions. This control and acceptance of his body are two key elements for the story itself as well as for his portrayal as an egoistic individualist anarchist.

~~Investigating Shrek- Power, Identity, and Ideology -eds.- Aurélie Lacassagne, Tim Nieguth, and François Dépelteau

Saturday, September 26, 2015

Day 43 : Essay Excerpt : A Fistful of Yojimbo

There are many ways to read the relationship between a film and its remake: in terms of fidelity, imitation, plagiarism, appropriation, or other enactments of power. For the most part, such models rely on a binary system to analyse the relationship between two films in isolation from their surroundings. In this chapter I wish to examine such a relationship in terms of a wider model of understanding, based on possibilities of dialogue with a wider film genre. The case study will be the relationship between Akira Kurosawa’s film Yojimbo (Yojimbo 1961) and Sergio Leone’s remake, A Fistful of Dollars (Per un Pugno di Dollari 1964). The two films themselves are very well known. Akira Kurosawa (1910–1998) made Yojimbo because he had always wanted to make a movie in the Western genre after the style of John Ford, whose movies he had seen as a child. Sergio Leone (1929–1989) was electrified by Yojimbo and made his own version starring Clint Eastwood, a relative unknown. Both films broke box-office records, inspired sequels and made huge stars of their main actors, Toshiro Mifune and Eastwood. As we shall see later in this chapter, Leone’s film has been credited with single-handedly creating a new genre in European cinema, the ‘Spaghetti Western’. Taken individually, these films had a massive impact on the Japanese and Italian film industries respectively. Both have been critically examined in terms of this impact, but, surprisingly, they are hardly ever discussed in relation to each other. When they are, critics focus on the fact that although Leone’s film was extremely close to Kurosawa’s, he failed to credit Kurosawa on the screen titles, giving rise to charges of plagiarism (Galbraith 2001: 311); or, alternatively, to analyses that compare the scripts to see how different Leone’s film was from the original (Frayling 1998: 148–150). The main approaches to the films so far have thus taken the form of ‘fidelity discourse’.

In terms of plot, both films tell the story of a nameless hero who arrives in a town being torn apart by the power struggles between two rival gangs. This so-called ‘hero’ decides to amuse himself and cause some trouble, hiring himself out to the highest bidder as a bodyguard. Both films derive humour from the mannerisms of the hero – Mifune’s ronin, or masterless samurai, far from being a noble warrior, spends all his time scratching, cursing and stuffing himself with rice and saké, while Eastwood’s cowboy smokes constantly, falls asleep instantly and hardly speaks for the duration of the film. Both men are only out for money. Both redeem themselves in a side plot, saving a young woman and returning her to her family, but the films escalate into an apocalypse of violence and death, ending with dust and smoke swirling around the empty streets of what is now a ghost town. Leone’s film reprises the same story, characters and even dialogue as Kurosawa’s film. One cannot deny that the two films are very close. However, in this chapter I wish to get away from fidelity discourse and find some way of analysing these two films that will give us a broader understanding of the relationship between them, as well as a better understanding of their combined impact on the Hollywood Western genre.
...
Both Kurosawa and Leone were innovative in their use and depiction of violence, breaking with the conventions of Japanese jidaigeki (historical costume drama) and the Hollywood Western. In terms of camerawork, Kurosawa and Leone employed existing techniques but played upon convention to produce startling results, many of which have since become staples of the Western genre.

For both the domestic and international audience, perhaps the most influential scenes in Yojimbo were the images of violence: the dog trotting past with a human hand in its mouth; a severed arm lying on the ground; Unosuke, the villain, lying in a pool of his own blood. Before 1961, blood on the Japanese screen had been the preserve of horror movies, not jidaigeki. After Yojimbo, fights which had previously been choreographed to the last detail now took on the unpredictable and explosive action of Toshiro Mifune. Nishimura argues that Mifune’s instantaneous explosions of violence had so much impact not only because they were so different to the choreography of chambara swordplay movies, but also because the primary effect for the spectator was one of powerful, emotional catharsis (Nishimura 2000: 116). However, Japanese directors were more captivated by the violence itself than its emotional effects. Japanese cinema became even more bloody after the sequel to Yojimbo, Sanjuro (Sanjuro 1962), featured a geyser of blood erupting from the villain’s chest. Such graphic violence influenced jidaigeki to the extent that it degenerated into the zankoku eiga or ‘cruel film’ genre, seen in many Tōei and Tōhō productions of 1962–1963 (Nishimura 2000: 117–118; Yoshimoto 2000: 290–291). Yojimbo opened in art-house cinemas in the United States in 1962 to mixed reviews. Seneca International picked it up for wider distribution, adding English subtitles and later releasing a dubbed version. The same violence which so influenced Japanese jidaigeki made a great impact on Hollywood directors. However, Hollywood was not shocked by the blood in Yojimbo so much as impressed by Kurosawa’s intelligent handling of violence. Arthur Penn later used Kurosawa’s technique of interspersing slow motion with normal speed by using multicamera filming to achieve the climactic violent ending of Bonnie and Clyde (1967). It may be argued that the impact of Kurosawa’s violence on Hollywood was the exact opposite of its impact on jidaigeki, as American directors were more interested in the emotional effects. Kurosawa’s beautiful and shocking images stay in the mind because they emphasise the horror of the brutality behind them. Leone’s film was also violent for the time, featuring more blood and realistic death throes than would be expected from either the Hollywood or European Western. By the time that audiences had seen both Yojimbo and A Fistful of Dollars, other directors were also attempting more realistic gunfights, although few were to attempt the scale of Leone’s massacre by the river. While Leone also used the shock of explosive action, his fight scenes were effected differently: the unbearable tension of the drawn-out standoff was to become a Leone staple and classic feature of the Hollywood Western.


~~Essay - A Fistful of Yojimbo: Appropriation and Dialogue in Japanese Cinema -by- Rachael Hutchinson from the book World Cinema’s ‘Dialogues’ with Hollywood

Friday, September 25, 2015

Day 42 : Book Excerpt : The Strides of Vishnu

One evening in early August 1943, Brigadier General Mortimer Wheeler was resting in his tent after a long day of poring over maps, drawing up plans for the invasion of Sicily. Wheeler was a tall, rugged-looking man who sported a bushy moustache in the fashion of English officers of his time. Through the open flap of his tent, he spotted the corps commander, General Sir Brian Horrock, hurrying across the encampment, waving a telegram in his hand. Barely concealing his excitement, Horrock handed the telegram to Wheeler and exclaimed: "I say, have you seen this—they want you as [reading] ‘Director General of archaeology in India!’—Why, you must be rather a king-pin at this sort of thing! You know, I thought you were a regular soldier!"

Thus in the hot Algerian evening, with his eyes cast across the Mediterranean on the historic battle ahead, Mortimer Wheeler begins his heroic autobiographical narrative about archaeology in India. The moment in the desert reads as both trivial and momentous: the redirected career of one British officer ushers in a new era for Indian archaeology. Of course, the general noted, he could not leave his post before the invasion. He finally boarded his ship— the City of Exeter— to join a convoy of allied ships headed east in February 1944.

Mortimer Wheeler had been invited to become the director general of archaeology by the India Office of the British government in its last years of rule in South Asia, and by the viceroy of India (Lord Wavell), who governed on behalf of the Crown in Delhi. Summoning a general from the battlefields of Europe was an extraordinary measure, an admission both of the desperate condition of Indian archaeology and an acknowledgment of its vital importance. By the 1940s, India had distinguished itself as one of the great archaeological locations in the world, along with Greece, Egypt, and Mesopotamia. A succession of eminent archaeologists preceded Wheeler at the directorship of Indian archaeology, even before its official founding in 1871, when Alexander Cunningham became its first official director. The renowned scientists who followed Cunningham at the post included James Burgess, John Marshall, N. G. Majumdar, and K. N. Dikshit. These men— and many others— had supervised some of the most remarkable discoveries in archaeological science and brought India its prestige as a storehouse of great historical treasures.

Cunningham’s colleague James Prinsep not only discovered the famous rock edict of King Ashoka at Dhauli, Orissa, he was the man who between 1834 and 1837 deciphered the Brahmi and Kharoshthi scripts in which the Indian king had his pronouncements written down. This achievement was critical in establishing a firm toehold for dating in India’s history, which had been notoriously lacking in datable evidence. Decades later, it was John Marshall who excavated the Indus River cities of Harappa and Mohenjodaro and pushed back the age of Indian civilizations to the early centuries of the third millennium BCE— contemporary with the Nile and Mesopotamian civilizations and far earlier than Greece. It was Marshall, too, who excavated Taxila— the great Indian Hellenistic center in northwest India and the first place on Mortimer Wheeler’s itinerary as he set out to survey his vast new realm.

The new director took the Frontier Mail train from Bombay to Delhi and from there to Rawalpindi— the British military base in what was then called the North-west Frontier, the northern region of Punjab and Kashmir. Taxila, or Takshasila, was a further 20 miles from the city, in a valley bounded by the massive Himalaya range. Wheeler described a valley covered with yellow mustard seed and flooded with sunlight as he arrived. As he surveyed the beautiful scene and the long-neglected archaeological dig at the four sites of Taxila, Wheeler knew that it was time to fix Indian archaeology and that Taxila was the perfect place to launch his campaign. After all, it was here that the young Macedonian king, Alexander, had begun his own conquest of the lands of the Indus River and its six tributaries in 327 BCE— the anchor date for Indian historiography. But it was Taxila, too, that made Wheeler conscious of what exactly ailed Indian archaeology and what had to be done.

Despite impressive early discoveries, Indian archaeology suffered from a number of serious flaws. Too much work was invested in uncovering spectacular objects, "treasures of the past," which then found their way into  museums. Monuments— especially religious objects such as Buddhist stupas, Hindu temples, statues, and artwork—were highly prized, drawing both attention and money for exploration and preservation. Though valuable and inspiring, such archaeological work contributed far too little to the scientific reconstruction of past cultures in their contemporary and sequential settings. Worse, Marshall, who had focused on a small number of prestigious digs, failed to apply the principle of stratification to his work, opting instead for what Wheeler contemptuously called the bench-level method. The cure for this, Wheeler insisted, was stratigraphy.


~~The Strides of Vishnu - Hindu Culture in Historical Perspective -by- Ariel Glucklich

Thursday, September 24, 2015

Day 41: Book Excerpt: Countdown to Zero Day- Stuxnet and the launch of the World's First Digital Weapon

It was January 2010 when officials with the International Atomic Energy Agency (IAEA), the United Nations body charged with monitoring Iran’s nuclear program, first began to notice something unusual happening at the uranium enrichment plant outside Natanz in central Iran.

Inside the facility’s large centrifuge hall, buried like a bunker more than fifty feet beneath the desert surface, thousands of gleaming aluminum centrifuges were spinning at supersonic speed, enriching uranium hexafluoride gas as they had been for nearly two years. But over the last weeks, workers at the plant had been removing batches of centrifuges and replacing them with new ones. And they were doing so at a startling rate.

At Natanz each centrifuge, known as an IR-1, has a life expectancy of about ten years. But the devices are fragile and prone to break easily. Even under normal conditions, Iran has to replace up to 10 percent of the centrifuges each year due to material defects, maintenance issues, and worker accidents.

In November 2009, Iran had about 8,700 centrifuges installed at Natanz, so it would have been perfectly normal to see technicians decommission about 800 of them over the course of the year as the devices failed for one reason or another. But as IAEA officials added up the centrifuges removed over several weeks in December 2009 and early January, they realized that Iran was plowing through them at an unusual rate.

Inspectors with the IAEA’s Department of Safeguards visited Natanz an average of twice a month—sometimes by appointment, sometimes unannounced—to track Iran’s enrichment activity and progress. Anytime workers at the plant decommissioned damaged or otherwise unusable centrifuges, they were required to line them up in a control area just inside the door of the centrifuge rooms until IAEA inspectors arrived at their next visit to examine them. The inspectors would run a handheld gamma spectrometer around each centrifuge to ensure that no nuclear material was being smuggled out in them, then approve the centrifuges for removal, making note in reports sent back to IAEA headquarters in Vienna of the number that were decommissioned each time.

IAEA digital surveillance cameras, installed outside the door of each centrifuge room to monitor Iran’s enrichment activity, captured the technicians scurrying about in their white lab coats, blue plastic booties on their feet, as they trotted out the shiny cylinders one by one, each about six feet long and about half a foot in diameter. The workers, by agreement with the IAEA, had to cradle the delicate devices in their arms, wrapped in plastic sleeves or in open boxes, so the cameras could register each item as it was removed from the room.

The surveillance cameras, which weren’t allowed inside the centrifuge rooms, stored the images for later perusal. Each time inspectors visited Natanz, they examined the recorded images to ensure that Iran hadn’t removed additional centrifuges or done anything else prohibited during their absence. But as weeks passed and the inspectors sent their reports back to Vienna, officials there realized that the number of centrifuges being removed far exceeded what was normal.

Officially, the IAEA won’t say how many centrifuges Iran replaced during this period. But news reports quoting European “diplomats” put the number at 900 to 1,000. A former top IAEA official, however, thinks the actual number was much higher. “My educated guess is that 2,000 were damaged,” says Olli Heinonen, who was deputy director of the Safeguards Division until he resigned in October 2010.

Whatever the number, it was clear that something was wrong with the devices. Unfortunately, Iran wasn’t required to tell inspectors why they had replaced them, and, officially, the IAEA inspectors had no right to ask. The agency’s mandate was to monitor what happened to uranium at the enrichment plant, not keep track of failed equipment.

What the inspectors didn’t know was that the answer to their question was right beneath their noses, buried in the bits and memory of the computers in Natanz’s industrial control room. Months earlier, in June 2009, someone had quietly unleashed a destructive digital warhead on computers in Iran, where it had silently slithered its way into critical systems at Natanz, all with a single goal in mind—to sabotage Iran’s uranium enrichment program and prevent President Mahmoud Ahmadinejad from building a nuclear bomb.

The answer was there at Natanz, but it would be nearly a year before the inspectors would obtain it, and even then it would come only after more than a dozen computer security experts around the world spent months deconstructing what would ultimately become known as one of the most sophisticated viruses ever discovered—a piece of software so unique it would make history as the world’s first digital weapon and the first shot across the bow announcing the age of digital warfare.

~~Countdown to Zero Day- Stuxnet and the launch of the World's First Digital Weapon -by- Kim Zetter

Wednesday, September 23, 2015

Day 40: Book Excerpt: The Information


Solomonoff, Kolmogorov, and Chaitin tackled three different problems and came up with the same answer. Solomonoff was interested in inductive inference: given a sequence of observations, how can one make the best predictions about what will come next? Kolmogorov was looking for a mathematical definition of randomness: what does it mean to say that one sequence is more random than another, when they have the same probability of emerging from a series of coin flips? And Chaitin was trying to find a deep path into Gödel incompleteness by way of Turing and Shannon—as he said later, “putting Shannon’s information theory and Turing’s computability theory into a cocktail shaker and shaking vigorously.” They all arrived at minimal program size. And they all ended up talking about complexity.

The following bitstream (or number) is not very complex, because it is rational:

D: 14285714285714285714285714285714285714285714285714…


It may be rephrased concisely as “PRINT 142857 AND REPEAT,” or even more concisely as “1/7.” If it is a message, the compression saves keystrokes. If it is an incoming stream of data, the observer may recognize a pattern, grow more and more confident, and settle on one-seventh as a theory for the data.
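As a small illustration (my sketch, not Gleick’s), a few lines of Python regenerate the long stream D from the far shorter description “1/7”:

    from decimal import Decimal, getcontext

    # A minimal sketch (not from the book): the long stream D can be
    # regenerated from the much shorter description "1/7".
    getcontext().prec = 60                      # how many decimal digits to produce
    digits = str(Decimal(1) / Decimal(7))[2:]   # drop the leading "0."
    print(digits)                               # 142857142857142857...

The description stays a handful of characters long while the stream it expands into can be made as long as we please, which is the sense in which the sequence is not very complex.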

In contrast, this sequence contains a late surprise:

E: 10101010101010101010101010101010101010101010101013


The telegraph operator (or theorist, or compression algorithm) must pay attention to the whole message. Nonetheless, the extra information is minimal; the message can still be compressed, wherever pattern exists. We may say it contains a redundant part and an arbitrary part.

It was Shannon who first showed that anything nonrandom in a message allows compression:

F: 101101011110110110101110101110111101001110110100111101110


Heavy on ones, light on zeroes, this might be emitted by the flip of a biased coin. Huffman coding and other such algorithms exploit statistical regularities to compress the data. Photographs are compressible because of their subjects’ natural structure: light pixels and dark pixels come in clusters; statistically, nearby pixels are likely to be similar; distant pixels are not. Video is even more compressible, because the differences between one frame and the next are relatively slight, except when the subject is in fast and turbulent motion. Natural language is compressible because of redundancies and regularities of the kind Shannon analyzed. Only a wholly random sequence remains incompressible: nothing but one surprise after another.
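To make the idea concrete, here is a rough sketch (mine, not Gleick’s) of Huffman coding applied to the biased stream F above. Grouping the bits into three-bit blocks, the common one-heavy blocks receive short code words and the rare blocks receive long ones, so the coded message comes out shorter than the original (ignoring the cost of transmitting the code table):

    import heapq
    from collections import Counter

    def huffman_code(freqs):
        """Build a Huffman code (symbol -> bitstring) from a frequency table."""
        # Heap entries: [weight, tiebreak index, partial symbol-to-code map].
        heap = [[weight, i, {sym: ""}] for i, (sym, weight) in enumerate(freqs.items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
        return heap[0][2]

    # Bit string F from the text: heavy on ones, light on zeroes.
    F = "101101011110110110101110101110111101001110110100111101110"
    blocks = [F[i:i + 3] for i in range(0, len(F) - 2, 3)]   # three-bit blocks
    code = huffman_code(Counter(blocks))
    encoded = "".join(code[b] for b in blocks)
    print(3 * len(blocks), "source bits ->", len(encoded), "coded bits")

A truly random, unbiased stream would defeat the scheme: every block would be about equally common, every code word about three bits long, and nothing would be saved.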
...
Even π retains some mysteries:

C: 3.1415926535897932384626433832795028841971693993751…


The world’s computers have spent many cycles analyzing the first trillion or so known decimal digits of this cosmic message, and as far as anyone can tell, they appear normal. No statistical features have been discovered—no biases or correlations, local or remote. It is a quintessentially nonrandom number that seems to behave randomly. Given the nth digit, there is no shortcut for guessing the nth plus one. Once again, the next bit is always a surprise.

How much information, then, is represented by this string of digits? Is it information rich, like a random number? Or information poor, like an ordered sequence?

The telegraph operator could, of course, save many keystrokes—infinitely many, in the long run—by simply sending the message “π.” But this is a cheat. It presumes knowledge previously shared by the sender and the receiver. The sender has to recognize this special sequence to begin with, and then the receiver has to know what π is, and how to look up its decimal expansion, or else how to compute it. In effect, they need to share a code book.

This does not mean, however, that π contains a lot of information. The essential message can be sent in fewer keystrokes. The telegraph operator has several strategies available. For example, he could say, “Take 4, subtract 4/3, add 4/5, subtract 4/7, and so on.” The telegraph operator sends an algorithm, that is. This infinite series of fractions converges slowly upon π, so the recipient has a lot of work to do, but the message itself is economical: the total information content is the same no matter how many decimal digits are required.
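The operator’s recipe is the familiar Leibniz series for π; a short sketch (mine, not Gleick’s) shows both how brief the algorithm is and how slowly it converges:

    # A minimal sketch (not from the book) of the telegraph operator's algorithm:
    # the Leibniz series 4 - 4/3 + 4/5 - 4/7 + ... converges, slowly, to pi.
    def leibniz_pi(terms):
        total = 0.0
        for k in range(terms):
            total += (-1) ** k * 4.0 / (2 * k + 1)
        return total

    for n in (10, 1_000, 100_000):
        print(n, leibniz_pi(n))   # creeps toward 3.14159... as n grows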

The issue of shared knowledge at the far ends of the line brings complications. Sometimes people like to frame this sort of problem—the problem of information content in messages—in terms of communicating with an alien life-form in a faraway galaxy. What could we tell them? What would we want to say? The laws of mathematics being universal, we tend to think that π would be one message any intelligent race would recognize. Only, they could hardly be expected to know the Greek letter. Nor would they be likely to recognize the decimal digits “3.1415926535 …” unless they happened to have ten fingers.

~~The Information -by- James Gleick

Tuesday, September 22, 2015

Day 39: Book Excerpt: The Arab Spring

To what extent is the notion of ‘revolution’ appropriate in reading the current uprisings in the Arab world — and do these revolutions perhaps posit and purpose a new language for reading them that accords to them the primacy of authoring their own meaning?

Hannah Arendt’s On Revolution (1963) — a comparative study of the American (1776) and the French (1789) revolutions — is usually read and interpreted as a critical rebuttal of Marxist thinking on revolution by way of pitting what she believed to be the success of the American Revolution against the failure of the French Revolution. Arendt’s criticism of the French Revolution is that the economic plight of the French masses distracted the revolutionaries from the more pertinent (to Arendt) legal stability and political purpose of the revolution. Those economic needs, she thought, were in fact regenerative and insatiable and thus derailed the revolutionary course from its political purpose, the opening of the public space for a wider, more effective, more inclusive participation of citizens. Arendt (1906–1975) was not that sanguine about the American Revolution either. She thought that it had stayed the course of constitutional guarantees of political rights but that it had become so ossified that the majority of Americans did not in fact participate in the political process.

Arendt’s primary concern was to posit the political possibility of maximum public participation, minus the chaotic anarchy that she associated with socialist revolutions — revolutions that thought of themselves as recommencing the advent of history. Arendt argued that the modern conception of revolution as ‘the course of history suddenly begins anew’ was entirely unknown before the French and American revolutions. Instead, she made a crucial distinction, in her reading of revolution, between liberty and freedom. Liberty she defined as freedom from unjustified restraint, freedom as the ability to participate in public affairs, a purposeful expansion of the public space for political participation. Using the French and the American revolutions as her model, she proposed that initially revolutions had a restorative force to them but that in the course of events something of an epistemic violence occurs in the revolutionary uprising. It was in the aftermath of the French Revolution in particular, she thought, that the very idea of ‘revolution’ assumed its radical, contemporary, and enduring disposition. It was wrong for the French revolutionaries to forget, she thought, that their task was merely to liberate people from oppression so that they could find freedom, and not to address the unending (as she saw it) economic scarcity and poverty. It was futile and even dangerous for the revolutionaries to imagine they could find a political solution to economic deprivation. The advantage of the American Revolution, she believed, was that it left the economic issues at the door of the constitutional assembly. In a chapter titled ‘Constitutio Libertatis’ she praises American revolutionaries for their consensus view that the principal aim of the revolution was the constitution of freedom and the foundation of a republic.
...
My reading of the Arab Spring offers the idea of an ‘open-ended’ revolt as a way of coming to terms with the dynamics of these unfolding dramatic events — reading them more as a novel than an epic. No national hero such as Jawaharlal Nehru, Gamal Abd al-Nasser, or Mohammad Mosaddegh will emerge from these revolutions — and how fortunate that is, for it was precisely in the shadow of those heroes that tyrants like Muammar Gaddafi, Hafiz al-Assad, and Ayatollah Khomeini grew. To see the events as an open-ended course of revolutionary uprisings, we need to decipher the new revolutionary language — concepts, ideas, aspirations, imagination — with which people talk about their revolutions, so that events are not assimilated retrogressively to the false assumptions of Islamism, nationalism, or socialism, or even, conversely, translated into the tired old clichés of Orientalism, as we have understood these to date. The task we face is to recognize the inaugural moment of these revolutionary uprisings and thus be able to read them in the language that they exude and not in the vocabularies we have inherited. Even the sacrosanct idea of ‘democracy’ now needs to be rethought, and if need be reinvented. No justification is required for such reconsideration. The world has many democracies, but both within and outside those democracies misery abounds — and the fragile peace and prosperity enjoyed by some living within these democracies is very much contingent on conditions that entail and indeed sustain others’ misery.

~~The Arab Spring -by- Hamid Dabashi

Monday, September 21, 2015

Day 38: Book Excerpt: The Unwanted Sound of Everything we Want


One of my favorite depictions of noise comes from Kiran Desai’s 2006 novel, The Inheritance of Loss. Desai is a master of description who can create an unforgettable image with just a sentence or two; in her hands the graffiti inside a gum-studded Manhattan phone booth becomes “the sick sweet rotting mulch of the human heart.” When her character Biju, a young Indian immigrant, encounters New York taxicabs, she writes,

    They harassed Biju with such blows from their horns as could split the world into whey and solids: paaaaaawww!

Obviously Desai does not require typographical gimmicks to create vivid impressions. Having those taxi horns “split the world into whey and solids” is impressive enough. Nevertheless, she chooses to break up the uniformity of her typeface and to have one nonword stand out amid scores of carefully wrought sentences, outrageously demanding our attention, because that is exactly what noise does.

Of course, noise does not have to be loud to have that effect. Harold Pinter’s darkly comic play The Homecoming contains a passage about the ticking of a clock during a sleepless night. Says his character Lenny:

    “All sorts of objects, which, in the day, you wouldn’t call anything but commonplace. They give you no trouble. But in the night any given one of a number of them is liable to start letting out a bit of a tick.”

It’s possible Lenny suffers from hyperacusis, a condition in which certain sounds are perceived as painfully loud, though he has other troubles to keep him on edge. It’s also possible that someone else would be reassured by the ticking. In a song by rock group Death Cab for Cutie, comfort comes from the sound of a leaky faucet.

Noise does not even have to originate from an acoustical source. If Lenny was one of the millions of people who suffer from tinnitus (50 million in the United States alone, of whom at least 12 million have symptoms serious enough to require medical intervention), he might hear a ringing or buzzing in his ears, or a sound like crickets, a constant hiss, or an unceasing roar. He might hear it even if he were deaf.

Some people are not satisfied with calling noise “unwanted sound.”

One of them is Les Blomberg, founder and director of the Noise Pollution Clearinghouse in Montpelier, Vermont. Out of his small two-person office, to which he travels each day by bike, Blomberg maintains what is probably the largest accessible noise-related database in the world. For Blomberg noise is best defined by the name of his organization: It’s a pollutant. “Do we define air pollution as ‘unwanted particulates’?” he once asked me. On another occasion, he said that if he could go back and name his organization all over again, he’d get rid of the word noise.

With degrees in both physics and philosophy, Les Blomberg is the first person who helped me to understand noise as more than an annoyance. Though Blomberg’s interest in noise began when he was awakened by garbage trucks emptying dumpsters in his neighborhood at 4:00 in the morning, he claims not to be among the 12 to 15 percent of the general population who are acutely noise sensitive. For him noise is not “personal” the way it is for many anti-noise activists, but it is serious—too serious to be defined as “unwanted sound.”

Defining noise in this way is relatively new, Blomberg told me. It dates from the early decades of the twentieth century, when scientists and engineers were developing the electronic communication devices that would determine so much of our modern acoustic environment. (For a history of this period he referred me to Emily Thompson’s fascinating The Soundscape of Modernity, 1900–1933.) To these experts, noise was primarily interference, static. It was a technical problem rather than a health issue or a social injustice. Ironically, this highly technical agenda gave us what Blomberg regards as an overly subjective definition. “Do we really want desire in science?”

To make his point, Blomberg gave the illustration of a kid who loses some of his hearing at a rock concert, something people have been doing in spite of repeated warnings for well over a generation. Rock concerts can reach sound levels in excess of 120 decibels, the equivalent of a jet at takeoff. (By way of comparison, the Occupational Safety and Health Administration requires that hearing protection be worn by workers with prolonged exposure to sounds exceeding 85 dB.) Most of us would say that the kid in Blomberg’s example was partially deafened by noise. But can we say that he was deafened by “unwanted sound” when he wanted to go to the concert, paid a lot of money to go, and may also have wanted it to be loud?
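For scale (my back-of-the-envelope arithmetic, not the author’s): decibels are logarithmic, so the 35 dB gap between the 120 dB concert and OSHA’s 85 dB threshold corresponds to a sound-intensity ratio of

    I_{120} / I_{85} = 10^{(120 - 85)/10} = 10^{3.5} \approx 3160,

more than three thousand times the intensity at which workplace hearing protection is already required.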

Probably he wants his MP3 player to be loud as well, a preference that has been blamed for contributing to the hearing losses of some 5 million children in the United States. As of now, one American child in eight has noise-induced hearing loss. Effects like these are trivialized, in Blomberg’s view, when we define noise in terms of desire.

~~The Unwanted Sound of Everything we Want -by- Garret Keizer

Sunday, September 20, 2015

Day 37: Book Excerpt: Willpower- Rediscovering The Greatest Human Strength

As psychologists were identifying the benefits of self-control, anthropologists and neuroscientists were trying to understand how it evolved. The human brain is distinguished by large and elaborate frontal lobes, giving us what was long assumed to be the crucial evolutionary advantage: the intelligence to solve problems in the environment. After all, a brainier animal could presumably survive and reproduce better than a dumb one. But big brains also require lots of energy. The adult human brain makes up 2 percent of the body but consumes more than 20 percent of its energy. Extra gray matter is useful only if it enables an animal to get enough extra calories to power it, and scientists didn’t understand how the brain was paying for itself. What, exactly, made ever-larger brains with their powerful frontal lobes spread through the gene pool?

One early explanation for the large brain involved bananas and other calorie-rich fruits. Animals that graze on grass don’t need to do a lot of thinking about where to find their next meal. But a tree that had perfectly ripe bananas a week ago may be picked clean today or may have only unappealing, squishy brown fruits left. A banana eater needs a bigger brain to remember where the ripe stuff is, and the brain could be powered by all the calories in the bananas, so the “fruit-seeking brain theory” made lots of sense—but only in theory. The anthropologist Robin Dunbar found no support for it when he surveyed the brains and diets of different animals. Brain size did not correlate with the type of food. Dunbar eventually concluded that the large brain did not evolve to deal with the physical environment, but rather with something even more crucial to survival: social life. Animals with bigger brains had larger and more complex social networks. That suggested a new way to understand Homo sapiens. Humans are the primates who have the largest frontal lobes because we have the largest social groups, and that’s apparently why we have the most need for self-control. We tend to think of willpower as a force for personal betterment—adhering to a diet, getting work done on time, going out to jog, quitting smoking—but that’s probably not the primary reason it evolved so fully in our ancestors. Primates are social beings who have to control themselves in order to get along with the rest of the group. They depend on one another for the food they need to survive. When the food is shared, often it’s the biggest and strongest male who gets first choice in what to eat, with the others waiting their turn according to status. For animals to survive in such a group without getting beaten up, they must restrain their urge to eat immediately. Chimpanzees and monkeys couldn’t get through meals peacefully if they had squirrel-sized brains. They might expend more calories in fighting than they’d consume at the meal.

Although other primates have the mental power to exhibit some rudimentary etiquette at dinner, their self-control is still quite puny by human standards. Experts surmise that the smartest nonhuman primates can mentally project perhaps twenty minutes into the future—long enough to let the alpha male eat, but not long enough for much planning beyond dinner. (Some animals, like squirrels, instinctively bury food and retrieve it later, but these are programmed behaviors, not conscious savings plans.) In one experiment, when monkeys were fed only once a day, at noon, they never learned to save food for the future. Even though they could take as much as they wanted during the noon feeding, they would simply eat their fill, either ignoring the rest or wasting it by getting into food fights with one another. They’d wake up famished every morning because it never occurred to them to stash some of their lunch away for an evening snack or breakfast.

Humans know better thanks to the large brain that developed in our Homo ancestors two million years ago. Much of self-control operates unconsciously. At a business lunch, you don’t have to consciously restrain yourself from eating meat off your boss’s plate. Your unconscious brain continuously helps you avoid social disaster, and it operates in so many subtly powerful ways that some psychologists have come to view it as the real boss. This infatuation with unconscious processes stems from a fundamental mistake made by researchers who keep slicing behavior into thinner and briefer units, identifying reactions that occur too quickly for the conscious mind to be directing. If you look at the cause of some movement in a time frame measured in milliseconds, the immediate cause will be the firing of some nerve cells that connect the brain to the muscles. There is no consciousness in that process. Nobody is aware of nerve cells firing. But the will is to be found in connecting units across time. Will involves treating the current situation as part of a general pattern. Smoking one cigarette will not jeopardize your health. Taking heroin once will not make you addicted. One piece of cake won’t make you fat, and skipping one assignment won’t ruin your career. But in order to stay healthy and employed, you must treat (almost) every episode as a reflection of the general need to resist these temptations. That’s where conscious self-control comes in, and that’s why it makes the difference between success and failure in just about every aspect of life.

~~Willpower- Rediscovering The Greatest Human Strength -by- Roy F. Baumeister and John Tierney

Saturday, September 19, 2015

Day 36: Book Excerpt: Vagina- A New Biography

Let’s look back again at the 1970s, where the feminism of a Betty Dodson and a Shere Hite, and the market opportunity grabbed by Hugh Hefner and his fellow pornographers in the following decades, “set” our model in the West of female sexuality.

This model of the feminist vulva and vagina—joined eventually by pornography’s elaboration of this model—was the one that was formative for women of my generation. The vagina and vulva were primarily understood as mediating sexual pleasure. What was important was technique—one’s own masturbatory technique, and the skills one taught to a partner. Feminists and pornographers alike defined the vagina and vulva in terms of the mechanics of orgasm.

But while technique is important, this model leaves a great deal out of the “meaning” of the vagina and vulva. It leaves out the connections to the vagina of spirituality and poetry, art and mysticism, and the context of a relationship in which orgasm may or may not be taking place. It certainly leaves behind the larger question of the quality of a masturbating woman’s relationship to herself.

The Dodson model of the empowered female did a great deal of good, but also caused some harm. The good is that feminism of that era had to break the association of heterosexual female sexual awakening with dependency on a man. The harm is that the feminism of this era successfully broke the association of heterosexual female sexual awakening with dependency on a man. “A woman needs a man like a fish needs a bicycle,” as one seventies-era feminist bumper sticker insisted. The feminist model of heterosexuality—that straight women can fuck like men, or get by with a great vibrator and no other attention to self-love, and be simply instrumentalist about their pleasure—turned out to have created a new set of impossible ideals, foisted, if through the best of intentions, upon “liberated” women. Feminism has evaded the far more difficult question of how to be a liberated heterosexual woman and how to acknowledge deep physical needs for connection with men. As nature organized things, we ideally have a partner in the dance. If we don’t have a partner, there is attention we should give to self-love as self-care. It does not solve straight women’s existential dilemma, the tension between our dependency needs and our needs for independence, simply to declare that the dance has changed.

The harm of this model of female sexuality is that it reaffirms a fractured, commercialized culture’s tendency to see people, including “sexually liberated women,” as isolated, self-absorbed units, and to see pleasure as something one needs to acquire the way one acquires designer shoes, rather than as a medium of profound intimacy with another, or with one’s self, or as a gateway to a higher, more imaginative, fully realized dimension that includes and affects all aspects of one’s life.

Recent data collected in 2009 by sociologist Marcus Buckingham, drawn from multicountry surveys, show that Western women report lower and lower levels of happiness and satisfaction, even as their freedoms and options have grown, relative to men. Both feminists and antifeminist commentators sought to find answers for this broadly confirmed trend: feminists sought to argue that it was inequality or wage differences in the workplace and the “second shift” at home—but the surveys were adjusted to account for sex discrimination. Antifeminist commentators argued, of course, that this was all the fault of feminism, making women seek fulfillment in professional spheres unnatural to them.

I think it is very possible, judging from the tremendous amount of data we have seen about what women need psychologically, which they are generally not getting, that they are saying they are dissatisfied because the “available models of sexuality”—the post-Dodson, post-Hefner, post-porn, married, two-career, hurried, or young and single drunk-with-a-stranger-in-a-bar-or-dorm-room models—are, long term, just plain physically untenable. These models of female sexuality—left to us by a combination of pressures ranging from an incomplete development of feminism in the 1970s, to a marketplace that likes us overemployed and undersexed, to the speeding up of sexual pacing set by pornography—doom women eventually to emotional strain caused by physiological strain. These models of female sexuality are simply extremely physically, emotionally, and existentially unsatisfying. (This model of sex may well doom Western heterosexual men in other ways, deserving of their own book.)

Now that we know that the vagina is a gateway to a woman’s happiness and to her creative life, we can create and engage with an entirely different model of female sexuality, one that cherishes and values women’s sexuality. This is where the “Goddess” model comes in, a model that focuses on “the Goddess Array”—that set of behaviors and practices that should precede or accompany lovemaking. But where is a “Goddess” model to be found in contemporary life?

My search to locate a working “Goddess” model led me first into the past, into the historical differences between Eastern and Western attitudes toward female sexuality. Of course, women were subjugated in the East as well as in the West, but in two cultures in particular—the India of the Tantrists, about fifteen hundred years ago, and the Han dynasty of China about a thousand years ago—women were, for a time, elevated and enjoyed relative freedom. These two cultures viewed the vagina as life-giving and sacred, and, as I noted, they believed that balance and health for men depended upon treating the vagina—and women—extremely well sexually. Both cultures appear to have understood aspects of female sexual response that modern Western science is only now catching up with.

~~Vagina- A New Biography -by- Naomi Wolf

Friday, September 18, 2015

Day 35: Book Excerpt: The Bonobo and the Atheist

Animals crawling out of the mud recall our lowly beginnings. Everything started simple. This holds not only for our bodies—with hands derived from frontal fins and lungs from a swim bladder—but equally for our mind and behavior. The belief that morality somehow escapes this humble origin has been drilled into us by religion and embraced by philosophy. It is sharply at odds, however, with what modern science tells us about the primacy of intuitions and emotions. It is also at odds with what we know about other animals. Some say that animals are what they are, whereas our own species follows ideals, but this is easily proven wrong. Not because we don’t have ideals, but because other species have them, too.

Why does a spider repair her web? It’s because she has an ideal structure in mind, and as soon as her web deviates from it, she works hard to bring it back to its original shape. How does a “Mama Grizzly” keep her young safe? Anybody moving between a sow and her cubs will discover that she has an ideal configuration in mind, which she doesn’t like to be messed with. The animal world is full of repair and correction, from disturbed beaver dams and anthills to territorial defense and rank maintenance. Failing to obey the hierarchy, a subordinate monkey upsets the accepted order, and all hell breaks loose. Corrections are by definition normative: they reflect how animals feel things ought to be. Most pertinent for morality, which is also normative, social mammals strive for harmonious relationships. They are at pains to avoid conflict whenever they can. The gladiatorial view of nature is plainly wrong. In one field experiment, two fully grown male baboons refused to touch a peanut thrown between them, even though they both saw it land at their feet. Hans Kummer, the Swiss primatologist who worked all his life with wild hamadryas baboons, describes how two harem leaders, finding themselves in a fruit tree too small to feed both of their families, broke off their inevitable confrontation by literally running away from each other. They were followed by their respective females and offspring, leaving the fruit unpicked. Given the huge, slashing canine teeth of a baboon, few resources are worth a fight. Chimp males face the same dilemma. From my office window, I often see several of them hang around a female with swollen genitals. Rather than competing, these males are trying to keep the peace. Frequently glancing at the female, they spend their day grooming each other. Only when everyone is sufficiently relaxed will one of them try to mate.

If fighting does break out, primates react the way the spider does to a torn web: they go into repair mode. Reconciliation is driven by the importance of social relationships. Studies on a great variety of species show that the closer two individuals are, and the more they do together, the more likely they are to make up after aggression. Their behavior reflects awareness of the value of friendships and family bonds. This often requires them to overcome fear or suppress aggression. If it weren’t for the need to bury the hatchet, it wouldn’t make any sense for apes to kiss and embrace former opponents. The smart thing to do would be to stay away from them.

This brings me back to my bottom-up view of morality. The moral law is not imposed from above or derived from well-reasoned principles; rather, it arises from ingrained values that have been there since the beginning of time. The most fundamental one derives from the survival value of group life. The desire to belong, to get along, to love and be loved, prompts us to do everything in our power to stay on good terms with those on whom we depend. Other social primates share this value and rely on the same filter between emotion and action to reach a mutually agreeable modus vivendi. We see this filter at work when chimpanzee males suppress a brawl over a female, or when baboon males act as if they failed to notice a peanut. It all comes down to inhibitions.
...
We are mammals, a group of animals marked by sensitivity to each other’s emotions. Even though I tend to favor primate examples, much of what I describe applies equally to other mammals. Take the work by the American zoologist Marc Bekoff, who analyzed videos of playing dogs, wolves, and coyotes. He concluded that canid play is subject to rules, builds trust, requires consideration of the other, and teaches the young how to behave. The highly stereotypical “play bow” (an animal crouches deep on her forelimbs while lifting her rear in the air) helps to set play apart from sex or conflict, with which it risks getting confused. Play ceases abruptly, though, as soon as one partner misbehaves or accidentally hurts the other. The transgressor “apologizes” by performing a new play bow, which may prompt the other to “forgive” the offense and continue to play. Role reversals make play even more exciting, such as when a dominant pack member rolls onto his back for a puppy, thus exposing his belly in an act of submission. This way, he lets the little one “win,” something he’d never permit in real life. Bekoff, too, sees a relation with morality:

    During social play, while individuals are having fun in a relatively safe environment, they learn ground rules that are acceptable to others—how hard they can bite, how roughly they can interact—and how to resolve conflicts. There is a premium on playing fairly and trusting others to do so as well. There are codes of social conduct that regulate what is permissible and what is not permissible, and the existence of these codes might have something to say about the evolution of morality.

~~The Bonobo and The Atheist -by- Frans De Waal

Thursday, September 17, 2015

Day 34: Book Excerpt: American Arsenal

On April 12, 1945, President Roosevelt, vacationing at a favorite spa in Warm Springs, died of a cerebral hemorrhage. Harry Truman, sworn in as president that evening, seemed an unlikely successor. He was a failed Kansas City haberdasher whose debts had been paid by the political machine of Kansas City’s Boss Pendergast, whose patronage interests he had faithfully served—he was known as “the Senator from Pendergast.” Under pressure from the Democratic Party’s leaders, Roosevelt had selected Truman to replace the incumbent vice president, the left-leaning Henry Wallace, as his 1944 running mate. Truman had shown little experience or judgment in foreign affairs. When Germany invaded Russia in 1941, Senator Truman said, “If we see that Germany is winning we ought to help Russia and if Russia is winning we ought to help Germany, and that way let them kill as many as possible.”

Truman held his first cabinet meeting immediately after his swearing-in. After the meeting, Secretary of War Henry Stimson waited for the others to leave the room and then told the president that a project was under way to develop a new explosive of almost unbelievable destructive power. The next day, James Byrnes, whom Truman would soon appoint secretary of state, told the president “with great solemnity” that “we were perfecting an explosive great enough to destroy the whole world.”

Truman and his closest military and political advisors, with the exception of Stimson, seemed to have had no real understanding of the atomic bomb. Byrnes, a master Senate politician who had been Truman’s mentor there, would see the bomb as a club to be used to bully the Russians; Hap Arnold, chief of the AAF, likely saw it in the same way as did Curtis LeMay, who said that he understood that the bomb would make a “big bang” but that it “didn’t make much of an impression on me.” To them, it was just a bigger bomb. Stimson warned Truman that the United States could not keep its nuclear monopoly for long and urged the president to appoint a committee of leading citizens to advise him on its use and to consider international control. The president agreed, and named an Interim Committee that included Stimson, Byrnes, Bush, Conant, and Karl Compton. A scientific advisory panel, including Oppenheimer, Fermi, Lawrence, and Arthur Compton, was formed to provide technical expertise to the Interim Committee. The advisory committee excluded scientists who were hesitant to use the bomb, such as Szilard, James Franck, and Urey.

On May 31, the Interim Committee made its recommendation. Oppenheimer and Arthur Compton had advised the committee that the bomb would have the explosive equivalent of ten thousand tons of TNT and would kill about twenty thousand people if dropped over a city, with more injured by burns or radiation. (Both estimates were low. The estimate of those killed was based on the assumption that Japanese civilians would take shelter from an air raid, while the actual bombing was by a single plane without advance warning.) The committee discussed alternatives to bombing a Japanese city—arranging a demonstration of the bomb for the Japanese, dropping it on a preannounced target in Japan, or dropping it on a neutral area, perhaps an uninhabited island. All these ideas were rejected. Dropping the bomb on an uninhabited area would not show its destructive power. And what if the bomb fizzled, or what if the Japanese brought American POWs to a preannounced site? The Interim Committee’s minutes for May 31: “Secretary [Stimson] expressed the conclusion, on which there was general agreement, that we could not give the Japanese any warning; that we could not concentrate on a civilian area; but that we should seek to make a profound psychological impression on as many of the inhabitants as possible. At the suggestion of Dr. Conant the Secretary agreed that the most desirable target would be a vital war plant employing a large number of workers and closely surrounded by workers’ houses.”
...
In his advice to Truman, Stimson showed his disconnection from the conduct of the war in Japan. After the March 9–10 firebombing of Tokyo, he had asked Arnold whether there had been any deviation from the policy of precision bombing. When Arnold told him that there had been no change in strategy, that the civilian casualties were only incidental, Stimson accepted his answer. As late as May 16, Stimson told Truman that insofar as possible he was holding the AAF to precision bombing, and that similar rules would be applied to the atomic bomb. But he must have known what was happening, as his decisions and advice to the president show. Stimson removed Kyoto from the target list for both conventional and atomic bombing because of its historic and artistic treasures, saying he “did not want to have the reputation of the United States outdoing Hitler in atrocities,” and he told Truman that he worried that Japan’s cities would be so “thoroughly bombed out that the new weapon would not have a fair background to show its strength.” Truman’s response was to laugh and say he understood. Stimson deceived himself about the firebombing campaign in Japan, and he deceived his new president. The historian Tami Biddle has written that over Hiroshima, “no moral threshold was crossed that had not been crossed much earlier in the year.”

On June 14, the Joint Chiefs met to plan the invasion, and they acted almost as if the atomic bomb did not exist. The Army was certain that an invasion would be necessary, and the Navy and AAF agreed to plan for a conditional invasion if blockade and bombing failed to end the war. On June 18, the Joint Chiefs met with Truman, who expressed his concern about the American casualties that would result from facing Japan’s two-million-man homeland army. No one seemed able to offer a definitive alternative to an invasion. Gen. Ira Eaker represented Arnold, who did not attend. Like Arnold, Eaker was under orders to support Marshall, and he endorsed the decision to invade the southernmost Japanese island of Kyushu. As far as the Army was concerned, there was nothing to discuss. LeMay arrived for a meeting the next day, and Marshall slept through his presentation.

At the Potsdam Conference in July, Truman received news of the successful Trinity test in New Mexico of the first atomic bomb, and he mentioned to Stalin almost in passing that America had “a new weapon of unusual destructive force.” Stalin (who of course knew all about it from Klaus Fuchs and other spies) said that he hoped that the Americans “would make good use of it against the Japanese.” America, Britain, and China agreed on the Potsdam Declaration, a July 26 ultimatum to Japan to surrender or face “prompt and utter destruction.” It made no mention of the bomb, nor did it offer to preserve the emperor system. The declaration was delivered by radio broadcast rather than through neutral countries’ diplomatic channels, which may have led the Japanese to view it as propaganda.

~~American Arsenal -by- Patrick Coffey

Wednesday, September 16, 2015

Day 33: Book Excerpt : After Tamerlane

The Portuguese were the oceanic frontiersmen of European expansion. The Portuguese kingdom was a small weak state perched on the Atlantic periphery. But by c. 1400 its rulers and merchants were able to exploit its one magnificent asset, the harbour of Lisbon. Europe’s Atlantic coast had become an important trade route between the Mediterranean and North West Europe. Lisbon was where the two great maritime economies of Europe – the Mediterranean and the Atlantic – met and overlapped. It was an entrepôt for trade and commercial information and for the exchange of ideas about shipping and seamanship. It was the jumping-off point for the colonization of the Atlantic islands (Madeira was occupied in 1426, the Azores were settled in the 1430s), and for the crusading filibuster that led to the capture of Ceuta in Morocco in 1415. Thus, long before they ventured beyond Cape Bojador on the west coast of Africa in 1434, the Portuguese had experimented with different kinds of empire-building. Their geographical ideas were shaped not only by knowledge of the great Asian trade routes that had their western terminus in the Mediterranean, but also by the influence of crusading ideology. Ironically, the crusading impulse assumed that Portugal lay at the western edge of the known world and that the object was to drive eastward towards its centre in the Holy Land. Perhaps it was this and Portugal’s early forays into North Africa after 1415 (where it heard of Morocco’s West African gold supplies) that pulled the Portuguese first south and east rather than westward across the Atlantic. The tantalizing vision of alliance with the Christian empire of Prester John (supposedly lying somewhere south of Egypt) encouraged the hope of navigators, merchants, investors and rulers that, by turning the maritime flank of the Islamic states in North Africa, Christian virtue would reap a rich reward.

Prester John was only a legend, and so was his empire. Nevertheless, by the 1460s the Portuguese were pushing ever further south in search of a route that would take them to India – the goal triumphantly achieved by Vasco da Gama in 1498. But it took more than navigational skill to carry Portuguese sea power into the Indian Ocean. Two vital African factors made possible their sea venture into Asia. The first was the existence of the West African gold trade that flowed north from the forest belt to the Mediterranean and the Near East. By the 1470s, the Portuguese had managed to divert some of this trade towards their new Atlantic sea route. In 1482–4 they brought the stones to build the great fort of San Jorge da Mina (now Elmina in Ghana) as the ‘factory’ for the gold trade. (A ‘factory’ was a compound, sometimes fortified, where foreign merchants both lived and traded.) It was a crucial stroke. Mina’s profits were enormous. Between 1480 and 1500 they were nearly double the revenues of the Portuguese monarchy. In the 1470s and ’80s, they supplied the means for the expensive and hazardous voyages further south to the Cape of Storms (later renamed the Cape of Good Hope) rounded by Bartolomeu Dias in 1488. The second great factor was the lack of local resistance in the maritime wilderness of the African Atlantic. South of Morocco, no important state had the will or the means to contest Portugal’s use of African coastal waters. Most African states looked inland, regarding the ocean as an aquatic desert and (in West Africa) seeing the dry desert of the Sahara as the real highway to distant markets.

In these favourable conditions, the Portuguese traversed the empty seas and then pushed north from the Cape until they ran across the southern terminus of the Indo-African trade route near the mouth of the Zambezi. From there they could rely upon local knowledge, and a local pilot who could direct them to India. Once north of the Zambezi, Vasco da Gama re-entered the known world, as if emerging from a long detour through pathless wastes. When he arrived in Calicut on India’s Malabar coast, he re-established contact with Europe via the familiar Middle Eastern route used by travellers and merchants. It was a feat of seamanship, but in other respects his visit was not entirely auspicious. When he was taken to a temple by the local Brahmins, Vasco assumed that they were long-lost Christians. He fell on his knees in front of the statue of the Virgin Mary. It turned out to be the Hindu goddess Parvati. Meanwhile the Muslim merchants in the port were distinctly unfriendly, and, after a scuffle, Vasco decided to beat an early retreat and sail off home.

But what were the Portuguese to do now that they had found their way to India by an Atlantic route that they were anxious to keep secret? Even allowing for the lower costs of seaborne transport, it was unlikely that a few Portuguese ships in the Indian Ocean would divert much of its trade towards the long empty sea lanes round Africa. In fact the Portuguese soon showed their hand. The Malabar coast, with its petty coastal rajas and its reliance on trade (the main route between South East Asia and the Middle East passed along its shores), was the perfect target. Within four years of Vasco’s voyage to Calicut, they had returned in strength with a fleet of heavily armed caravels. Under Afonso Albuquerque, they began to establish a network of fortified bases from which to control the movement of seaborne trade in the Indian Ocean, beginning at Cochin (1503), Cannanore (1505) and Goa (1510). In 1511, after an earlier rebuff, they captured Malacca, the premier trading state in South East Asia. By the 1550s they had some fifty forts from Sofala in Mozambique to Macao in southern China, and ‘Golden Goa’ had become the capital of their Estado da India.

~~After Tamerlane -by- John Darwin

Tuesday, September 15, 2015

Day 32: Book Excerpt: Extreme Metaphors


MACBETH: You have been writing science fiction short stories and novels for several years now, but your story ‘You and Me and the Continuum’ is one of a recent group which, I think, in structure are really quite different from your earlier ones. Perhaps the most striking feature to someone reading ‘You and Me and the Continuum’, for example, for the first time, is that it is constructed not in continuous narrative, but in a sequence of short paragraphs, each of which has a heading – in fact, they’re arranged in alphabetical order. But the key point, I think, is that they are broken up. Why did you move on to using this technique of construction?

BALLARD: I was dissatisfied with what I felt were linear systems of narrative. I had been using in my novels and in most of my short stories a conventional linear narrative, but I found that the action and events – of the novels in particular – were breaking down as I wrote them. The characterisation and the sequences of events were beginning to crystallise into a series of shorter and shorter images and situations. This ties in very much with what I feel about the whole role of science fiction as a speculative form of fiction. For me, science fiction is above all a prospective form of narrative fiction; it is concerned with seeing the present in terms of the immediate future rather than the past.

MACBETH: Could I break in there? Would you contrast that with what the traditional novel does in the sense it’s concerned with perhaps the history of a family or a person?

BALLARD: Exactly. The great bulk of fiction still being written is retrospective in character. It’s concerned with the origins of experience, behaviour, development of character over a great span of years. It interprets the present in terms of the past, and it uses a narrative technique, by and large the linear narrative, in which events are shown in more-or-less chronological sequence, which is suited to it. But when one turns to the present – and what I feel I’ve done in these pieces of mine is to rediscover the present for myself – I feel that one needs a non-linear technique, simply because our lives today are not conducted in linear terms. They are much more quantified; a stream of random events is taking place.
...
MACBETH: You do literally, in many of these stories, draw connections between pictures of parts of the human body and certain landscapes, don’t you?

BALLARD: Yes. In the story ‘You: Coma: Marilyn Monroe’ I directly equate the physical aspect of Marilyn Monroe’s body with the landscape of dunes around her. The hero attempts to make sense of this particular equation, and he realises that the suicide of Marilyn Monroe is, in fact, a disaster in space-time, like the explosion of a satellite in orbit. It is not so much a personal disaster, though of course Marilyn Monroe committed suicide as an individual woman, but a disaster of a whole complex of relationships involving this screen actress who is presented to us in an endless series of advertisements, on a thousand magazine covers and so on, whose body becomes part of the external landscape of our environment. The immense terraced figure of Marilyn Monroe stretched across a cinema hoarding is as real a portion of our external landscape as any system of mountains or lakes.

MACBETH: Are you aware of deliberately using surrealism as references in these stories? Quite often you refer to Dali in particular and sometimes Ernst, and sometimes to real pictures by them. How far is there a direct connection with those pictures and the events or descriptions in the stories?

BALLARD: The connection is deliberate, because I feel that the surrealists have created a series of valid external landscapes which have their direct correspondences within our own minds. I use the phrase ‘spinal landscape’ fairly often. In these spinal landscapes, which I feel that painters such as Ernst and Dali are producing, one finds a middle ground (an area which I’ve described as ‘inner space’) between the outer world of reality on the one hand, and the inner world of the psyche on the other. Freud pointed out that one has to distinguish between the manifest content of the inner world of the psyche and its latent content. I think in exactly the same way today, when the fictional elements have overwhelmed reality, one has to distinguish between the manifest content of reality and its latent content. In fact the main task of the arts seems to be more and more to isolate the real elements in this goulash of fictions from the unreal ones, and the terrain ‘inner space’ roughly describes it.

~~Interview from Extreme Metaphors, Interviews with J. G. Ballard