Sunday, July 31, 2016

Day 351: This Unquiet Land



That evening in July 1999, the assault on Tiger Hill began from a Bofors gun position; individual guns had been ranged so as to fire directly at the three flanks of the mountain. An intricate fire plan prepared by the 41 Field Regiment provided covering fire to the soldiers of 8 Sikh and 18 Grenadiers who were stealthily moving up escarpments and sheer cliffs from three different directions. Usually, six guns were deployed to provide covering fire to every infantry unit. In Kargil this was increased to eighteen guns. Kargil has often been called a classic gunner’s war and the Bofors gun was its mainstay. The gun could fire three rounds in twelve seconds, and it had a range of thirty kilometres in high-altitude terrain; 250 of them were deployed at the front line and 250,000 rounds were fired during the fifty-day war. During the Tiger Hill operation alone, 9,000 shells were used. The artillery positions had become the first tier of both attack and defence in the war. Field gun positions were now veritable forward posts, inviting attack on themselves as soon as the Bofors gun fired the first round; 80 per cent of the casualties on both sides were from mortar fire.

Writing in Artillery: The Battle Winning Arm about the peak period of the war, when each artillery battery fired one round per minute for seventeen days continuously, senior military analyst Major General Jagjit Singh (who started his career in the Royal Indian Navy and saw anti-submarine action during World War II) said, ‘Such high rates of fire over long periods had not been witnessed anywhere since World War II… The men at the guns had blisters on their hands from carrying and loading shells and cartridges. Very few of them got more than a couple of hours of sleep in every 24-hour cycle.’

With the advantage of height, Pakistani observation posts had a clear line of vision on Indian gun points. So assault positions had to be shifted as soon as they became vulnerable to counter-attack. As they moved, so would we, jumping on to the back of a Jonga jeep or just darting across the smoke-saturated road, unsure of where it might be safe to halt even for a second, trying all the while to keep pace with the magnitude of what was unravelling before our camera’s gaze.

Two hours into what would end up being a thirteen-hour battle, an enemy artillery shell landed close to a 122 mm multi-barrelled ‘Grad’ rocket launcher right outside the headquarters of the 56 Mountain Brigade, which had just taken over the sector. With forty rockets stacked on the back of a single carrier, the Grad was a fire-breathing dragon that spat flames into the sky. In Russian, its name meant ‘hail’, a fitting appellation, as it hailed destruction down upon the intruders.

Then another enemy shell landed within spitting distance of the launcher. It was time to move to a new vantage point; the commander of the Grad immediately halted the operation so that the MBRL could be shifted. Four soldiers had already been killed in an earlier counter-attack on a gun position, and he wasn’t taking any chances. Over the next twenty minutes the battle escalated. The town of Drass was carpet-bombed by Pakistan. A curtain of grey closed over the highway as the final acts of the fight to take back Tiger Hill began.

‘Run, Run, Run, Now, Now, Now, Run,’ shouted an anxious voice behind us, and so we did, our bodies bent over, our hands forming a useless protective cover over our heads, our camera shaking and jerky, but still switched on and filming, trying to get some of this across to the news centre, our hearts pumping with adrenalin—it offered a temporary antidote against paralysing fear. At this point in the battle I got separated from Ajmal Jami, my cameraman. As Pakistani shells began to pound the area right around the rocket launcher, I took shelter behind a broken wall and frantically worked the satellite phone. Suddenly, a long arm lunged out and pulled me back. Unfailingly polite even in that life-and-death moment, the soldier said, ‘Ma’am, you’re standing next to an ammunition dump. If a shell hits the target, this place will explode and you will be finished.’ He waved me towards the shelter of his own underground bunker.

‘I need to call my office in Delhi,’ I insisted, ‘and I have lost my cameraman, I can’t go down leaving him out here.’

He offered to find Jami for me and urged me to hunker down immediately before I got injured in the shelling. I still had the call to make but there was no way the satellite phone would pick up any signal underground. I sat with my legs inside the dugout and the rest of me leaning out of it, holding the phone outwards. ‘I can’t talk long; I’m calling from a bunker,’ I said, ‘just tell everyone, the assault on Tiger Hill has begun.’ We would spend the next few hours here, trying to make sense of the latest war between India and Pakistan.
...
‘I sincerely hope that they (relations between India and Pakistan) will be friendly and cordial. We have a great deal to do…and think that we can be of use to each other and the world.’

In August 1947, Pakistan’s founder, Mohammed Ali Jinnah, declared that Partition had resolved the antagonism between Hindus and Muslims, and that India and Pakistan could now live in harmony. Mahatma Gandhi echoed the sentiment. Both ‘fathers’ of their respective nations turned out to be grievously wrong.

The partition of India was a bloody and cataclysmic upheaval and the largest forced mass migration of people in the world. Between 1 and 2 million people were killed and an estimated 17 million were uprooted from their homes. The violent rupture proved impossible to heal.

In both countries, many families put locks on their doors but left most of their possessions inside, as if they were going on a brief journey from which they would soon be coming back home. My own family was among them. My grandfather, Krishan Gopal Dutt, a freedom fighter who went on to become Punjab’s finance minister in independent India, used to live in a palatial kothi called Pillar Palace in Sialkot, a city famous for its manufacture of sporting goods. My father was a child of eight when mob violence spread like a forest fire across both sides of the Punjab province. My grandfather reached out to his friend Chet Ram, the then governor of Jammu, for help in crossing over to the newly demarcated territory of India. A truck with armed guards was sent into Sialkot on the pretext that the governor had to retrieve money from the Imperial Bank (which later became the State Bank of India). On this truck, my grandfather, dressed only in his dhoti-kurta, left with his family. When he arrived in Delhi, he was penniless and homeless, like millions of other refugees.

Decades later, as a college student, I travelled with my father to our ancestral home in Pakistan; its fifty rooms were too expensive to maintain for the family that now owned it—they occupied only one of its residential wings. We had arrived at the house without warning the new owners. Yet, although we were complete strangers, they welcomed us without question or suspicion and—having heard our story—handed over the keys of the kothi to us. As we wandered through empty rooms, past bare walls—‘the piano was in this corner, your grandmother slept here, that’s the fountain made from marble’—I understood, for the first time, the anguish that my father, and millions like him, had felt at being displaced in a manner that was so violent, unforgiving and permanent. It was one of only two times I’d seen my father cry (the other time was when his wife died) and it came home to me, in that instant, standing in the abandoned old house in Sialkot, just how deep a wound Partition had carved into the psyche of both countries.

~~This Unquiet Land: Stories from India's Fault Lines -by- Barkha Dutt

Saturday, July 30, 2016

Day 350: Churchill's Secret War



Six years after Churchill’s avowal, and two days after the Nazis began their blitzkrieg into Poland, the United Kingdom declared war on Germany on September 3, 1939. So did the viceroy of India, on behalf of nearly 400 million subjects of the British Empire. The colony was vital to the defense of British interests around the world. It sat in the middle of the supply and communication route that stretched from the United Kingdom, through the Suez Canal or around the Cape of Good Hope, and across the Indian Ocean to Singapore, Australia, and New Zealand. Throughout World War II, ships would transport food, armaments, and troops from the colonies and dominions on the periphery of the Indian Ocean to the United Kingdom, as well as to war theaters around the Mediterranean Sea or in Southeast Asia.

The Indian population would play a significant role in the war. Of the colony’s prewar budget, a third went toward defense, and that fraction had increased to two-fifths by 1939. The Indian Army’s primary domestic tasks were to guard the northwestern border against Soviet incursions southward across Afghanistan and to ensure internal security. Just as important, this army was ideally situated to defend British dependencies in the Middle East, Africa, and Southeast Asia, and could be dispatched to diverse theaters under direct orders from London. At the start of the war, it comprised 43,500 British and 131,000 Indian troops, some of whom had already been sent to Egypt and Singapore. Churchill, then a member of the War Cabinet, recommended that a further 60,000 British troops “be sent to India to maintain internal security and complete their training,” while at least 40,000 trained troops be brought back. While being trained, the white soldiers would forestall any uprising among the increasingly restive population of Indians intent on independence.

“I was kept for this job,” Churchill confided to his doctor when he succeeded Neville Chamberlain as prime minister on May 10, 1940. Over his sixty-five years, Churchill had repeatedly placed himself in danger and had had several narrow escapes, which had bolstered his profound conviction that he was destined for a mighty task. It had taken him most of his life to discover what that task was: to lead The Island Race, as he would entitle his history of the British, in a great struggle. “I felt as if I were walking with destiny, and that all my past life had been but a preparation for this hour and for this trial,” Churchill wrote of his accession to the most powerful position in the British Empire. Three days after his appointment he addressed Parliament and the nation, promising nothing but “blood, toil, tears and sweat.” The aim of the war, he declared, was “victory, victory at all costs . . . for without victory, there is no survival. Let that be realised; no survival for the British Empire, no survival for all that the British Empire has stood for, no survival for the urge and impulse of the ages, that mankind will move forward towards its goal.” The prime minister would not only defend the British Isles from invasion and subjugation by Hitler’s armies; he would safeguard Britain’s vast and sprawling empire. But India, like some of the other colonies and dominions, would sacrifice at least as much as the United Kingdom did in the defense of an empire from which it had long been struggling to break free.

To make sure India obeyed him and did its part to support the war, Churchill needed a lieutenant with a record of firmness in dealing with colonies. The very day he gave his rousing “blood, toil, tears and sweat” peroration, the prime minister summoned the respected elder statesman Leopold S. Amery and asked him to serve as secretary of state for India.

Amery was bitterly disappointed by the request. He was sixty-six, a year older than Churchill, and up to that point his career had broadly paralleled that of the prime minister. Amery had covered the Boer War as a correspondent, had served in World War I, and had subsequently been appointed First Lord of the Admiralty and colonial secretary. At the very least, he had expected a significant role in the War Cabinet helping to direct the war effort. It was even said that if Amery had been “half a head taller and his speeches half an hour shorter” he might have become prime minister himself. Amery had also just played a central role in the Tory Party mutiny that had brought down Chamberlain and installed Churchill. A week earlier, he had denounced Chamberlain from the floor of Parliament: “You have sat too long here for any good you have been doing,” Amery had declaimed, invoking the words of Oliver Cromwell, the seventeenth-century British leader who had deposed and killed King Charles I: “Depart, I say, and let us have done with you. In the name of God, go!”

Amery protested to Churchill that he was “side tracking me from the real conduct of the war.” Not so, the prime minister responded: it was important to ensure that India contribute as much as possible to the war, which might even move east. Amery was not persuaded, and believed that Chamberlain had urged against his appointment to the War Cabinet. Historian William Roger Louis holds, however, that by giving him a relatively subordinate role Churchill sought to contain a potential rival, one reputed to be “a man of integrity and judgment who had the courage to speak his convictions regardless of consequence.” Eventually the patriot in Amery prevailed—even as he maintained a private hope that a cabinet reshuffle would bring him closer to power. He accepted the position.

The new secretary of state for India rapidly put mechanisms in place “to utilize Indian supplies to the utmost,” as he described in his diary, and moved to impart to the marquess of Linlithgow, the viceroy in New Delhi, emergency powers of arrest and detention, control of the press, prohibition of seditious groups, and so on. “My whole conception is that of India humming from end to end with activity in munitions and supply production and at the same time with the bustle of men training for active service of one sort or another, the first operation largely paying for the cost of the second,” Amery explained to Linlithgow.

The Indian Army was slated to play a crucial role in the war, and in June 1940 the prime minister directed Amery to ensure that additional divisions were shipped westward. “The fact that we are somewhat reducing the quality of our British garrisons [in India], makes it all the more desirable that a larger number of Indian troops should also be employed outside India,” Churchill explained. That is, because recent recruits from the United Kingdom, who were in need of training, were replacing more experienced white troops in India (the latter were either returning home to defend Britain or moving to the war theaters), any mutiny by the native soldiers would be all the more difficult to quell. So India’s internal security required that as many of the sepoys as possible should also be abroad. Moreover, Churchill continued, it appeared that the war would “spread to the Middle East, and the climate of Iraq, Palestine and Egypt are well suited to Indian troops.” The prime minister’s greater apprehension of a mutiny than of an external attack would mean that when Japanese forces suddenly and ominously arrived at India’s eastern border in March 1942, the colony’s most highly trained and best-equipped divisions would be on another continent.

Apart from supplying soldiers for some of the toughest combat in countries around the Mediterranean Sea, India was designated to provide the bulk of supplies for those theaters. Starting in May, Amery oversaw the effort to ship from India around 40,000 tons of grain per month, a tenth of its railway engines and carriages, and even railway tracks uprooted from less important train lines. The colony’s entire commercial production of timber, woolen textiles, and leather goods, and three-quarters of its steel and cement production, would be required for the war. Factories near Calcutta were soon turning out ammunition, grenades, bombs, guns, and other weaponry; Bombay’s mills were producing uniforms and parachutes, while plants all over the country were contributing boots, jeep bodies and chassis, machine parts, and hundreds of ancillary items such as binoculars for which the need had suddenly swelled. Apart from the United Kingdom itself, India would become the largest contributor to the empire’s war—providing goods and services worth more than £2 billion.

~~Churchill's Secret War -by- Madhusree Mukerjee

Friday, July 29, 2016

Day 349: Anticipating India



On 12 August 1990, Ruchika Girhotra, just 14, went to play at the Haryana Lawn Tennis Association (HLTA) courts at Panchkula, near Chandigarh. She complained to her father that S.P.S. Rathore, a senior police officer and president of the HLTA, had felt her up. After some deliberation, he and her friend’s parents made a formal complaint to the then Haryana chief minister, Hukam Singh. He asked the then director-general of police (DGP), R.R. Singh, to investigate. After enquiries, Singh concluded that an FIR should be filed against Rathore.

The very next day, on 4 September 1990, the state financial commissioner accepted the DGP’s report and asked for a case to be registered under Sections 342 and 354 of the IPC. For one and a half years nothing happened. Nothing. Until 13 June 1992, when the state law department woke up again and recommended that an FIR be registered against Rathore. This is when the real action began.

By this time Ruchika’s brother Ashu had turned fourteen and, boy, wasn’t he going to be made to pay for his sister’s ‘sins’. Between 6 September 1992 and 30 August 1993, Haryana Police, instead of moving against Rathore for molesting Ruchika, registered six FIRs against her brother for auto thefts. All cases went to court. In each, he was fully acquitted. But the harassment, the humiliation, the expense of litigation claimed their victim. Four months after the sixth FIR was filed against her brother, Ruchika, now 17, committed suicide.

In early 1994, the Haryana chief secretary again recommended action against Rathore. Again, nothing happened. Ruchika’s family went to pieces, even into hiding. In July 1997, Ruchika’s friend’s parents gathered the courage to file a PIL in the Punjab and Haryana High Court asking for a CBI probe. On 17 November 2000, the CBI filed a chargesheet—a chargesheet, not merely an FIR—accusing Rathore of molesting Ruchika.

If the story doesn’t sicken you already, if it doesn’t make you bristle with anger—and fright in case you happen to be the parent of a teenager—read on. Ruchika’s father, who had been in hiding fearing police harassment, asked how it was that Rathore had been charged only with molestation and not with driving his daughter to suicide. The brother’s life, after the humiliation, the torture and the litigation at such a young age, is a mess.

And Mr Rathore? He is now the DGP of Haryana and continues to be in that job despite the chargesheet. Here, Advaniji, is a first in your long and distinguished political career—someone charged in a court with molesting a fourteen-year-old child, yet commanding the police force next door to Delhi. Surely, Sardar Patel wouldn’t have approved of this.

Had Ruchika survived the trauma, had she been stronger, born with a thicker skin, she would have been a woman of twenty-four. She would, by now, have voted in three elections, and might even have raised a family of her own. But she chose to complain when she was harassed as a child, and paid for it. What lesson does her fate hold out for other young women in our schools and colleges, workplaces, playgrounds? Shut up and suffer silently if some old uncleji feels you up? Particularly if he happens to be powerful, even more so if he happens to be a cop? And mind you, this did not happen in some unreachable political jungle of western Bihar. This happened in an upper-middle-class suburb, the kind of place people like us inhabit.

Quite frankly, Haryana Chief Minister Om Prakash Chautala’s reasoning for not removing or suspending Rathore is so ludicrous there is no point wasting time countering it. The CBI, he says, is famous for framing people with fictional chargesheets—he should know, he says, having been a ‘victim’. But the point at this stage, Mr Chautala, is not whether Rathore is guilty or not. The point is, in which civilised society would you appoint as your DGP a man accused of molesting a fourteen-year-old, whose brother’s life was devastated with trumped-up cases, whose father went into hiding and who, eventually, committed suicide? Which parent, and which child, will feel safe in that state any more? What view will that state’s police sub-inspectors and station house officers take of all the reforms the courts and activists have brought about in the police’s treatment of women? As it is, Haryana is not a state known to possess the most polite policemen in the country. Now, when they see their government toss aside the National Human Rights Commission’s strong suggestions to remove the DGP—based on a series of reports in the Indian Express—or the Central Vigilance Commission’s advice to do so, they will draw the obvious conclusion.

Who is to tell Chautala any of this? The BJP, which supports his government in the state, has demanded Rathore’s removal, but he couldn’t care less. As for Rathore, it’s life as usual. The case, he says, is a frame-up: ‘I am under no moral obligation to resign.’

This isn’t merely one more case of police high-handedness and political protectionism. It raises some very serious questions. First of all, why isn’t there, in the media and Parliament, the kind of outrage that would have erupted had Rathore been a politician instead of a senior IPS officer? The Supreme Court and Narasimha Rao had made almost half his cabinet resign because they had been chargesheeted in the hawala case, which was like a bicycle theft compared to child molestation. Only a fortnight ago, the BJP forced two of its own ministers from Gujarat to resign because they had been chargesheeted in a rioting case. Why should the same principle not apply to senior civil servants? Innocent until proven guilty, but step aside from authority or a position where you could influence the case.

The opposition’s lack of concern we can understand. There is special delight and gain in attacking rival politicians for their misdemeanours. Civil servants are less interesting targets. But why should we see the same relative indifference at the popular level? Why are we so much in awe of the civil servant? Because he falls in the PLU (people like us) category? Would the response of the media in general have been the same had Rathore been the home minister of Haryana rather than its DGP?

The second question is an even nastier one but more relevant in the context of Chandigarh. This case has dragged on for a decade now. Why has this not evoked a hundredth of the kind of protest that the Rupan Deol Bajaj–K.P.S. Gill case did? It is nobody’s case that one kind of sexual harassment is different, or lesser or greater, in its severity than any other. But Rupan was a senior IAS officer and more capable of defending herself against a DGP than a fourteen-year-old child on the tennis courts at Panchkula. Where are all the women’s organisations, civil libertarians, legal luminaries who hit the streets on the Rupan case? The impetus in that case had come from members of the civil service in Chandigarh, who were outraged at so blatant a case of sexual harassment. Where were they for four years while the file on Rathore’s prosecution was put in deep freeze, while Ruchika’s kid brother was being tortured and buried under false cases? If they had shown even a fraction of the dogged outrage they did in the Rupan case, Ruchika would probably be alive today.

Maybe even the almighty bureaucratic protests in the Rupan case were more about protecting the honour of a fellow IAS officer rather than just another victimised woman? Class camaraderie more than moral indignation? And the feminists and civil libertarians and so on? Would it be too unkind to suggest that, as in the case of politicians, cynicism gets the better of them as well? Maybe the protest and anger in the Gill case were not so much about gender equality or civil liberties as about the political opportunity to destroy a tough, brutal cop whose guts and methods you hated?

This argument can go on and on. But for people like Vajpayee and Advani, honourable, middle-class people with sound family values and great personal integrity, the facts are clear enough. They need only look at the chronology of events. If, after that, they do not find enough reason to force Chautala to move his DGP aside, it could only mean that, as politicians, they are no different from the others. They could, then, go and see, along with their families, Mahesh Manjrekar’s Kurukshetra, which is all about a chief minister fighting to save his rapist son, killing the son’s victim in the hospital, destroying her family. Bollywood is not particularly known for political understatement, but when you go home and review the facts of the Panchkula story, you will wonder how fast real life is catching up with dark cinema. It will shame you.

~~Anticipating India: The Best Of National Interest -by- Shekhar Gupta

Thursday, July 28, 2016

Day 348: Overdressed



China’s garment industry operates on an intimidating scale. It is several times bigger than any garment industry the world has seen at any point in history. The country has more than forty thousand clothing manufacturers and 15 million garment industry jobs. Compare that to the 1.45 million garment and textile industry jobs the United States had at peak employment some forty years ago.

China’s supersize garment industry has achieved a degree of specialization that is beyond belief. There is a coastal city near Shanghai in eastern China that produces most of the world’s socks: nine billion pairs a year. Not too far away in Zhejiang Province is a city dedicated to children’s clothing, with around five thousand factories doing just that. There’s also a proverbial Sweater City and Underwear City, where huge volumes of each are churned out in highly concentrated areas. If you ever wonder how we went from living in a world of relative clothing scarcity to feeling like we’re swimming in the stuff, look no further than China.

In 1935 the founder of Filene’s department store claimed that the main economic conundrum facing the industrialized world was finding ways to distribute all the consumer goods we were able to produce. This was almost fifty years before China’s own industrial revolution blanketed the world with virtually every imaginable consumer product. On the ride to Dongguan, I started up a game of name-that-factory with Lily. We’d pass an austere compound and I’d ask, “What does that one make?” She’d rattle off “laptop,” “TV,” “cell phone,” and an occasional “garment!” One factory she didn’t know the word for and so she made a gesture toward an electronic spire on the top of a building. An antenna factory?

For decades, China solved the conundrum of how to distribute its inexhaustible supply of factory-made goods by making things inexpensively. At most of the factories I visited, I could in theory buy a couple of thousand skirts for $5 apiece, sell them in the United States for $20 apiece (assuming I could sell them directly and pay minimal shipping costs), and make quite a nice profit. Importing goods is not that simple, but there’s an undeniable appeal to making money from China’s miraculous capacity to make cheap, attractive products.

For a half century, Americans have been the world’s leading consumers. We have been busy shopping while the developing world, and more recently China, has been busy making things for us to buy. We have been sucking up more than our fair share of the planet’s resources, but our consumption was somewhat offset by the fact that the developing world used very little. Our consumer habits are now spreading to China, which has more than four times our population and may soon have more than four times the buying power. Ponder this for a second: a population of 1.3 billion people consuming clothes with the furious intensity that Americans do.

This is embarrassing to admit now, but when I packed for my meetings with Chinese factories, I intentionally chose the blandest things I owned. Still imagining Communist-era austerity ruling the Chinese fashion winds, I didn’t want anyone to be overwhelmed by my New York fashion sensibility. But as I walked down the palm-tree-lined pedestrian plazas of Shenzhen in a pair of khakis, canvas slip-ons, and a plain black blouse, I was decidedly outdressed by sharply dressed twentysomethings in knee-high boots and chic leather messenger bags. Lily and Katy were both better dressed than I, in the latest styles for China’s college-educated up-and-comers.

A decade ago China’s fashion industry was almost nonexistent. Today, it’s on the verge of exploding and the country has the world’s fastest-growing fashion and luxury markets. China has had its own edition of Vogue since 2005, and the Shenzhen Garment Industry Association has organized a collective runway show for the city’s designers at London Fashion Week since 2010. High-end American designer Diane von Furstenberg has had a store in Shanghai since 2007.

Sal Giardina recalls that when he first traveled to China for work in 2005 very few people were driving nice cars or wearing fashionable clothes. Just a few years later, fashion had taken hold. In the factories I visited in the spring of 2011, most of the sewing-machine girls were wearing puffer jackets and bedazzled stretch denim, and the boys were in trendy tracksuits, their hair gelled into spiky points. Sewing-machine operators still make a pittance, but it’s obvious that many of them are spending their spare cash on trendy clothes. Giardina agrees. “They’re developing a taste level for better items.”

Initially, as the trappings of Communism fell away, the Chinese followed Western styles and sought out our brands, but this too is changing. Chinese fashion brands are beginning to challenge their foreign competitors for loyalty at home. And Chinese brands are also moving into the American market, such as contemporary women’s wear label JNBY, which has had a flagship store in SoHo since 2010.

China’s growing consumer class and incredible industrial output pose enormous sustainability issues for the global economy and the world’s resources. Giardina states, “If every man, woman, and child in China bought two pair of wool socks, there would be no more wool left in the world. Think about that. So, yes, there will be problems with scarcity of resources. And what’s going to happen is prices will go up.” The country’s growing clothing consumption is already putting upward pressure on the price of fibers, particularly cotton, as demand outstrips supply. According to the Oerlikon fiber study, cotton production is already reaching its limits as competition for arable land intensifies.

Many Americans have forgotten what industrial cities—polluted, inhuman, and deeply ugly—look like. When I was in Dongguan, I kept thinking that the planet had no option but to buckle under all of this manufacturing, and that it clearly already was buckling. Industry on this scale looks like science fiction; stopping it seems as if it would take an equally fictional solution. Also deeply unsettling is the fact that fast fashion is taking hold among Chinese consumers too. Inditex, Zara’s parent company, saw a 32 percent rise in profit in 2010, largely attributed to sales in China, and opened seventy-five stores there that year alone. If China begins to consume clothing at disposable levels, which is what fast-fashion companies are angling for, the environmental and social problems of fashion will only grow exponentially from here.

~~Overdressed: The Shockingly High Cost of Cheap Fashion -by- Elizabeth L. Cline

Wednesday, July 27, 2016

Day 347: The Emperor of All Maladies



On the morning of May 19, 2004, Carla Reed, a thirty-year-old kindergarten teacher from Ipswich, Massachusetts, a mother of three young children, woke up in bed with a headache. “Not just any headache,” she would recall later, “but a sort of numbness in my head. The kind of numbness that instantly tells you that something is terribly wrong.”

Something had been terribly wrong for nearly a month. Late in April, Carla had discovered a few bruises on her back. They had suddenly appeared one morning, like strange stigmata, then grown and vanished over the next month, leaving large map-shaped marks on her back. Almost indiscernibly, her gums had begun to turn white. By early May, Carla, a vivacious, energetic woman accustomed to spending hours in the classroom chasing down five- and six-year-olds, could barely walk up a flight of stairs. Some mornings, exhausted and unable to stand up, she crawled down the hallways of her house on all fours to get from one room to another. She slept fitfully for twelve or fourteen hours a day, then woke up feeling so overwhelmingly tired that she needed to haul herself back to the couch again to sleep.

Carla and her husband saw a general physician and a nurse twice during those four weeks, but she returned each time with no tests and without a diagnosis. Ghostly pains appeared and disappeared in her bones. The doctor fumbled about for some explanation. Perhaps it was a migraine, she suggested, and asked Carla to try some aspirin. The aspirin simply worsened the bleeding in Carla’s white gums.

Outgoing, gregarious, and ebullient, Carla was more puzzled than worried about her waxing and waning illness. She had never been seriously ill in her life. The hospital was an abstract place for her; she had never met or consulted a medical specialist, let alone an oncologist. She imagined and concocted various causes to explain her symptoms—overwork, depression, dyspepsia, neuroses, insomnia. But in the end, something visceral arose inside her—a seventh sense—that told Carla something acute and catastrophic was brewing within her body.

On the afternoon of May 19, Carla dropped her three children with a neighbor and drove herself back to the clinic, demanding to have some blood tests. Her doctor ordered a routine test to check her blood counts. As the technician drew a tube of blood from her vein, he looked closely at the blood’s color, obviously intrigued. Watery, pale, and dilute, the liquid that welled out of Carla’s veins hardly resembled blood.

Carla waited the rest of the day without any news. At a fish market the next morning, she received a call.

“We need to draw some blood again,” the nurse from the clinic said.

“When should I come?” Carla asked, planning her hectic day. She remembers looking up at the clock on the wall. A half-pound steak of salmon was warming in her shopping basket, threatening to spoil if she left it out too long.

In the end, commonplace particulars make up Carla’s memories of illness: the clock, the car pool, the children, a tube of pale blood, a missed shower, the fish in the sun, the tightening tone of a voice on the phone. Carla cannot recall much of what the nurse said, only a general sense of urgency. “Come now,” she thinks the nurse said. “Come now.”

I heard about Carla’s case at seven o’clock on the morning of May 21, on a train speeding between Kendall Square and Charles Street in Boston. The sentence that flickered on my beeper had the staccato and deadpan force of a true medical emergency: Carla Reed/New patient with leukemia/14th Floor/Please see as soon as you arrive. As the train shot out of a long, dark tunnel, the glass towers of the Massachusetts General Hospital suddenly loomed into view, and I could see the windows of the fourteenth-floor rooms.

Carla, I guessed, was sitting in one of those rooms by herself, terrifyingly alone. Outside the room, a buzz of frantic activity had probably begun. Tubes of blood were shuttling between the ward and the laboratories on the second floor. Nurses were moving about with specimens, interns collecting data for morning reports, alarms beeping, pages being sent out. Somewhere in the depths of the hospital, a microscope was flickering on, with the cells in Carla’s blood coming into focus under its lens.

I can feel relatively certain about all of this because the arrival of a patient with acute leukemia still sends a shiver down the hospital’s spine—all the way from the cancer wards on its upper floors to the clinical laboratories buried deep in the basement. Leukemia is cancer of the white blood cells—cancer in one of its most explosive, violent incarnations. As one nurse on the wards often liked to remind her patients, with this disease “even a paper cut is an emergency.”

For an oncologist in training, too, leukemia represents a special incarnation of cancer. Its pace, its acuity, its breathtaking, inexorable arc of growth forces rapid, often drastic decisions; it is terrifying to experience, terrifying to observe, and terrifying to treat. The body invaded by leukemia is pushed to its brittle physiological limit—every system, heart, lung, blood, working at the knife-edge of its performance. The nurses filled me in on the gaps in the story. Blood tests performed by Carla’s doctor had revealed that her red cell count was critically low, less than a third of normal. Instead of normal white cells, her blood was packed with millions of large, malignant white cells—blasts, in the vocabulary of cancer. Her doctor, having finally stumbled upon the real diagnosis, had sent her to the Massachusetts General Hospital.


In the long, bare hall outside Carla’s room, in the antiseptic gleam of the floor just mopped with diluted bleach, I ran through the list of tests that would be needed on her blood and mentally rehearsed the conversation I would have with her. There was, I noted ruefully, something rehearsed and robotic even about my sympathy. This was the tenth month of my “fellowship” in oncology—a two-year immersive medical program to train cancer specialists—and I felt as if I had gravitated to my lowest point. In those ten indescribably poignant and difficult months, dozens of patients in my care had died. I felt I was slowly becoming inured to the deaths and the desolation—vaccinated against the constant emotional brunt.

There were seven such cancer fellows at this hospital. On paper, we seemed like a formidable force: graduates of five medical schools and four teaching hospitals, sixty-six years of medical and scientific training, and twelve postgraduate degrees among us. But none of those years or degrees could possibly have prepared us for this training program. Medical school, internship, and residency had been physically and emotionally grueling, but the first months of the fellowship flicked away those memories as if all of that had been child’s play, the kindergarten of medical training.

Cancer was an all-consuming presence in our lives. It invaded our imaginations; it occupied our memories; it infiltrated every conversation, every thought. And if we, as physicians, found ourselves immersed in cancer, then our patients found their lives virtually obliterated by the disease. In Aleksandr Solzhenitsyn’s novel Cancer Ward, Pavel Nikolayevich Rusanov, a youthful Russian in his midforties, discovers that he has a tumor in his neck and is immediately whisked away into a cancer ward in some nameless hospital in the frigid north. The diagnosis of cancer—not the disease, but the mere stigma of its presence—becomes a death sentence for Rusanov. The illness strips him of his identity. It dresses him in a patient’s smock (a tragicomically cruel costume, no less blighting than a prisoner’s jumpsuit) and assumes absolute control of his actions. To be diagnosed with cancer, Rusanov discovers, is to enter a borderless medical gulag, a state even more invasive and paralyzing than the one that he has left behind. (Solzhenitsyn may have intended his absurdly totalitarian cancer hospital to parallel the absurdly totalitarian state outside it, yet when I once asked a woman with invasive cervical cancer about the parallel, she said sardonically, “Unfortunately, I did not need any metaphors to read the book. The cancer ward was my confining state, my prison.”)
...
In children, leukemia was most commonly ALL—lymphoblastic leukemia—and was almost always swiftly lethal. In 1860, a student of Virchow’s, Michael Anton Biermer, described the first known case of this form of childhood leukemia. Maria Speyer, an energetic, vivacious, and playful five-year-old daughter of a Würzburg carpenter, was initially seen at the clinic because she had become lethargic in school and developed bloody bruises on her skin. The next morning, she developed a stiff neck and a fever, precipitating a call to Biermer for a home visit. That night, Biermer drew a drop of blood from Maria’s veins, looked at the smear using a candlelit bedside microscope, and found millions of leukemia cells in the blood. Maria slept fitfully late into the evening. Late the next afternoon, as Biermer was excitedly showing his colleagues the specimens of “exquisit Fall von Leukämie” (an exquisite case of leukemia), Maria vomited bright red blood and lapsed into a coma. By the time Biermer returned to her house that evening, the child had been dead for several hours. From its first symptom to diagnosis to death, her galloping, relentless illness had lasted no more than three days.

Although nowhere near as aggressive as Maria Speyer’s leukemia, Carla’s illness was astonishing in its own right. Adults, on average, have about five thousand white blood cells circulating per microliter of blood. Carla’s blood contained ninety thousand cells per microliter—nearly twentyfold the normal level. Ninety-five percent of these cells were blasts—malignant lymphoid cells produced at a frenetic pace but unable to mature into fully developed lymphocytes. In acute lymphoblastic leukemia, as in some other cancers, the overproduction of cancer cells is combined with a mysterious arrest in the normal maturation of cells. Lymphoid cells are thus produced in vast excess, but, unable to mature, they cannot fulfill their normal function in fighting microbes. Carla had immunological poverty in the face of plenty.

White blood cells are produced in the bone marrow. Carla’s bone marrow biopsy, which I saw under the microscope the morning after I first met her, was deeply abnormal. Although superficially amorphous, bone marrow is a highly organized tissue—an organ, in truth—that generates blood in adults. Typically, bone marrow biopsies contain spicules of bone and, within these spicules, islands of growing blood cells—nurseries for the genesis of new blood. In Carla’s marrow, this organization had been fully destroyed. Sheet upon sheet of malignant blasts packed the marrow space, obliterating all anatomy and architecture, leaving no space for any production of blood.

Carla was at the edge of a physiological abyss. Her red cell count had dipped so low that her blood was unable to carry its full supply of oxygen (her headaches, in retrospect, were the first sign of oxygen deprivation). Her platelets, the cells responsible for clotting blood, had collapsed to nearly zero, causing her bruises.

Her treatment would require extraordinary finesse. She would need chemotherapy to kill her leukemia, but the chemotherapy would collaterally decimate any remnant normal blood cells. We would push her deeper into the abyss to try to rescue her. For Carla, the only way out would be the way through.

~~The Emperor of All Maladies: A Biography of Cancer -by- Siddhartha Mukherjee

Tuesday, July 26, 2016

Day 346: One-Straw Revolutionary



The person most responsible for articulating the principles of the organic farming movement was Sir Albert Howard (1873–1947). Throughout his life Howard published many books and articles. His best known are 'An Agricultural Testament' (1940) and 'The Soil and Health' (1947). They were written with both general readers and scientists in mind. Howard grew up in the English countryside and was trained as a mycologist at Cambridge University. In 1905, after spending a few years working in the West Indies and a few years teaching agricultural science in England, he traveled to India, where he would spend the next twenty-six years of his life directing agricultural research.

Howard’s first appointment was to the Research Institute at Pusa, near Calcutta. Since he was not familiar with farming in India, he spent most of his time there learning from local farmers, whom he referred to as his “professors.” He watched them produce healthy crops of wheat, chickpeas, and tobacco without using chemical fertilizer or insecticides. Howard also noticed that the draft oxen used at the institute did not suffer from the contagious diseases that plagued animals on the neighboring farms even though they were in such close proximity that they rubbed noses across the fence lines.

“From these observations on plants and animals, Sir Albert was led to the conclusion that the secret of health and disease lay in the soil. The soil must be fertile to produce healthy plants and fertility meant a high percentage of humus. Humus was the key to the whole problem, not only of yields but of health and disease. From healthy plants grown on humus-rich soil, animals would feed and be healthy.” To replace the humus removed from the soil, Howard turned to composting, crediting the Chinese with the idea. In a memorial article for her husband, Louise Howard wrote, “On this crucial question of returning wastes to the soil, he always acknowledged his debt to the great American missionary, F. H. King, whose famous book, The Farmers of Forty Centuries . . . was to him a kind of bible.”

The Chinese system of agriculture described by King and Howard became the model for the worldwide organic farming movement as popularized by J. I. Rodale through Organic Gardening magazine and countless other Rodale publications. Two linked features characterize this system—plowing and lots of work. The decomposition of organic matter in the soil occurs through the process of oxidation, similar to digestion in the human body. The rate of this slow, steady burn is regulated by the amount of oxygen in the soil. In a natural soil the rate of decomposition matches the amount of plant material the soil produces along with the droppings and decaying bodies of animals and microorganisms. When the soil is plowed, the amount of oxygen is increased so the rate of decomposition increases. To maintain the fertility of the soil, new organic matter must be added on a regular basis. That’s where the work comes in, and the need for all that compost.

When human beings first learned to plow, they gained access to the vast reserve of solar energy that had been stored in the organic matter of the soil, but access to this energy came at a very high price in the form of labor, erosion, and other environmental consequences. Albert Howard thought the trade-off was worth it because it allowed civilization to flourish. “[Through cultivation of the soil] man has laid his hand on the great Wheel and for a moment has stopped or deflected its turning. To put it another way, he has for his own use withdrawn from the soil the products of its fertility. That man is entitled to put his hand on the Wheel has never been doubted, except by such sects . . . who argued themselves into a state of declaring it a sin to wound the earth with spades or tools.” He believed that it was perfectly all right to plow the soil, entirely remaking nature in the process, as long as people also put in the hard work to maintain the soil’s fertility. “All the great agricultural systems which have survived have made it their business never to deplete the earth of its fertility without at the same time beginning the process of restoration. This becomes a veritable preoccupation.”

Mr. Fukuoka and the Indigenous people of the world did not think it was a good idea, morally or otherwise, “to withdraw for his own use the products of the soil’s fertility” by plowing. They were content to coexist with the land in a more gentle way. They also did not want to make replacing the earth’s fertility “a veritable preoccupation.”

This dichotomy between nurturing the land for the benefit of all species and using it strictly for the advancement of human civilization is summed up in an illuminating paragraph from Howard’s Soil and Health:

What is agriculture? It is undoubtedly the oldest of the great arts; its beginnings are lost in the mists of man’s earliest days. Moreover, it is the foundation of settled life and therefore of all true civilization, for until man had learnt to add the cultivation of plants to his knowledge of hunting and fishing, he could not emerge from his savage existence. This is no mere surmise: observation of surviving primitive tribes, still in the hunting and fishing stage . . . show them unable to progress because they have not mastered and developed the principle of cultivation of the soil.


In this passage, Howard reveals the smug attitudes of his culture. He maintains that until plowed-field agriculture, the basis of all true civilization, came along, people led a “savage existence.” In a later passage he referred to the way Indigenous people obtained their food as “nothing more than a harvesting process.” Without plowing the soil these primitive people were “unable to progress,” and the only way they could improve themselves was to change their ways to be like his. Of course that would entail remaking their entire way of life and violating their own ethics. Perhaps these people had no interest in joining the “march to progress,” did not believe that dominating nature was a good idea, and felt it was not “man’s destiny” to do so. The irony is that even though things have not gone at all well for human society or the environment over the past ten thousand years, the air of superiority persists.

In 1931 Howard retired from government service and returned to England. An Agricultural Testament was published in 1940. His ideas were immediately attacked by the agricultural establishment, which viewed them as exaggerations and oversimplifications. He was marginalized as an extremist, largely because of his lack of scientific proof and his hard-line stand against the use of synthetic chemicals of any kind. Where were the comparison plots? Where was the data? Science demanded that he meet its empirical criteria, while Howard’s understanding was based largely on whole-systems analysis, intuition, and a lifetime of experience.

~~One-Straw Revolutionary: The Philosophy And Work Of Masanobu Fukuoka -by- Larry Korn

Monday, July 25, 2016

Day 345: Originals



Standing on stage in front of a captive audience, a technology icon pulled a new device out of his pocket. It was so much smaller than competing products that no one in the room could believe their eyes. The founder’s flair for theatrical product launches wasn’t the only source of his fame. He was known for his singular creative vision, a passion for blending science and art, an obsession with design and quality, and a deep disdain for market research. “We give people products they do not even know they want,” he remarked after introducing a revolutionary gadget that helped to popularize the selfie.

The man urged people to think different. He led his company to greatness and redefined multiple industries, only to be unceremoniously forced out by his own board of directors, and then watch the empire he created start to crumble before his eyes.

As much as this story seems to describe Steve Jobs, the visionary was actually one of Jobs’s heroes: Edwin Land, the founder of Polaroid. Today, Land is best remembered for inventing the instant camera, which gave rise to an entire generation of amateur photographers—and enabled Ansel Adams to take his famous landscape photographs, Andy Warhol to make his celebrity portraits, and NASA astronauts to capture the sun. But Land was responsible for something bigger: the polarizing light filter that’s still used in billions of products, from sunglasses and digital watches to pocket calculators and 3-D movie glasses. He also played a vital role in conceiving and designing the U-2 spy plane for President Dwight Eisenhower, which changed the course of the Cold War. In total, Land amassed 535 patents, more than any American before him other than Thomas Edison. In 1985, just a few months before getting kicked out of Apple, Steve Jobs shared his admiration for Land, “one of the great inventors of our time. . . . The man is a national treasure.”

Land may have been a great original, but he failed to instill those attributes in his company’s culture. In an ironic twist, Polaroid was one of the companies that pioneered the digital camera, yet ultimately went bankrupt because of it. As early as 1981, the company was making major strides in electronic imaging. By the end of the decade, Polaroid’s digital sensors could capture quadruple the resolution of competitors’ products. A high-quality prototype of a digital camera was ready in 1992, but the electronic-imaging team could not convince their colleagues to launch it until 1996. Despite earning awards for technical excellence, Polaroid’s product floundered, as by then more than forty competitors had released their own digital cameras.

Polaroid fell due to a faulty assumption. Within the company, there was widespread agreement that customers would always want hard copies of pictures, and key decision makers failed to question this assumption. It was a classic case of groupthink—the tendency to seek consensus instead of fostering dissent. Groupthink is the enemy of originality; people feel pressured to conform to the dominant, default views instead of championing diversity of thought.

In a famous analysis, Yale psychologist Irving Janis identified groupthink as the culprit behind numerous American foreign-policy disasters, including the Bay of Pigs invasion and the Vietnam War. According to Janis, groupthink occurs when people “are deeply involved in a cohesive in-group,” and their “strivings for unanimity override their motivation to realistically appraise alternative courses of action.”

Before the Bay of Pigs fiasco, Undersecretary of State Chester Bowles wrote a memo opposing the idea of sending Cuban exiles to overthrow Fidel Castro, but was dismissed for being fatalistic. A number of President John F. Kennedy’s advisers, in fact, had reservations about the invasion: Some were silenced by group members, and others chose not to speak up. In the meeting on the final decision, only a lone rebel voiced opposition. The president called for a straw poll, a majority voted in favor of the proposal, and the conversation quickly shifted to tactical decisions about its execution.

Janis argued that members of the Kennedy administration were concerned about “being too harsh” and destroying the “cozy, ‘we-feeling’ atmosphere.” Insiders who were present at the discussions shared the view that it was this sort of cohesion that promoted groupthink. As Bill Moyers, who handled correspondence between Kennedy and Lyndon Johnson, recalls:

Men who handled national security affairs became too close, too personally fond of each other. They tended to conduct the affairs of state as if they were a gentlemen’s club. . . . If you are very close . . . you are less inclined, in a debating sense, to drive your opponent to the wall and you very often permit a viewpoint to be expressed and to go unchallenged except in a peripheral way.

When a group becomes that cohesive, it develops a strong culture—people share the same values and norms, and believe in them intensely. And there’s a fine line between having a strong culture and operating like a cult.

For nearly half a century, leaders, policymakers, and journalists have accepted the Janis theory of groupthink: Cohesion is dangerous, and strong cultures are deadly. To solve problems and make wise decisions, groups need original ideas and dissenting views, so we need to make sure that their members don’t get too chummy. Had Kennedy’s advisers not been so tight-knit, they could have welcomed minority opinions, prevented groupthink, and avoided the Bay of Pigs disaster altogether.

There’s just one tiny problem with the cohesion theory: It isn’t true.

When Janis completed his analysis in 1973, it was too early for him to have access to classified documents and memoirs concerning the Bay of Pigs incident. These critical sources of information reveal that the key decision was not made by one small, cohesive group. Richard Neustadt, a political scientist and presidential adviser, explained that Kennedy held “a series of ad hoc meetings with a small but shifting set of top advisers.” Subsequent studies have also demonstrated that cohesion takes time to develop: A group without stable membership has no opportunity to form a sense of closeness and camaraderie. University of Toronto researcher Glen Whyte points out that in the year after the Bay of Pigs, Kennedy led a cohesive group of mostly the same advisers to an effective resolution of the Cuban missile crisis. We now know that the consensus to launch the Cuban invasion “was not the result of a desire to maintain the group’s cohesiveness or esprit de corps,” explains Stanford psychologist Roderick Kramer.

Cohesion doesn’t cause groupthink anywhere else, either. There was another fatal flaw in Janis’s analysis: He studied mostly cohesive groups making bad choices. How do we know that it was actually cohesion—and not the fact that they all ate cereal for breakfast or wore shoes with laces—that drove dysfunctional decisions? To draw an accurate conclusion about cohesion, he needed to compare bad and good decisions, and then determine whether cohesive groups were more likely to fall victim to groupthink.

When researchers examined successful and failed strategic decisions in top management teams at seven Fortune 500 companies, they discovered that cohesive groups weren’t more likely to seek agreement and dismiss divergent opinions. In fact, in many cases, cohesive groups tended to make better business decisions. The same was true in politics. In a comprehensive review, researchers Sally Riggs Fuller and Ray Aldag write, “There is no empirical support. . . . Cohesiveness, supposedly the critical trigger in the groupthink phenomenon, has simply not been found to play a consistent role.” They observe that “the benefits of group cohesion” include “enhanced communication,” and members of cohesive groups “are likely to be secure enough in their roles to challenge one another.” After carefully combing through the data, Whyte concludes that “cohesiveness should be deleted from the groupthink model.”

~~Originals: How Non-Conformists Move the World -by- Adam Grant

Sunday, July 24, 2016

Day 344: Japan: A Reinterpretation



“In fact the whole of Japan is a pure invention,” Oscar Wilde wrote in 1889. “There is no such country, there are no such people.”

Japan had opened to the West just thirty years before Wilde made this observation. Europe was awash in what the French call japonisme. Degas, Manet, Whistler, Pissarro—they were all fascinated by the imagery of Japanese tradition. In 1887 van Gogh decorated Le Père Tanguy with prints of Mount Fuji and geisha in elaborate kimono. Gauguin made gouaches on paper cut to the shape of Japanese fans. This infatuation permeated society. It was reflected on teapots and vases, in the fabric of women’s dresses, and in the way people arranged flowers.

But what did japonisme have to do with Japan as it was? The Japan of the 1880s was erecting factories and assembling steamships, conscripting an army and preparing a parliament. There were universities, offices, department stores, banks. As Wilde elaborated, “The actual people who live in Japan are not unlike the general run of English people; that is to say, they are extremely commonplace, and have nothing curious or extraordinary about them.”

Wilde was ahead of his time. We now have a word, albeit a contentious one, for the phenomenon he touched upon in “The Decay of Lying.” We call it Orientalism. Orientalism meant “the eternal East.” In his account of Japan Wilde left out only the quotation marks, for he was writing about the simple, serene, perfume-scented “Japan” of the Orientalist’s imaginings.

Orientalism was made of received notions and images of the people, cultures, and societies that stretch from the eastern Mediterranean to the Pacific. There was no dynamism or movement in Oriental society. The Orient was fixed in immutable patterns, discernible through the ages and eternally repeated, like the mosaics in Middle Eastern mosques. It did not, in a word, progress. Deprived of the Enlightenment, the East displayed no rational thought, no logic or science. The Oriental merely existed, a creature ruled by fate, timeless tradition, and an ever-present touch of sorrow. The Oriental was “exotic” rather than ordinary, “inscrutable” rather than comprehensible, dusky rather than light. The Orient was the “other” of the West, and the twain would never meet.

Japan, farthest east from the metropolitan capitals and least known among explorers, became the object of extreme Orientalist fantasies as soon as Europeans arrived, in 1542. The first Westerners to record their impressions were missionaries, who took Japan and the Japanese to be a place and a people “beyond imagining,” as an Italian Jesuit put it, “a world the reverse of Europe.” Europeans were tall, the Japanese were short. Churches were high, temples low. European women whitened their teeth, Japanese women blackened theirs. Japan was an antipodean universe, ever yielding, ever prostrate. “The people are incredibly resigned to their sufferings and hardships,” the Jesuit wrote on another occasion, “yet they live quietly and contentedly in their misery and poverty.” Francis Xavier, who arrived in 1549, asked why the Japanese did not write “in our way”—from left to right, across. His Japanese guide replied with a question that would have done Francis some good had he troubled with its implications: Why did Europeans not write in the Japanese way, from right to left, down?

The observations of sixteenth-century Europeans were not pure invention. By tradition Japanese women did blacken their teeth. An air of resignation is as evident among the Japanese today as it must have been then. And Japanese locks—a peculiar obsession among these first visitors, noted again and again—are still opened by turning the key to the left, not (as in the West) to the right. But what makes these observations faintly ridiculous? Why did they produce the enduring idea of a place populated by mysterious gnomes? From our distant point of view it was a simple failure of perspective. The early travelers made no connections: That is, the Japanese were not permitted, if that is the word, their own history, a past by which their great and small differences could have been explained.

Orientalism grew from empire. One of its features was the position of the observer relative to the observed: The one was always superior to the other. As Edward Said stresses in Orientalism, intellectual conventions reflected relationships based on power and material benefit. So Orientalism came into full flower in Britain and France, the great empire builders of the nineteenth century. Japan was never formally a part of anyone’s empire, but it was hardly free of the Orientalism associated with imperial possessions. Its relations with Europe were based on the same material interests and were marked by the same presumed superiority on the part of Europeans.

Today, of course, we call someone from India, Indonesia, Taiwan, or Japan an Asian rather than an Oriental. Our term is an attempt, at least, to acknowledge human complexity and diversity—and equality. To call someone an Oriental would give at least mild offense, because it would recall relations that no longer exist—at least not on maps. But this is not to say that the habits of Orientalism are not still with us, as any Asian can point out. Our Orientalism is remarkable only for its fidelity to the ideas of centuries past: Japanese society is “vertical,” while in the West social relations are “horizontal”; Westerners like competition, the Japanese compromise. When an earthquake struck Kobe in 1995, an American correspondent described the city as “an antipodean New York with more sushi.” Asians stoically accept natural calamities as part of the timeless order of things, he explained, so that “the Japanese of Kobe are ideal disaster victims.”

There was one peculiar aspect of Wilde’s idea of Orientalism. He observed that the image of Japan abroad in the last century was partly a concoction of the Japanese themselves. Wilde called the Japanese “the deliberate self-conscious creation” of artists such as Hokusai, whose woodblock prints were much the fashion at the height of Europe’s japonisme. This was exceptionally astute. We could easily make the same assertion about many of Japan’s leaders and thinkers throughout history. “Japan” has long been an act of the imagination among the Japanese, too, and to call some Japanese Orientalists is to stretch the term but slightly.

~~Japan: A Reinterpretation -by- Patrick Smith

Saturday, July 23, 2016

Day 343: Dataclysm



Nostalgia used to be called mal du Suisse—the Swiss sickness. Their mercenaries were all over Europe and were apparently notorious for wanting to go home. They would get misty and sing shepherd ballads instead of fighting, and when you’re the king of France with Huguenots to burn, songs won’t do. The ballads were banned. In the American Civil War nostalgia was such a problem it put some 5,000 troops out of action, and 74 men died of it—at least according to army medical records. Given the circumstances, being sad to death is actually kind of understandable, but then again, this was also the time of leeches and the bonesaw, so who knows what was really going on. It’s interesting to think that in those days, many of the people who left home did so to go to war—much of the early literature on nostalgia, which was seen then as a bona fide disease, mentions soldiers. In that sepia-toned way I can’t help but think about the past, I like to imagine scientists in 1863, on either side of the Potomac, working furiously against the clock to develop the ultimate war-ending superweapon: high school yearbooks.

I actually don’t even know if they have high school yearbooks anymore. It’s hard to see why you’d need one now that Facebook’s around, although according to the company’s last quarterly report, people under eighteen aren’t using Facebook as much as they used to. So maybe the kids need the printed copy again, I don’t know.1 But however teenagers are staying in touch—whether it’s through Snapchat or WhatsApp or Twitter—I’m positive they’re doing it with words. Pictures are part of the appeal of all of these services, obviously, but you can only say so much without a keyboard. Even on Instagram, the comments and the captions are essential—the photo after all is just a few inches square. But the words are the words are the words. They’re still how feelings come across and how connections are made.

In fact, for all the hand-wringing over technology’s effect on our culture, I am certain that even the most reticent teenager in 2014 has written far more in his life than I or any of my classmates had back in the early ’90s. Back then, if you needed to talk to someone you used the phone. I wrote a few stiff thank-you notes and maybe one letter a year. The typical high school student today must surpass that in a morning. The Internet has many regrettable sides to it, but one thing has always endeared it to me: it’s a writer’s world. Your life online is mediated through words. You work, you socialize, you flirt, all by typing. I honestly feel there’s a certain epistolary, Austenian grandness to the whole enterprise. No matter what words we use or how we tap out the letters, we’re writing to one another more than ever. Even if sometimes

dam gerl

is all we have to say.

Major Sullivan Ballou was one of the soldiers in the Union army, on the Potomac, suffering, and homesick. Early in Ken Burns’s The Civil War, a narrator reads his farewell letter to his wife, to his “very dear Sarah,” and it’s a moving and important moment in the film. The Major was writing from camp before the first large battle of the war, and he was mortally wounded days later. His words were the last his family would ever hear from him, and they drove home the greater sorrow the nation would face in the years to come. Because of the exposure, the Ballou letter has become one of the most famous ever written—when I search for “famous letter,” Google lists it second. It’s a beautiful piece of writing, but think of all the other letters that will never be read aloud, that were burned, lost in some shuffle, or carried off by the wind, or that just moldered away.

Today we don’t have to rely on the lucky accident of preservation to know what someone was thinking or how he talked, and we don’t need the one to stand in for the many. It’s all preserved, not just one man to one wife before one battle, but all to all, before and after and even in the middle of each of our personal battles. You can find readings of the Ballou letter on YouTube, and many of the comments are along the lines of “They just don’t make them like that anymore.” That’s true. But what they, or rather we, are making offers a richness and a beauty of a different kind: a poetry not of lyrical phrases but of understanding. We are at the cusp of momentous change in the study of human communication and what it tries to foster: community and personal connection.

When you want to learn about how people write, their unpolished, unguarded words are the best place to start, and we have reams of them. There will be more words written on Twitter in the next two years than contained in all books ever printed. It’s the epitome of the new communication: short and in real time. Twitter was, in fact, the first service not only to encourage brevity and immediacy, but to require them. Its prompt is “What’s happening?” and it gives users 140 characters to tell the world. And Twitter’s sudden popularity, as much as its sudden redefinition of writing, seemed to confirm the fear that the Internet was “killing our culture.” How could people continue to write well (and even think well) in this new confined space—what would become of a mind so restricted? The actor Ralph Fiennes spoke for many when he said, “You only have to look on Twitter to see evidence of the fact that a lot of English words that are used, say, in Shakespeare’s plays or P. G. Wodehouse novels … are so little used that people don’t even know what they mean now.”

Even basic analysis shows that language on Twitter is far from a degraded form. Below, I’ve compared the most common words on Twitter against the Oxford English Corpus—a collection of nearly 2.5 billion words of modern writing of all kinds—journalism, novels, blogs, papers, everything. The OEC is the canonical census of the current English vocabulary. I’ve charted only the top 100 words out of the tens of thousands that people use, which may seem like a paltry sample, but roughly half of all writing is formed from these words alone (both on Twitter and in the OEC). The first thing to notice on Twitter’s list is this: despite the grumblings from the weathered sentinels atop Fortress English, there are only two “netspeak” entries—rt, for “retweet,” and u, for “you”—in the top 100. You’d think that contractions, grammatical or otherwise, would be staples of a form that only allows a person 140 characters, but instead people seem to be writing around the limitation rather than stubbornly through it. Second, when you calculate the average word length of the Twitter list, it’s longer than the OEC’s: 4.3 characters to 3.4. And look beyond length to the content of the Twitter vocabulary. I’ve highlighted the words unique to it in order to make the comparison easier...
[Table: the 100 most common words on Twitter beside the top 100 in the Oxford English Corpus, with the words unique to the Twitter list highlighted]

While the OEC list is rather drab (lots of helpers and modifiers, workmanlike language to get you to some payoff noun or verb), on Twitter there’s no room for functionaries; every word’s gotta be boss. So you see vivid stuff like:
love
happy
life
today
best
never
home
… make the top 100 cut. Twitter actually may be improving its users’ writing, as it forces them to wring meaning from fewer letters—it embodies William Strunk’s famous dictum, Omit needless words, at the keystroke level. A person tweeting has no option but concision, and in a backward way the character limit actually explains the slightly longer word length we see. Given finite room to work, longer words mean fewer spaces between them, which means less waste. Although the thoughts expressed on Twitter may be foreshortened, there’s no evidence here that they’re diminished.
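
Rudder doesn’t print his pipeline, but the two computations behind this passage (ranking a corpus’s most common words and averaging their lengths) are simple to reproduce. Below is a minimal Python sketch; the crude tokenizer, the toy samples, and the unweighted average over the ranked list are my assumptions for illustration, not the book’s actual method or data.

from collections import Counter
import re

def top_words(text, n=100):
    """Crudely tokenize on lowercase word characters, then return the n most common words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

def mean_word_length(ranked):
    """Unweighted average length of the words on a ranked (word, count) list."""
    return sum(len(word) for word, _ in ranked) / len(ranked)

# Toy stand-ins for the two corpora; in practice each would be a large text sample.
tweet_sample = "love happy life today best never home u rt love today love"
oec_sample = "the of and a to in is was it for on that as you do at this but"

for name, sample in [("twitter", tweet_sample), ("oec", oec_sample)]:
    ranked = top_words(sample)
    print(name, round(mean_word_length(ranked), 2), ranked[:3])

On real data, figures like 4.3 versus 3.4 depend heavily on choices this sketch makes arbitrarily (how hashtags, @-mentions, and URLs are tokenized; whether the average is weighted by frequency), so the book’s exact numbers shouldn’t be expected to fall out of it.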

~~Dataclysm: Who We Are (When We Think No One’s Looking) -by- Christian Rudder

Friday, July 22, 2016

Day 342: Pakistan’s Political Labyrinths



This essay seeks to reframe the policy debate surrounding the role of madaris in the production of militants in Pakistan and elsewhere. The main argument is that analysts must examine the human capital requirements of specific tanzeems (militant organizations), taking into consideration the objectives, tactics, theatres, and ‘quality of terror’ produced, as well as the preferred ‘target recruitment market’ of each particular group in question. Necessarily, this implies that some groups pose more risks than others, based on the scope of their operations, their ties with other organizations (e.g. al Qaeda, the Taliban), their reach (local v. global), and the lethality of the operations they pursue (suicide terrorism v. bazaar attacks). Such an analytical approach is more agile and affords more nuanced conclusions about the connections between education and militancy, and about the concomitant policy implications. It does not seek static answers to the madrasah question; rather, it permits analysis to evolve as groups develop their objectives, targets, theatres, and, indeed, the quality of terror they can perpetrate.

This approach permits the following conclusions. First, groups that operate in more challenging terrains, assail hard targets, or attack targets that are either high-value or carry high opportunity costs of failure are less likely to use militants who are exclusively madrasah trained than are groups that operate in easier areas of operation and engage soft targets or targets with low opportunity costs of failure. Second, considering the prospect that madrasah education could confer some operational benefits – as in sectarian groups – madrasah graduates may be preferred in some operations. In other words, madrasah graduates may be suitable for some kinds of attacks but not for others. Third, even if madrasah students are more inclined towards jihad, they may not be selected by a given militant group if the group has other, more desirable candidates to recruit. Militant groups could become more dependent upon madrasah students over time if their recruitment standards change or if the recruitment market changes. Fourth, madaris produce religious entrepreneurs who justify violence and contribute to communities of support. Madrasah graduates also may build families that support some kinds of violence, and madaris may be the schools of choice for such families. In sum, this analytical framework suggests that madaris merit continual observation, as they may contribute both to the demand for terrorism and to the limited supply of militants. By the same logic, however, Pakistan’s public school sector deserves much more attention than it currently enjoys.

The remainder of this chapter is organized as follows. The first section reviews the literature, laying out the various claims about madrasah enrolments, the number of madaris, and madrasah students’ socio-economic backgrounds, and, finally and perhaps most importantly, the arguments for and against a connection between militancy and madaris. The second section looks very carefully at the various analyses of the presence (or absence) of madrasah products in militant groups. Drawing from this complex and multidisciplinary literature, the third section lays out a new analytical framework. The fourth section revisits the connections between madaris and militancy through this new analytical optic. The fifth and concluding section draws out the policy implications of this approach.

As noted above, despite the proliferation of studies of Pakistan’s madaris, many important questions persist. First, scholars have vigorously disagreed about the number of madaris and their penetration of the educational market. In the popular press, an array of reports suggested that anywhere from 500,000 to two million children were enrolled in Pakistan’s madaris, without any clarity about the level, intensity, or duration of madrasah attendance. The most influential – yet still incorrect – account of the penetration of madaris in the educational market was offered by the International Crisis Group (ICG) in 2002. Relying upon interview data to estimate madrasah enrolment, the ICG claimed that some one-third of all students in Pakistan attend madaris; however, those estimates were derived from an erroneous calculation that, when corrected, yields figures of between 4 and 7 per cent. This miscalculation is regrettable because the report is otherwise very illuminating.

In 2005, Tahir Andrabi, Jishnu Das, Asim Khwaja, and Tristan Zajonc published a study (hereafter the ‘Andrabi study’) that employed data from household-based economic surveys (the Pakistani Integrated Household Surveys, or PIHS, of 1991, 1997, and 2001), from the 1998 Pakistani census, and from household data collected in 2003 in three districts of Punjab province. The Andrabi study, without adjusting for bias in the data, calculated that madaris enjoyed a market share of less than 1 per cent. That is, among all students enrolled in school full time, less than 1 per cent attend madaris. In contrast, the study found that public schools account for nearly 70 per cent of full-time enrolment and private schools for nearly 30 per cent. It should be noted that the Andrabi study asked only about the kind of school attended, not about the kind of education obtained. The study did not adequately consider the fact that religious education is not the exclusive purview of madaris. Indeed, religious education takes place in public schools, under private tutors, in part-time mosque schools, and even in various kinds of private schools.

Since household-based surveys exclude some potential madrasah students (e.g. orphans and homeless children) and are somewhat dated, Andrabi, Das, Khwaja, and Zajonc adjusted their estimates accordingly for excluded groups and population growth. Accounting for these biases, they estimated generously that 475,000 children might attend madaris full time, less than 3 per cent of all full-time enrolments. The Andrabi study’s upper estimates are on the same order of magnitude as the ICG’s corrected figures, suggesting that madaris do not enjoy the market penetration that is widely believed of them.

Another serious caveat to the Andrabi study is that the data the study employed excluded various important areas of the Federally Administered Tribal Areas (FATA) and protected areas of the Northwest Frontier Province (NWFP), where madrasah enrolment could be much higher. The Andrabi study presented evidence that this may be the case: intensity of madrasah enrolment was highest along the Pakistan–Afghanistan border, reaching 7.5 per cent of enrolments in the district of Pishin. This raises the possibility that intensity of madrasah utilization could be just as high, if not higher, in all or parts of the FATA. For these reasons, the study could have underestimated madrasah enrolments, particularly in areas such as the FATA and restricted areas of the NWFP. The Andrabi study did not make any attempt to correct estimates for this exclusion, likely because there is little empirical base upon which such correction could be attempted.

A second area of empirical discord surrounds the number of madaris in Pakistan. In 2000, Jessica Stern claimed that there were 40,000–50,000 madaris in Pakistan; in 2001, Peter Singer estimated 45,000, albeit with some doubt about the figure. The 9/11 Commission Report, citing Karachi’s police commander, claims that there are 859 madaris educating more than 200,000 youth in Karachi alone. In contrast, official Pakistani sources estimate that there were fewer than 7,000 madaris in Pakistan’s four provinces in 2000. Unfortunately, no definitive data source will exist to reconcile these competing claims until Pakistan’s Ministry of Education completes its planned census of all educational institutions in Pakistan.

Yet a third area of empirical concern is the socio-economic backgrounds of madrasah students. Conventional wisdom holds that madaris are the resort of poor students; yet this claim rests uneasily upon the various robust studies of student socio-economic background that utilize 2001 PIHS data. [...] It is true that 43 per cent of madrasah students come from the poorest households (defined as those with annual incomes of less than 50,000 Pakistani rupees [Rs], or U.S. $865 in 2001 dollars), compared to only 40.4 per cent of those in public schools; however, more madrasah students (11.7%) than public school students (3.4%) come from Pakistan’s wealthiest families (those with incomes of Rs 250,000 [$4,325] or greater). In fact, more than one-quarter of madrasah students come from Pakistan’s wealthier families (those with incomes of at least Rs 100,000 [$1,730]), compared to only 21 per cent of students in public schools.
...
Against the vocal assertions that madaris are ‘instruments of mass instruction’ and comprise an essential element of militant production in Pakistan and elsewhere, several scholarly articles as well as editorial pieces have sought to add a corrective view to the madrasah policy fixation. At first blush, many of these studies can be called ‘supply side’ because of their purported focus on the characteristics of the militants who supply labour to militant groups. One recent example is afforded by Peter Bergen and Swati Pandey, who examined the backgrounds of 79 terrorists involved in five of the worst anti-Western terrorist attacks: the 1993 World Trade Center bombing, the 1998 bombings of two US embassies in Africa, the September 11 attacks, the 2002 Bali nightclub bombings, and the London bombings of July 2005. Bergen and Pandey found madrasah involvement to be rare, and further noted that the masterminds of the attacks all had university degrees.

~~Pakistan’s Political Labyrinths: Military, society and terror -ed- Ravi Kalia

Thursday, July 21, 2016

Day 341: The Piano Shop On the Left Bank



Less than a week later a knock sounded on our door at the appointed time, loud and insistent, as if someone could not be bothered to use the bell. When I opened the door there stood before me an older man of about my height but with fully twice my mass in his upper body. His torso was the size and shape of a bass drum; he seemed to be all chest. Behind him, almost hidden by his bulk, lurked a slender young man with a narrow mustache and a nervous look on his face. The large man addressed me in a gruff voice. “You’re expecting a piano.”

“That’s right.”

“Where do you want me to put it?”

“Please come in and I’ll show you.”

We live on what in France is called the premier étage, one up from ground level. Our front door opens from a small, plant-filled courtyard onto a straight staircase that leads directly up to our apartment. He took this in as we ascended and he grunted approvingly: “No spiral staircase, that’s good.”

I thought of all the tiny twisting staircases that are so common in Paris and wondered what contortions must sometimes be necessary to deliver pianos. When I indicated the corner of the main room where I wanted the piano, he nodded. “No interior doors or hallways; this will be quick.”

“Will you and your crew need anything special for the assembly?” I assumed that there were at least three or four other men in a truck at the curb, waiting with the piano.

“What crew?”

“I mean . . . Well, how will you get the piano up here? Do you put a ramp on the staircase or something?”

“We’ll bring it up the same way we always do. Trust me; we’ve done this before.”

With that, he and the skinny young man marched down the stairs, leaving the front door wide open. Less than two minutes later I heard a chuffing noise out in the courtyard. I looked out the window and saw a huge black mass—our legless piano—making its way across the cobblestones, borne sideways on the shoulder of the barrel-chested man. The assistant trailed behind, his hand on the tail of the piano but apparently bearing none of its enormous weight.

At the open front door they paused and set the back tip of the piano on the doormat. I raced down the stairs, utterly amazed at what I had just seen and unsure how they proposed to come up the staircase. The older man stood before me, the piano strapped to his back with wide brown leather straps blackened and shiny from years of sweat. They ran diagonally over his shoulders and under his arms and looped around the piano so that the side curve of the cabinet hooked across his right shoulder, its snub tail resting on the ground. He was breathing heavily.

“Surely it’s not just the two of you! Can I help somehow?”

“Monsieur,” he stammered as he gasped for air, “I’ll tell you what I tell all our clients. Just stand clear and let us do our work.”

I ran up the stairs, baffled by how such a huge weight and massive bulk could be moved up the staircase by these two. Suddenly from below came a hoarse and rhythmic shout:

“Un, deux, trois: allez!”

The older man leaned into his straps and tilted forward so the full weight of the piano—nearly six hundred pounds—rested once again on his back. He then headed up the stairs, slowly but methodically. I watched, horrified but fascinated, powerless to help. The piano bowed him low and the straps disappeared into his flesh, pressing deep furrows through his shirt into the muscle and bone below. The younger man followed behind, carrying nothing but holding the tip of the piano and pushing it forward. I thought of the dragging tail wheel on an old airplane whose sole function is to stabilize.

About a third of the way up the stairs the man paused and stood partly up from his stoop. There was a precarious wobble as the mass of the piano swayed slightly, and I had a vision of a singular disaster on our staircase: if the piano went, this man went, too. He was literally strapped to his load.

He exhaled hugely, like a draft animal at maximum exertion, and straightened a little. Then, with a quick intake of air through his clenched teeth, he leaned back into the straps and continued up the steps. This pause was repeated once more before the top, all the more terrifying for being higher on the staircase. The young man’s position was almost comically dangerous, as in a cartoon; if the piano slipped he would be crushed instantly.

At last the summit was achieved and the tail of the piano set down once again. The man before me had been transfigured into a red-faced mass of sweating muscle and bulging veins. As if to pause too long would break some strange spell that gave him power, after only a few seconds he once again hefted the entire cabinet and crossed the room, each footstep shaking the apartment mightily. He set it down in the corner on its side. At once the younger man attached two of the legs to the exposed underside. Then the older man lifted the piano to the horizontal while his assistant scurried underneath and attached the third leg.

The whole undertaking from the bottom of the stairs had taken perhaps three minutes, but I felt as if we had shared some major life experience. I had just witnessed the single most extraordinary feat of human strength that I could imagine.

~~The Piano Shop On the Left Bank: Discovering a Forgotten Passion In a Paris Atelier -by- Thad Carhart

Wednesday, July 20, 2016

Day 340: Color: A Natural History of the Palette



Ochre


In the lakelands of Italy there is a valley with ten thousand ancient rock carvings. These petroglyphs of Valle Camonica are signs that Neolithic people lived there once, telling stories and illustrating them with pictures. Some show strangely antlered beasts, too thin to provide much meat for a feast, and others show stick-people hunting them with stick-weapons. Another rock has a large five-thousand-year-old butterfly carved into it—although my visit coincided with that of a horde of German schoolchildren queuing up to trace it, and sadly I couldn’t see the original through all the paper and wax crayons.

But in a quieter place, far away from the groups, I found a flat dark rock covered with fifty or more designs for two-story houses with pointy roofs. It didn’t feel particularly sacred to me as I stood looking at it. It was more like an ancient real estate office or an architect’s studio, or just a place where people sat and idly carved their domestic dreams. The crude carvings are not colored now, of course: any paints would have disappeared long ago in the Alpine rain. But as I sat there, contemplating the past, I saw what looked like a small stone on the ground. It was a different color from all the other mountain rubble—whatever it was, it didn’t belong.

I picked it up and realized something wonderful. It didn’t look promising: a dirty pale brown stub of claylike earth about the size and shape of a chicken’s heart. On the front it was flat and on the back there were three planes like a slightly rounded three-sided pyramid. But when I placed the thumb and the first two fingers of my right hand over those three small planes, it felt immensely comfortable to hold. And what I realized then was that this piece of clay was in fact ochre, and had come from a very ancient paintbox indeed. I wet the top of it with saliva, and once the mud had come off it was a dark yellow color, the color of a haystack. When, copying the carvings, I drew a picture of a two-story house on the rock, the ochre painted smoothly with no grit: a perfect little piece of paint. It was extraordinary to think that the last person who drew with it—the person whose fingers had formed the grooves—lived and died some five thousand years ago. He or she had probably thrown this piece away after it had become too small for painting. A storm must have uncovered it, and left it for me to find.

Ochre—iron oxide—was the first color paint. It has been used on every inhabited continent since painting began, and it has been around ever since, on the palettes of almost every artist in history. In classical times the best of it came from the Black Sea city of Sinope, in the area that is now Turkey, and was so valuable that the paint was stamped with a special seal and was known as “sealed Sinope”: later the words “sinopia” or “sinoper” became general terms for red ochre. The first white settlers in North America called the indigenous people “Red Indians” because of the way they painted themselves with ochre (as a shield against evil, symbolizing the good elements of the world, or as a protection against the cold in winter and insects in summer), while in Swaziland’s Bomvu Ridge (Bomvu means “red” in Zulu), archaeologists have discovered mines that were used at least forty thousand years ago to excavate red and yellow pigments for body painting. The word “ochre” comes from the Greek meaning “pale yellow,” but somewhere along the way the word shifted to suggest something more robust—something redder or browner or earthier. Now it can be used loosely to refer to almost any natural earthy pigment, although it most accurately describes earth that contains a measure of hematite, or iron ore.

There are big ochre mines in the Luberon in southern France and even more famous deposits in Siena in Tuscany: I like to think of my little stub of paint being brought from that area by Neolithic merchants, busily trading paint-stones for furs from the mountains. Cennino Cennini wrote of finding ochre in Tuscany when he was a boy walking with his father. “And upon reaching a little valley, a very wild steep place, scraping the steep with a spade, I beheld seams of many kinds of color,” he wrote. He found yellow, red, blue and white earth, “and these colors showed up in this earth just the way a wrinkle shows in the face of a man or a woman.”

I knew there would be stories to be uncovered in many ochre places—from Siena to Newfoundland to Japan. But for my travels in search of this first colored paint I wanted to go to Australia—because there I would find the longest continuous painting tradition in the world. If I had been charmed by my five-thousand-year-old ochre, how much more charmed would I be in Australia where cave painters used this paint more than forty thousand years ago? But I also knew that in the very center of Australia I would find the story of how that ancient painting tradition was transformed to become one of the most exciting new art movements in recent years.

Before I left for Australia I called an anthropologist friend in Sydney, who has worked with Aboriginal communities for many years. At the end of our phone conversation I looked at the notes I had scribbled. Here they are:

It’ll take time. Lots.

Ochre is still traded, even now.

Red is Men’s Business. Be careful.


I had absentmindedly underlined the last point several times. It seemed that the most common paint on earth was also sometimes the most secret. Finding out about ochre was going to be a little more complicated than I had thought.

~~Color: A Natural History of the Palette -by- Victoria Finlay