Monday, August 31, 2015

Day 17

First of all, you are not alone. Right now, whether you’re lying in bed or sitting on the beach, you’re in the company of thousands of living organisms—bacteria, insects, fungi, and who knows what else. Some of them are inside you—your digestive system is filled with millions of bacteria that provide crucial assistance in digesting food. Constant company is pretty much the status quo for every form of life outside a laboratory. And a lot of that life is interacting as organisms affect one another—sometimes helpfully, sometimes harmfully, sometimes both.

Which leads to the second point—evolution doesn’t occur on its own. The world is filled with a stunning collection of life. And every single living thing—from the simplest (like the schoolbook favorite, the amoeba) to arguably the most complex (that would be us)—is hardwired with the same two command lines: survive and reproduce. Evolution occurs as organisms try to improve the odds for survival and reproduction. And because, sometimes, one organism’s survival is another organism’s death sentence, evolution in any one species can create pressure for evolution in hundreds or thousands of other species.
...
Outbreaks and pandemics are thought to be caused by antigenic drift, when a mutation occurs in the genetic material of a virus, or antigenic shift, when a virus acquires new genes from a related strain. When the antigenic drift or shift in a virus is significant enough, our bodies don’t recognize it and have no antibodies to fight it—and that spells trouble. It’s like a criminal on the run taking on a whole new identity so his pursuers can’t recognize him. What causes antigenic drift? Mutations, which can be caused by radiation. Which is what the sun spews forth in significantly greater than normal amounts every eleven years.

The potential for evolution begins when a mutation occurs during the reproductive process of a given organism. In most cases, that mutation will have a harmful effect or no effect at all. Rarely, a random mutation will confer an advantage on its carrier, giving it a better chance to survive, thrive, and reproduce. In those cases, natural selection comes into play, the mutation spreads throughout the population through successive generations, and you have evolution. Adaptations that confer truly significant benefit will eventually spread across an entire species, as when a strain of the flu virus acquires a new characteristic that allows it to go pandemic. But organisms, so the collective wisdom went, only happen upon helpful mutations by chance.

~~Survival of the Sickest -by- Dr. Sharon Moalem

Sunday, August 30, 2015

Day 16

When I was a postdoctoral fellow at Fermilab, Marvin Minsky, one of the founding fathers of strong AI and a signatory of the 1956 Dartmouth Proposal, came to give a colloquium. After presenting his arguments as to why machines would soon be thinking (this was 1986), I asked him if, in that case, they would also develop mental pathologies such as psychosis and bipolar illness. His answer, to my amazement and incredulity, was a categorical “Yes!” Half-jokingly, I then asked if there would be machine therapists. His answer, again, was a categorical “Yes!” I guess these therapists would be specialized debuggers, trained in both programming and machine psychology.

One could argue, contra Minsky, that if we could reverse engineer the human brain to the point of being able to properly simulate it, we would understand quite precisely the chemical, genetic, and structural origin of such mental pathologies and should deal with them directly, creating perfectly healthy simulated minds. In fact, such medical applications are one of the main goals of replicating a brain inside computers. One would have a silicon-based laboratory to test all sorts of treatments and drugs without the need for human subjects. Of course, all this would depend on whether we would still be here by then.

Such sinister dreams of transhuman machines are the stuff of myths, at least for now. For one thing, Moore’s Law is not a law of Nature but simply reflects the fast-paced development of processing technologies, another tribute to human ingenuity. We should expect that it will break down at some point, given physical limitations in computing power and miniaturization. However, if the myth were to turn real, we would have much to fear from these codes-that-write-better-codes digital entities. To what sort of moral principles would such unknown intelligences adhere? Would humans as a species become obsolete and thus expendable? Kurzweil and others believe so and see this as a good thing. In fact, as Kurzweil expressed in his The Singularity Is Near, he can’t wait to become a machine-human hybrid. Others (presumably most in the medical and dental profession, athletes, bodybuilders, and the like) would not be so enthusiastic about letting go of our carbon carcasses. Pressing the issue a bit further, can we even understand a human brain without a human body? This separation may be unattainable, body and brain so intertwined with one another as to make it meaningless to consider them separately. After all, a big chunk of the human brain (the same for other animals) is dedicated to regulating the body and the sensory apparatus. What would a brain be like without the grounding activity of making sure the body runs? Could a brain or intelligence exist only to process higher cognitive functions—a brain in a jar? And would a brain in a jar have any empathy with or understanding of physical beings?

~~The Island of Knowledge -by- Marcelo Gleiser

Saturday, August 29, 2015

Day 15

In the construction and consolidation of Muslim identity as distinct from the Hindus, the role of the Mussalmani Bangla was immensely important. Although the doctrinal differences between the two principal communities were wide and varied, historically these differences were not of such importance as to divide them in blocs antagonistic to one another. In fact, any unprejudiced consideration of historical Islam would suggest that ‘the basic doctrinal principles had very little to do with the political confrontation between the Muslims and the Hindus’. It was only through skilful manipulation of certain religious symbols and constant ideological propaganda that these differences were articulated and utilised in strengthening the claim for a separate homeland for the Muslims. A well-designed scheme in this direction was the Mussalmani Bangla, a curious hybrid which made indiscriminate use of Arabic, Persian and Urdu words.

The story began in the late nineteenth century, but the most significant step was undoubtedly the formation of the Islam Mission Samity in 1904 at the behest of Maniruzzaman Islamabadi, a self-styled preacher of repute seeking to undertake a programme of revivalism and reform in Bengal. In its inaugural meeting, the Samity pledged to pursue the following plan of action to popularise Islam among the Bengali Muslims who were apparently ‘ignorant of their cultural roots’:

(a) publication of booklets in simple Bengali on religion and to arrange their free distribution;
(b) publication of a magazine (Islam Darshan or Muslim Dharma) as a mouthpiece of the mission for free distribution;
(c) appointment of salaried missionaries, who would undertake preaching in different parts of Bengal;
(d) sending of preachers and missionaries to the remote corners of Bengal where the rays of Islam had not penetrated;
(e) translation of religious books on Islam into Bengali;
(f) establishment of contacts with Anjumans and such other bodies in different parts of Bengal; and
(g) setting up of a national library for the benefit of preachers, speakers and missionaries.

The basic objective was two-fold: first, the fact that Islam was a cementing force was recognised and its role in both constructing and consolidating a powerful Muslim bloc was, therefore, immensely significant. Secondly, in order to establish the Muslims as a pre-eminent political group in the public arena, the Samity suggested specific programmes involving not only the salaried missionaries but also those ‘interested in safeguarding the interests of Muslims in Bengal’. It is true that the Samity never became as effective as was anticipated, but it had certainly contributed to a process that loomed large in the course of time. The Muslim intellectuals realised the importance of creating a space for themselves, not only for survival but also for strengthening their claim for power and privileges in the new environment created in the aftermath of political and institutional changes introduced by the colonial administration.

What was initiated by the Islam Mission Samity in 1904 blossomed fully with the formation of Bangiya Mussalman Sahitya Samity in 1911 in Calcutta, in which renowned Muslim intellectuals - Maniruzzaman Islamabadi, Mohammad Shahidullah, Mozammel Haq, Eyakub Ali and Hatem Ali Khan - participated. Its principal aim was to bring about ‘a national awakening of the Bengal Muslims through the creation of an exclusively Muslim literature or national literature’, which was absolutely necessary ‘to develop the community as strongly as the Hindus’. Drawing on Islam, the Samity also articulated its objectives in such a way as to consolidate the Islamic identity in opposition to the Hindus.

~~The Partition of Bengal And Assam -by- Bidyut Chakrabarty

Friday, August 28, 2015

Day 14

Not many direct records exist of Vermeer’s methods regarding material preparation, but the raw materials lapis lazuli, ochres, white lead, and ivory black would have been brought back to Holland by the Dutch merchants who, by now, had mapped trading routes all around the world. In the novel Girl with a Pearl Earring, Tracy Chevalier alludes to the methods that Vermeer probably would have used. As she says, he would have ground black pigment from a piece of ivory and also ground his own white lead pigment by crushing it until it was a fine paste, then burnt yellow ochres by the fire to make them turn red and dark brown; he would have washed, purified, and ground the precious and expensive lapis lazuli to make ultramarine pigment. All this needed a direct and thorough knowledge of what supplies were available, the best new techniques for processing and purifying substances and making them into paint, news of the arrival of new consignments of pigments and resins from abroad and even information about brand new materials that might be of use. Vermeer would have had strong links with other trades specifically dealing with color such as weavers, dyers, ceramists, pharmacists, and printers. Artists were often members of other guilds as well as their own, such as the pharmacy guild.

How would this knowledge have affected Vermeer’s painterly technique? I have made a study of the art materials and techniques of a number of historically recognized artists over several years and can give an account of the ground and paint used by Vermeer. The ground consists of the layers that all painters need to put down on the canvas before the paint. It serves as a background for the paint and prevents it from sinking into the canvas and possibly causing it to rot. It is well known to artists that grounds make themselves “felt” no matter how thick the overlaid paint, as recognized by Max Doerner (1950), one of the few art experts to talk of the physical construction of paintings and of their effect.
...
I made up Vermeer’s ground following the ingredients given by the National Gallery’s technical department and using approximately the same amounts of each substance as in the original. In the painting Lady Standing at the Virginals, Vermeer’s distinct ground is primarily lead white with some ochre, a trace of red lead, a blackish-brown pigment, glue size, and water (Kirby 1997). When these ingredients are put together they make a very unusual and surprising substance. It was quite a shock to find this was under Vermeer’s finely painted, luminous image. Ochres are heavy, earthy pigments made from clay, which maintain some of their rich body throughout the ground-making process. Put together with the soft, heavy pigment, lead white, and the other ingredients, they create a grayish-brown combination with a texture that can only be described as like molten volcanic rock. Just as the under-layer affected the works of Van Eyck and Rubens, the gravity and weighted solidity of this ground mixture affects Vermeer’s painting and accounts in some way for its gravity and dynamism, despite the fact that the work is just over eighteen inches in length.

But why is the ground gray? According to the psycho-physiologist Hering (1964), who based his color theory on the psychological and physiological perception of color, color works in binary oppositions; yellow-blue, green-red and black-white (unlike the Young-Helmholtz theory). Each pair of colors is distinct but works in relation, and one is needed for the other to function. Within this the color gray works as an identity element between all the color opposites. The grayness of Vermeer’s ground shines through the colors, not obviously but in terms of a nuance. This underlying tone in the painting is the unifying element that aids the sense of harmony of the colors.

The ingredients of Vermeer’s paint are also very distinct. He used Venice turpentine, made from the resin of the larch tree, and linseed oil. Add enough resin to the paint mixture and one ends up with an enamel-like, glassy quality. Linseed oil added to Venice turpentine is thick and viscous and can take a large amount of pigment. Intensity of pigment makes intensity of color. The enamel viscosity of the paint prevents the colors from sinking into the dry, dark ground and losing their intensity. The colors sit on the canvas as a homogenous body, with the ground coming through from below. The effect is comprehensive. Take the yellow paint of the woman’s skin in Lady Standing at the Virginals, for instance. It has a very subtle, bluish tinge, and it is hard to say whether the color is blue or yellow. Yet the two are psychologically opposite and, in Hering’s experiments, cannot be one and also the other. Vermeer’s skilful combination of materials stimulates the vision and senses of the viewer and enables a discriminating response.

~~Color, Facture, Art and Design -by- Iona Singh

Thursday, August 27, 2015

Day 13

Since the Second World War, increases in GDP, educational attainment, and life span have forced the industrialized world to grapple with something we’d never had to deal with on a national scale: free time. The amount of unstructured time cumulatively available to the educated population ballooned, both because the educated population itself ballooned, and because that population was living longer while working less. (Segments of the population experienced an upsurge of education and free time before the 1940s, but they tended to be in urban enclaves, and the Great Depression reversed many of the existing trends for both schooling and time off from work.) This change was accompanied by a weakening of traditional uses of that free time as a result of suburbanization—moving out of cities and living far from neighbors—and of periodic relocation as people moved for jobs. The cumulative free time in the postwar United States began to add up to billions of collective hours per year, even as picnics and bowling leagues faded into the past. So what did we do with all that time? Mostly, we watched TV.
...
This isn’t just an American phenomenon. Since the 1950s, any country with rising GDP has invariably seen a reordering of human affairs; in the whole of the developed world, the three most common activities are now work, sleep, and watching TV. All this is despite considerable evidence that watching that much television is an actual source of unhappiness. In an evocatively titled 2007 study from the Journal of Economic Psychology—“Does Watching TV Make Us Happy?”—the behavioral economists Bruno Frey, Christine Benesch, and Alois Stutzer conclude that not only do unhappy people watch considerably more TV than happy people, but TV watching also pushes aside other activities that are less immediately engaging but can produce longer-term satisfaction. Spending many hours watching TV, on the other hand, is linked to higher material aspirations and to raised anxiety.

The thought that watching all that TV may not be good for us has hardly been unspoken. For the last half century, media critics have been wringing their hands until their palms chafed over the effects of television on society, from Newton Minow’s famous description of TV as a “vast wasteland” to epithets like “idiot box” and “boob tube” to Roald Dahl’s wicked characterization of the television-obsessed Mike Teavee in Charlie and the Chocolate Factory. Despite their vitriol, these complaints have been utterly ineffective—in every year of the last fifty, television watching per capita has grown. We’ve known about the effects of TV on happiness, first anecdotally and later through psychological research, for decades, but that hasn’t curtailed its growth as the dominant use of our free time.
...
Imagine treating the free time of the world’s educated citizenry as an aggregate, a kind of cognitive surplus. How big would that surplus be? To figure it out, we need a unit of measurement, so let’s start with Wikipedia. Suppose we consider the total amount of time people have spent on it as a kind of unit—every edit made to every article, and every argument about those edits, for every language that Wikipedia exists in. That would represent something like one hundred million hours of human thought, back when I was talking to the TV producer. (Martin Wattenberg, an IBM researcher who has spent time studying Wikipedia, helped me arrive at that figure. It’s a back-of-the-envelope calculation, but it’s the right order of magnitude.) One hundred million hours of cumulative thought is obviously a lot. How much is it, though, compared to the amount of time we spend watching television?

Americans watch roughly two hundred billion hours of TV every year. That represents about two thousand Wikipedia projects’ worth of free time annually. Even tiny subsets of this time are enormous: we spend roughly a hundred million hours every weekend just watching commercials. This is a pretty big surplus. People who ask “Where do they find the time?” about those who work on Wikipedia don’t understand how tiny that entire project is, relative to the aggregate free time we all possess. One thing that makes the current age remarkable is that we can now treat free time as a general social asset that can be harnessed for large, communally created projects, rather than as a set of individual minutes to be whiled away one person at a time.
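Shirky's comparison is a back-of-the-envelope division, and it can be checked in a few lines. This is a minimal sketch using only the two figures quoted in the excerpt (the hundred-million-hour Wikipedia estimate and the two-hundred-billion-hour TV estimate); the variable names are my own.

```python
# Figures quoted in the excerpt (order-of-magnitude estimates, not exact data)
wikipedia_hours = 100_000_000            # ~cumulative human hours invested in Wikipedia
us_tv_hours_per_year = 200_000_000_000   # ~annual US television viewing

# How many "Wikipedias' worth" of free time Americans spend on TV each year
wikipedias_per_year = us_tv_hours_per_year / wikipedia_hours
print(int(wikipedias_per_year))  # 2000
```

The point of the arithmetic is the ratio, not the precision: even if either estimate were off by a factor of two, the surplus would still dwarf the project by three orders of magnitude.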

~~ Cognitive Surplus -by- Clay Shirky

Wednesday, August 26, 2015

Day 12

The capstone of my third year of medical school was the crucial clerkship in internal medicine. How well you did in that clerkship was reputed to determine your professional future. I was at a lecture when my supervising resident, a few years ahead of me in her training, came into the classroom, tears in her eyes, and whispered to me that Mr. Quinn, a patient I’d been caring for, had just died. I got up and went with her to his bedside. We stood there together for a long time. He had been a feisty merchant marine, his face roughened from years at sea. I used to sit with him after those long days at the hospital, soaking up his stories, listening to his feelings about his impending death. He knew that his seventy years on the planet were coming to an end, his adventures almost over. Now his life story was complete, and the resident and I shared our reflections as we stood by the body that had sailed his ship at sea.

That afternoon I met with the senior attending physician for my mid-rotation student progress review. He was quite an imposing figure, tall, black-bearded, and handsome, an oncologist, who told me that I was doing a “fine job” in my clerkship—except for one thing. He noticed that I had left the teaching rounds that morning. I told him about Mr. Quinn’s death and about how my resident and I had wanted to stay with him until the orderlies came to take his body away. Then the physician said something I will never forget: “Daniel, you have to realize that you are here to learn. Taking time away from a learning opportunity is a big problem. You have to get over these feelings—patients just die. There is no time for tears. Your job is to learn. To be an excellent doctor, you have to deal with just the facts.”

No time for tears. Was this the art of medicine I was supposed to be learning?

The next day I went to Mr. Quinn’s old room to admit a new patient. There I found one of my favorite science instructors sitting on the bed. He smiled at me and said, “Well, I guess these diseases can happen to any of us.” He had developed acute leukemia, and I was supposed to begin preparing him for a bone marrow transplant. My face filled with intensity—first tears, which I held back; then fear, which I could not bear to sense; and finally stern resolve, a steely-eyed feeling of focus. I committed my mind to “get over” my fear and sadness and just attend to the details of what needed to be done. I ordered the necessary lab work, carefully administered the chemotherapy, watched closely for side effects, and intensely monitored my teacher/patient’s progress. I went to the library and gathered all the research facts I could about his form of leukemia, the treatment, and the prognosis. I presented these papers and the “clinical case” to my team of fellow students, residents, and supervising physicians. On teaching rounds in the patient’s room and beside his door, I discussed the technical details of the case with my attending and residents: just the facts, no feelings. I was careful not to spend much time talking with my patient. He was the sick one, I was the doctor. What was there to talk about, anyway?
...
Over the last quarter century, science has opened a new window into the nature of our lives. We can state definitively that the mind, though not visible to the eye, is unequivocally “real.” Medicine too has progressed since those days. Harvard Medical School has changed, and many programs today give at least some attention to notions such as empathy and stress reduction in student physicians and the importance of seeing the patient as a person. I would have had a much better experience becoming a physician with such an internally focused, well-rounded curriculum.

~~Mindsight -by- Daniel J. Siegel, M.D.

Tuesday, August 25, 2015

Day 11

In 1500 cotton textiles were the centre of the manufacturing life of the Indian subcontinent and the foundation of a wide-ranging trade that spread from India via land and sea to as far as Indonesia and Japan in the east and Saudi Arabia, Ethiopia, Egypt and West Africa in the west. Various types of textiles, and in particular cotton textiles, were traded by Indian merchants in exchange for a variety of commodities ranging from spices and foodstuffs to specie and luxuries. The regions of Gujarat in western India, Coromandel in its southern part and later Bengal in the east were among the most thriving centres of manufacturing within a well-articulated system of exchange.

Europeans could just marvel at the scale, sophistication and articulation of such trade. John Huyghen van Linschoten noted in his Voyage to the East Indies (1598) a “great traffique into Bengala, Pegu, Sian, and Malacca, and also to India”, adding that “there is excellent faire linnen of Cotton made in Negapatan, Saint Thomas, and Masulepatan, of all colours, and woven with divers sorts of loome workes and figures, verie fine and cunningly wrought, which is much worne in India, and better esteemed then silke, for that is higher prised than silke, because of the finenes and cunning workmanship”.
...
There is no doubt that in the 1760s the Indian subcontinent was the major producer and trader of cotton textiles in the world. Fifty years later, in the 1810s, this was no longer the case. Europe or, to be more precise, the north-western corner of the continent called England, was fast overtaking India in the production of cotton textiles thanks to the use of machinery and the consequent reduction of the cost of production. This is a classic narrative that we succinctly call ‘The Industrial Revolution’. It is complemented by explanations of how Europe rose to world dominance in the production of cotton textiles and how, as a consequence, the strong trading position of Britain became increasingly evident. Europe, as India had done for centuries, was now not just the core of manufacturing but also the main carrier of cotton textiles across the globe. This narrative of European achievement is the result of generations of economic historians painstakingly collecting data, analysing documents and creating hypotheses on the modalities, causes and effects of a process that has been mostly seen as European in nature and characterised by a certain northern English accent. The very concept of a revolutionary event implies a break or discontinuity with the past, and more precisely with the long tradition of cotton textile manufacturing and trade that for centuries had characterised India.

The Indian subcontinent plays no part in the story of a process of European industrialisation, although it can be argued that the subcontinent later became a victim of competition or prey to political imperialism. The lack of Indian agency can be contrasted with some rather exaggerated European confidence and boisterousness. What seems to be often forgotten is that Europe did not suddenly acquire the skills, knowledge and ‘outlook’ to produce and sell cotton textiles, and that even when it did, they were not the simple result of technological innovation. It was a long process of learning, starting back in the 1500s, that eventually led, a couple of centuries later, to Europe becoming one of the world’s leading cotton manufacturing areas.
...
[The] intercontinental trade in cotton textiles was key to creating the conditions for the later development of a European cotton industry. It is the relationship between India and Europe, mediated as it were through the so-called East India companies (in particular the Dutch (VOC) and the English (EIC) companies), not European exceptionalism, that explains the mechanisation, industrialisation and re-location of the cotton industry from India to Europe at the end of the eighteenth century.

~~[ESSAY 1] INTRODUCTION: THE WORLD OF SOUTH ASIAN TEXTILES, 1500-1850 -by- Giorgio Riello and Tirthankar Roy

~~[ESSAY 2] THE INDIAN APPRENTICESHIP: THE TRADE OF INDIAN TEXTILES AND THE MAKING OF EUROPEAN COTTONS -by- Giorgio Riello
from the book "How India Clothed the World"

Monday, August 24, 2015

Day 10


Every year, 9 million children die before their fifth birthday. A woman in sub-Saharan Africa has a one-in-thirty chance of dying while giving birth—in the developed world, the chance is one in 5,600. There are at least twenty-five countries, most of them in sub-Saharan Africa, where the average person is expected to live no more than fifty-five years. In India alone, more than 50 million school-going children cannot read a very simple text.

This is the kind of paragraph that might make you want to shut this book and, ideally, forget about this whole business of world poverty: The problem seems too big, too intractable. Our goal with this book is to persuade you not to.

A recent experiment at the University of Pennsylvania illustrates well how easily we can feel overwhelmed by the magnitude of the problem. Researchers gave students $5 to fill out a short survey. They then showed them a flyer and asked them to make a donation to Save the Children, one of the world’s leading charities. There were two different flyers. Some (randomly selected) students were shown this:

Food shortages in Malawi are affecting more than 3 million children. In Zambia, severe rainfall deficits have resulted in a 42% drop in maize production from 2000; as a result, an estimated 3 million Zambians face hunger. Four million Angolans—one third of the population—have been forced to flee their homes. More than 11 million people in Ethiopia need immediate food assistance.

Other students were shown a flyer featuring a picture of a young girl and these words:

Rokia, a 7-year-old girl from Mali, Africa, is desperately poor and faces a threat of severe hunger or even starvation. Her life will be changed for the better as a result of your financial gift. With your support, and the support of other caring sponsors, Save the Children will work with Rokia’s family and other members of the community to help feed her, provide her with education, as well as basic medical care and hygiene education.

The first flyer raised an average of $1.16 from each student. The second flyer, in which the plight of millions became the plight of one, raised $2.83. The students, it seems, were willing to take some responsibility for helping Rokia, but when faced with the scale of the global problem, they felt discouraged.
...
The students’ reaction is typical of how most of us feel when we are confronted with problems like poverty. Our first instinct is to be generous, especially when facing an imperiled seven-year-old girl. But, like the Penn students, our second thought is often that there is really no point: Our contribution would be a drop in the bucket, and the bucket probably leaks. This book is an invitation to think again, again: to turn away from the feeling that the fight against poverty is too overwhelming, and to start to think of the challenge as a set of concrete problems that, once properly identified and understood, can be solved one at a time.

~~Poor Economics -by- Abhijit Banerjee and Esther Duflo

Sunday, August 23, 2015

Day 9

The liberal resurgence, which brought down so many tyrannies, was also an attack on the beliefs and values of the old democracies. The 1960s generation brought an end to the deference shown to democratic leaders and established institutions. Many found its irreverence shocking, but no matter. The job of artists, intellectuals and journalists became to satirise and expose; to be the transgressive and edgy critics of authority. They did not confine themselves to politics. Cultural constraints, backed by religious authority, collapsed under the pressure of the second wave of feminism, the sexual revolution and the movements for racial and homosexual emancipation. The revolution in private life was greater than the revolution in politics. Old fences that had seemed fixed by God or custom for eternity fell as surely as the Berlin Wall.

Struggling to encapsulate in a paragraph how the cultural revolution of the second half of the twentieth century had torn up family structures and prejudices, the British Marxist historian Eric Hobsbawm settled on an account from a baffled film critic of the plot of Pedro Almodóvar’s 1987 Law of Desire.

"In the film Carmen Maura plays a man who’s had a transsexual operation and, due to an unhappy love affair with his/her father, has given up on men to have a lesbian, I guess, relationship with a woman, who is played by a famous Madrid transvestite."

It was easy to mock. But laughter ought to have been stifled by the knowledge that within living memory transsexuals, transvestites, gays and lesbians had not been subjects that writers and directors could cover sympathetically, or on occasion at all. Their release from traditional morality reflected the release of wider society from sexual prejudice.

That release offended religious and social conservatives who thought a woman’s place was in the home, sexual licence a sin and homosexuality a crime against nature. Although the fashion for relativism was growing in Western universities in the 1980s, leftish academics did not say we had no right to offend the cultures of racists, misogynists and homophobes, and demand that we ‘respect’ their ‘equally valid’ contributions to a diverse society. Even they knew that reform is impossible without challenging established cultures. Challenge involves offence. Stop offending, and the world stands still.

~~You Can't Read This Book: Censorship in an Age of Freedom -by- Nick Cohen

Saturday, August 22, 2015

Day 8

In 49 B.C., nearly six years after that massacre, Gaul is completely conquered. It is time for Caesar to return home, where he will finally stand trial for his actions. He’s been ordered to dismiss his army before setting foot into Italy.

This is Roman law. All returning generals are required to disband their troops before crossing the boundary of their province, in this case the Rubicon River. This signals that they are returning home in peace rather than in the hopes of attempting a coup d’état. Failure to disband the troops is considered an act of war.

But Caesar prefers war. He decides to cross the Rubicon on his own terms. Julius Caesar is fifty years old and in the prime of his life.

He has spent the entire day of January 10 delaying this moment, because if he fails, he will not live to see the day six months hence when he will turn fifty-one. While his troops play dice, sharpen their weapons, and otherwise try to stay warm under a pale winter sun, Caesar takes a leisurely bath and drinks a glass of wine. These are the actions of a man who knows he may not enjoy such creature comforts for some time to come. They are also the behavior of a man delaying the inevitable.

But Caesar has good reason to hesitate. Pompey the Great, his former ally, brother-in-law, and builder of Rome’s largest theater, is waiting in Rome. The Senate has entrusted the future of the Republic to Pompey and ordered him to stop Caesar at all costs. Julius Caesar, in effect, is about to begin a civil war. This is as much about Caesar and Pompey as it is about Caesar and Rome. To the winner goes control of the Roman Republic. To the loser, a certain death.

Caesar surveys his troops. The men of Legio XIII stand in loose formation, awaiting his signal. Each carries almost seventy pounds of gear on his back, from bedroll to cooking pot to three days’ supply of grain. On this cold winter evening, they wear leather boots and leggings and cloaks over their shoulders to keep out the chill. They will travel on foot, wearing bronze helmets and chain mail shirts. All protect themselves with a curved shield made of wood, canvas, and leather, along with two javelins—one lightweight, the other heavier and deadlier. They are also armed with double-edged “Spanish swords,” which hang from scabbards on their thick leather belts, and the requisite pugiones. Some men are kitted out with slingshots, while others are designated as archers. Their faces are lined and weathered from years of sun and wind, and many bear the puckered scars from where an enemy spear plunged into their bodies or the long purple scar tissue from the slash of an enemy sword cutting into biceps or shoulder.

They are young, mostly between seventeen and twenty-three years of age, but there are some salt-and-pepper beards among them, for any male Roman citizen as old as forty-six can be conscripted into the legions. Young or old, they have endured the rugged physical training that makes the stamina of legionaries legendary. New recruits march for hours wearing a forty-five-pound pack, all the while maintaining complicated formations such as the wedge, hollow square, circle, and testudo, or “tortoise.” And all Roman legionaries must learn how to swim, just in case battle forces them to cross a river.
...
They live off the land, pooling their supplies of grain and any meat they can forage. They have built roads and bridges, delivered mail, collected taxes, served as police, endured the deep winters of Gaul, known the concussive sting of a slingshot-hurled rock bouncing off their helmets, and even played the role of executioner, driving nails through the hands and feet of escaped slaves and deserters from their own ranks who have been captured and condemned to crucifixion. The oldest among them can remember the uprising of 71 B.C., when seven thousand slaves, led by a rebel named Spartacus, revolted, were captured, and were crucified in a 240-mile line of crosses that stretched almost all the way from Naples to Rome.

It is Caesar to whom these men have sworn their allegiance. They admire how he leads by example, that he endures the same hardships and deprivations during a campaign that they do. He prefers to walk among the “comrades,” as he calls his troops, rather than ride a horse. Caesar is also well known throughout the ranks for his habit of rewarding loyalty and for his charisma. His men proudly boast of the many women he has had throughout Gaul, Spain, and Britain, and they even make fun of his thinning hair by singing songs about “our bald whoremonger.” Likewise, Caesar gives his legions free rein to chase women and gamble when they are off duty. “My men fight just as well when they are stinking of perfume,” he says.

~~Killing Jesus: A History -by- Bill O'Reilly and Martin Dugard

Friday, August 21, 2015

Day 7


One of the first victims of this creeping layer of gas was the cripple Rahul on his wheeled plank. Because of his robust constitution, he did not die right away but only after several minutes of agony. He coughed, choked and spewed up blackish clots. His muscles shook with spasms, his features contorted, he tore off his necklaces and his shirt, groaning and gasping for something to drink, then finally toppled from his board and dragged himself along the ground in a last effort to breathe. The man who had always been such a tireless source of moral support to the community, who had so frequently appeased the fears of his companions in misfortune, was dead.
...
In a matter of minutes the emergency rooms of Hamidia Hospital looked like a morgue. The two doctors on duty, Deepak Gandhe and Mohammed Sheikh, had thought they were going to have a quiet night after Sister Felicity’s visit. All at once the department was invaded. People were dropping like flies. Their bodies lay strewn about the wards, corridors, offices, verandas and the approaches to the building. The admissions nurse closed her register. How could she begin to record the names of so many people? The spasms and convulsions that racked most of the victims, the way they gasped for breath like fish out of water, reminded Dr. Gandhe of Mohammed Ashraf’s death two years earlier. The little information he could glean confirmed that the refugees came from areas close to the Carbide factory. So all of them had been poisoned by some toxic agent. But which one? While Sheikh and a nurse tried to revive the weakest with oxygen masks, Gandhe picked up the telephone. He wanted to speak to his colleague Loya, Carbide’s official doctor in Bhopal. He was the only one who would be able to suggest an effective antidote to the gas these dying people had inhaled. It was nearing two in the morning when he finally got hold of Loya. “That was the first time I heard the cruel name of methyl isocyanate,” Dr. Gandhe was to say later. But just as Mukund had been earlier, Dr. Loya turned out to be most reassuring.

“It’s not a deadly gas,” he claimed, “just irritating, a sort of tear gas.”

“You are joking! My hospital’s overrun with people dying like flies.” Gandhe was running out of patience.

“Breathing in a strong dose may eventually cause pulmonary edema,” Dr. Loya finally conceded.

“What antidote should we administer?” pressed Gandhe. “There is no known antidote for this gas,” replied the factory’s spokesperson, without any apparent embarrassment. “In any case, there’s no need for an antidote,” he added. “Get your patients to drink a lot and rinse their eyes with compresses steeped in water. Methyl isocyanate has the advantage of being soluble in water.”

Gandhe made an effort to stay calm. “Water? Is that all you suggest I use to save people coughing their lungs out!” he protested before hanging up.
...
This was only the beginning of his night of horror. Quite apart from haemorrhaging of the lungs and cataclysmic suffocation, he found himself confronted with symptoms that were unfamiliar to him: cyanosis of the fingers and toes, spasms in the esophagus and intestines, attacks of blindness, muscular convulsions, fevers and sweating so intense that victims wanted to tear off their clothes. Worst of all was the incalculable number of living dead making for the hospital as if it were a lifeboat in a shipwreck. This onslaught gave rise to particularly distressing scenes. Going out briefly into the street to assess the situation, Gandhe saw screaming youngsters clinging to their mothers’ burkahs, men who had gone mad tearing about in all directions, rolling on the ground, dragging themselves along on their hands and knees in the hope of getting to the hospital. He saw women abandon some of their children, those they could no longer carry, in order to save just one—a choice that would haunt them for the rest of their lives.

~~Five Past Midnight in Bhopal -by- Dominique Lapierre, Javier Moro

Thursday, August 20, 2015

Day 6

The “real” founder of the Mogul Empire was indeed Akbar (Padshah, i.e. emperor, from 1556 to 1605). Akbar put an end to the political chaos in north India by subduing the Afghans and the Rajputs. Further, he reorganized the administration. By the time of Akbar’s death in 1605, the Mogul Empire had established a stable administrative machinery in north and central India and was in the process of moving slowly into Deccan. Until the fourteenth century, the dominant mode of military recruitment in India was the mamluk system. The mamluks were slave soldiers of the Muslim world. However, by the end of the sixteenth century, due to Akbari reorganization, a sort of quasi-mercenary-cum-quasi-professional military employment known as the mansabdari system became dominant. The beginning of the seventeenth century witnessed the gradual expansion of Mogul power into Deccan under Akbar’s son and grandson, named Jahangir (r. 1605-1627) and Shah Jahan (r. 1628-1658) respectively. They continued to operate within the administrative fabric established by their illustrious predecessor. By the mid-seventeenth century, two contradictory processes were unfolding in the subcontinent. While the Mogul Empire under the dynamic leadership of Emperor Aurangzeb (1658-1707) was poised for expansion, simultaneously the administrative institutions established by Akbar were slowly becoming dysfunctional. This was partly because the Mogul economy was in the grip of what is known as the “agrarian crisis” and partly due to the new forms of warfare introduced by the Marathas and the Persians.
...
Military history is neglected in the South Asian academic field due to the dominance of Marxism and, more recently, post-modernism. We have a few books on the military history of medieval India. The earliest modern work on the Mogul army is by the British historian of colonial India, William Irvine. He argues that Indian “racial inferiority” resulted in continuous treachery, infighting, and backbiting, and that this racial/cultural trait prevented the Moguls from constructing a bureaucratic professional standing army capable of waging decisive battles and sieges. The latest work on the Mogul army by a Dutch historian, Jos Gommans, asserts that the Mogul army was not geared for decisive confrontations aimed at destroying the enemy. Rather, the Mogul grand strategy was to absorb potential enemies within the loose structure of the Mogul Empire. The Mogul army functioned as an instrument to frighten, coerce, and deter enemies.

The principal debate in the field is about weak states and flower/ritual warfare versus strong states, standing armies, and decisive battles. Most modern non-Indian scholars (Dirk Kolff, Gommans, Andre Wink, Douglas Streusand, Burton Stein, Lorne Adamson, Stephen Peter Rosen, etc.) argue that the Mogul state was a shadowy structure. The imperial fabric comprised innumerable semi-autonomous principalities held together by the personality of the emperor and the pomp and splendour of the Mogul durbar (court). The emperor did not enjoy a monopoly of violence in the public sphere. The Moguls lacked a drilled and disciplined standing army for crushing opponents on the battlefields. Treachery, diplomacy, bribery, and a show of force resulted in the absorption and assimilation of enemies. What Irvine has categorized as Indian racial inferiority had been transformed as the unique culture of the “Orientals” in the paradigm of these modern scholars.

In contrast, John F. Richards and many of the Indian Muslim historians who are influenced by Marxism and belong to a group which can be labelled the Aligarh School, assert that the Mogul Empire was a centralized agrarian bureaucratic polity. The Aligarh School turns the limelight on the agrarian economy; focusing on the revenue documents, they argue that the Moguls’ ability to claim about 50 per cent of the gross produce from the land proves that they had a strong presence at the regional/local level. The sucking of economic surplus from the countryside was aided by the military supremacy of the Moguls, exemplified by the use of cavalry and gunpowder weapons. However, M. Athar Ali notes that, unlike the Tudor state, the Mogul state lacked the capability and the intention to legislate. Probably the nature of the Mogul state and Mogul warfare lies somewhere in between the two extreme viewpoints discussed above.

~~ [ESSAY] From the mamluks to the mansabdars: A social history of military service in South Asia, c. 1500 to c. 1650 -by- Kaushik Roy (from the book "Fighting for a Living: A Comparative History of Military Labour 1500-2000")

Wednesday, August 19, 2015

Day 5

For the first eight years of Walker’s life, every night is the same. The same routine of tiny details, connected in precise order, each mundane, each crucial. The routine makes the eight years seem long, almost endless, until I try to think about them afterwards, and then eight years evaporate to nothing, because nothing has changed.

Tonight I wake up in the dark to a steady, motorized noise. Something wrong with the water heater. Nnngah. Pause. Nnngah. Nnngah. But it’s not the water heater. It’s my boy, Walker, grunting as he punches himself in the head, again and again.

He has done this since before he was two. He was born with an impossibly rare genetic mutation, cardiofaciocutaneous syndrome, a technical name for a mash of symptoms. He is globally delayed and can’t speak, so I never know what’s wrong. No one does. There are just over a hundred people with CFC around the world. The disorder turns up randomly, a misfire that has no certain cause or roots; doctors call it an orphan syndrome because it seems to come from nowhere.

I count the grunts as I pad my way into his room: one a second. To get him to stop hitting himself, I have to lure him back to sleep, which means taking him downstairs and making him a bottle and bringing him back into bed with me.

That sounds simple enough, doesn’t it? But with Walker, everything is complicated. Because of his syndrome, he can’t eat solid food by mouth, or swallow easily. Because he can’t eat, he takes in formula through the night via a feeding system. The formula runs along a line from a feedbag and a pump on a metal IV stand, through a hole in Walker’s sleeper and into a clever-looking permanent valve in his belly, sometimes known as a G-tube, or mickey. To take him out of bed and down to the kitchen to prepare the bottle that will ease him back to sleep, I have to disconnect the line from the mickey. To do this, I first have to turn off the pump (in the dark, so he doesn’t wake up completely) and close the feed line. If I don’t clamp the line, the sticky formula pours out onto the bed or the floor (the carpet in Walker’s room is pale blue: there are patches that feel like the Gobi Desert under my feet, from all the times I have forgotten). To crimp the tube, I thumb a tiny red plastic roller down a slide. (It’s my favourite part of the routine—one thing, at least, is easy, under my control.) I unzip his one-piece sleeper (Walker’s small, and grows so slowly he wears the same sleepers for a year and a half at a time), reach inside to unlock the line from the mickey, pull the line out through the hole in his sleeper and hang it on the IV rack that holds the pump and feedbag. Close the mickey, rezip the sleeper. Then I reach in and lift all 45 pounds of Walker from the depths of the crib. He still sleeps in a crib. It’s the only way we can keep him in bed at night. He can do a lot of damage on his own.

This isn’t a list of complaints. There’s no point to complaining. As the mother of another CFC child once told me, “You do what you have to do.” If anything, that’s the easy part. The hard part is trying to answer the questions Walker raises in my mind every time I pick him up. What is the value of a life like his—a life lived in the twilight, and often in pain? What is the cost of his life to those around him? “We spend a million dollars to save them,” a doctor said to me not long ago. “But then when they’re discharged, we ignore them.” We were sitting in her office, and she was crying. When I asked her why, she said, “Because I see it all the time.”

Sometimes watching Walker is like looking at the moon: you see the face of the man in the moon, yet you know there’s actually no man there. But if Walker is so insubstantial, why does he feel so important? What is he trying to show me? All I really want to know is what goes on inside his off-shaped head, in his jumped-up heart. But every time I ask, he somehow persuades me to look into my own.

~~The Boy In the Moon -by- Ian Brown

Tuesday, August 18, 2015

Day 4

I came across the power of “cunt” quite accidentally. After writing an article for a newspaper, I typed in “word count,” but left out the “o.” My editor laughingly pointed out the mistake. I looked at the two words together and decided “Word Cunt” seemed like a nice title for a woman writer. As a kind of intraoffice byline, I started typing “Word Cunt” instead of “word count” on all my articles.

The handful of people who saw hard copies of my work reacted strongly and asked why I chose to put these two words on my articles. After explaining my reasoning to editorial assistants, production magis, proofreaders and receptionists, I started wondering about the actual, decontextualized power of “cunt.” I looked up “cunt” in Barbara G. Walker’s twenty-five-year research opus, The Women’s Encyclopedia of Myths and Secrets, and found it was indeed a title, back in the day. “Cunt” is related to words from India, China, Ireland, Rome and Egypt. Such words were either titles of respect for women, priestesses and witches, or derivatives of the names of various goddesses:

In ancient writings, the word for “cunt” was synonymous with “woman,” though not in the insulting modern sense.
...
According to every woman-centered historical reference I have read—from M. Esther Harding to bell hooks—the containment of woman’s sexuality was a huge priority to emerging patrifocal religious and economic systems. Cunts were anathema to forefather types. Literally and metaphorically, the word and anatomical jewel presided at the very nexus of many earlier religions which impeded phallic power worship. In Western civilization, forefather types practiced savior-centered religions, such as Catholicism. Springing forth from a very real, very fiscal fear of women and our power, eventually evolving into sexual retardation and womb envy, a philosophy and social system based on destruction was culled to thriving life. One of the more well-documented instances of this destruction-oriented consciousness is something called the Inquisition. It lasted for over five hundred years. That is how long it took the Inquisition to rend serious damage to the collective spirit of non-savior-centered religious worshippers.

The Inquisition justified the—usually sadistic—murder, enslavement or rape of every woman, child and man who practiced any form of spiritual belief which did not honor savior-centered phallic power worship. Since the beginning of time, most cultures honored forces which were tangible, such as the moon, earth, sun, water, birth, death and life. A spirituality which was undetectable to any of the human senses was considered incomprehensible. One imagines victims of the Inquisition were not hard to come by. Women who owned anything more than the clothes on their backs and a few pots to piss in were religiously targeted by the Inquisition because all of women’s resources and possessions became property of the famously cuntfearing Catholic Church. Out of this, the practice of sending “missionaries” into societies bereft of savior-centered spiritualities evolved.

Negative reactions to “cunt” resonate from a learned fear of ancient yet contemporary, inherent yet lost, reviled yet redemptive cuntpower.

~~Cunt: A Declaration of Independence -by- Inga Muscio

Monday, August 17, 2015

Day 3


I originally intended to title this book The Road to Paradise, but eventually changed it to Tombstone. I had four reasons for choosing this title: the first is to erect a tombstone for my father, who died of starvation in 1959; the second is to erect a tombstone for the thirty-six million Chinese who died of starvation; and the third is to erect a tombstone for the system that brought about the Great Famine.
...
At the end of April 1959, I was spending my after-school hours assembling a May Fourth Youth Day wall newspaper for my school’s Communist Youth League. My childhood friend Zhang Zhibai suddenly arrived from our home village of Wanli and told me, “Your father is starving to death! Hurry back, and take some rice if you can.” He said, “Your father doesn’t even have the strength to strip bark from the trees—he’s starved beyond helping himself. He was headed to Jiangjiayan to buy salt to make saltwater, but he collapsed on the way, and some people from Wanli carried him home.”

I dropped what I was doing and requested leave from our league secretary and head teacher. Then I collected a three-day meal ration of 1.5 kilos of rice from the school canteen and rushed home. Upon reaching Wanli, I found things radically changed. The elm tree in front of our house had been reduced to a barkless trunk, and even its roots had been dug up and stripped, leaving only a ragged hole in the earth. The pond was dry; neighbors said it had been drained to dredge for rank-tasting mollusks that had never been eaten in the past. There was no sound of dogs barking, no chickens running about; even the children who used to scamper through the lanes remained at home. Wanli was like a ghost town.

Upon entering our home, I found utter destitution; there was not a grain of rice, nothing edible whatsoever, and not even water in the vat. Immobilized by starvation, how would my father have had the strength to fetch water?

My father was half-reclined on his bed, his eyes sunken and lifeless, his face gaunt, the skin creased and flaccid. He tried to extend his hand to greet me, but couldn’t lift it, just moving it a little. That hand reminded me of the human skeleton in my anatomy class; although it was covered with a layer of withered skin, nothing concealed the protrusions and hollows of the bone structure. I was shocked with the realization that the term skin and bones referred to something so horrible and cruel.
...
Three days later he departed this world.

With the help of other villagers, I hastily buried him. While he was still alive I had hardly taken notice of him, but now that he lay at rest in the earth, instances from the past vividly replayed themselves in my mind.

My father’s name was Yang Xiushen; he was also known by the names Yufu and Hongyuan. He was born on the sixth day of the sixth month of the lunar calendar in the year 1889. He was in fact my uncle and foster father, raising me from the time I was three months old. He and my foster mother had treated me better than if I had been their own son, and the extraordinary love they showed me was known throughout our home village. I later learned from fellow villagers that even in the worst weather my father would carry me through every lane and path seeking milk, so that I had wet nurses scattered throughout the area. One time I became ill and fell into a coma, and my father knelt and prayed unceasingly before the ancestral shrine until I regained consciousness. Once, I developed an abscess on my head, and my mother sucked the pus out of it with her own lips until it was cured. Although they were extremely poor, they used every means to ensure that I obtained schooling. They had extremely strict expectations regarding my conduct.

~~Tombstone: The Great Chinese Famine 1958-1962 -by- Yang Jisheng

Sunday, August 16, 2015

Day 2

Often, however, someone has an inherent or acquired trait that is foreign to his or her parents and must therefore acquire identity from a peer group. This is a horizontal identity. Such horizontal identities may reflect recessive genes, random mutations, prenatal influences, or values and preferences that a child does not share with his progenitors. Being gay is a horizontal identity; most gay kids are born to straight parents, and while their sexuality is not determined by their peers, they learn gay identity by observing and participating in a subculture outside the family. Physical disability tends to be horizontal, as does genius. Psychopathy, too, is often horizontal; most criminals are not raised by mobsters and must invent their own treachery. So are conditions such as autism and intellectual disability. A child conceived in rape is born into emotional challenges that his own mother cannot know, even though they spring from her trauma.
...
In 1993, I was assigned to investigate Deaf culture for the New York Times. My assumption about deafness was that it was a deficit and nothing more. Over the months that followed, I found myself drawn into the Deaf world. Most deaf children are born to hearing parents, and those parents frequently prioritize functioning in the hearing world, expending enormous energy on oral speech and lipreading. Doing so, they can neglect other areas of their children’s education. While some deaf people are good at lipreading and produce comprehensible speech, many do not have that skill, and years go by as they sit endlessly with audiologists and speech pathologists instead of learning history and mathematics and philosophy. Many stumble upon Deaf identity in adolescence, and it comes as a great liberation. They move into a world that validates Sign as a language and discover themselves. Some hearing parents accept this powerful new development; others struggle against it.

The whole situation felt arrestingly familiar to me because I am gay. Gay people usually grow up under the purview of straight parents who feel that their children would be better off straight and sometimes torment them by pressing them to conform. Those gay people often discover gay identity in adolescence or afterward, finding great relief there. When I started writing about the deaf, the cochlear implant, which can provide some facsimile of hearing, was a recent innovation. It had been hailed by its progenitors as a miraculous cure for a terrible defect and was deplored by the Deaf community as a genocidal attack on a vibrant community. Both sides have since moderated their rhetoric, but the issue is complicated by the fact that cochlear implants are most effective when they are surgically implanted early—in infants, ideally—so the decision is often made by parents before the child can possibly have or express an informed opinion. Watching the debate, I knew that my own parents would gamely have consented to a parallel early procedure to ensure that I would be straight, had one existed. I do not doubt that the advent of such a thing even now could wipe out most of gay culture. I am saddened by the idea of such a threat, and yet as my understanding of Deaf culture deepened, I realized that the attitudes I had found benighted in my parents resembled my own likely response to producing a deaf child. My first impulse would have been to do whatever I could to fix the abnormality.
...
An extreme version of the social model of disability is summarized by the British academic Michael Oliver: “Disability has nothing to do with the body, it is a consequence of social oppression.” This is untrue, even specious, but it contains a valid challenge to revise the prevalent opposite assumption that disability resides entirely in the mind or body of the disabled person. Ability is a tyranny of the majority. If most people could flap their arms and fly, the inability to do so would be a disability.

~~Far From The Tree -by- Andrew Solomon

Saturday, August 15, 2015

Day 1

When war broke out in Europe in September 1939 the British political will to hold on to its Indian empire was as strong as it ever had been, despite the qualitative changes in the economic relations between the metropolis and the colony. The forces of Indian nationalism were more radicalized but were also more divided than they had been in the past. The Congress leadership, having just fended off a left-wing challenge, asked the British to define their war aims before they agreed to any support for the British cause. Congress leaders had been deeply offended and embarrassed by Viceroy Linlithgow’s decision to declare India a belligerent in the war against Germany without bothering to consult the Congress high command or the provincial ministries. Once it became clear that the British were not of a mind to make any immediate concessions to Indian nationalist aspirations, Congress had little choice but to resign from holding office in the provinces as a mark of protest.

From the Indian nationalist point of view the world war was a conflict between old and new imperialist powers. That Britain was fighting for freedom and democracy was simply not credible to its colonial subjects unless they too were given a taste of these values. In 1940, Gandhi, not yet prepared to signal the beginning of a mass movement, called upon his followers to offer individual satyagraha. So satyagrahis made anti-war speeches and courted arrest in large numbers. While non-violent protestors were herded into detention camps, the British moved decisively to imprison radical leaders and workers, including Subhas Chandra Bose and his followers, in 1940. Japan’s entry into the war in December 1941 and its military sweep across South East Asia in early 1942 provided the occasion for one futile round of negotiations but ultimately served to strengthen Britain’s resolve to use the coercive powers of the colonial state to the fullest extent when necessary to keep nationalists at bay.

Political denial was matched by economic interventions on an unprecedented scale. Indian resources were marshalled to finance Britain’s war effort as never before. While the Depression decade had seen a steep decline in prices, the war economy came to be characterized by galloping inflation. The inflationary pressure emanated largely from the massive expansion in public expenditure. Between 1939 and 1945 nearly Rs 3.5 billion were spent on defence purposes in India. While Indian revenues were to be used for the defence of the colony, the metropolitan government agreed, in a major departure of policy, to foot the bill for the use of Indian forces in the defence of the empire. But the treasury in London was short of cash. So, in a typical example of British financial jugglery, a mechanism was devised by which India would pay here and now and be reimbursed after the end of the war. Part of the total war expenditure would be recoverable as sterling credits for India accumulated in the Bank of England. For now, the government of India would finance the war by making the mints work harder. The money supply in India rose from about Rs 3 billion in 1939 to Rs 22 billion in 1945. Since imports had dropped drastically due to the dislocations of war and government purchases of war-related material diverted some goods from Indian consumption, serious shortages developed and prices soared for essential commodities like cloth, kerosene oil and, most important of all, food.

Wartime exigencies and the experience of the Bengal famine, however, brought about a reversal of the debt relationship between metropolis and colony, as well as of the nature of the links between the colonial state and the economy. Throughout the colonial era India had owed a debt to Britain, but at the end of the war it was Britain which owed a large debt of £1.3 billion to the colonial government of India. In order to provision troops and key urban classes, the colonial state had intruded into the food market, procuring grains from the countryside and selling them through ration shops in the towns and cities. Social groups, such as the rich farmers of the Punjab, who might have been expected to make large profits from rising grain prices, were prevented from doing so by the colonial state’s procurement and price-control policies. The poor in one region of India, Bengal, perished on account of the state’s lack of action; the better off in another region, Punjab, complained bitterly about the state’s heavy-handed interventions which they deemed to be detrimental to their own interests.

~~MODERN SOUTH ASIA: History, Culture, Political Economy -by- Sugata Bose and Ayesha Jalal

Friday, August 14, 2015

Day 0

When Independence came, it came with an orgy of blood and tears. We welcomed it with mixed feelings of joy and sorrow. Our long cherished dream of liberty had bloomed into reality— a thousand years of foreign rule had come to an end— the destiny of crores of human skeletons would be guided to the fulness of their lives by their own trusted leaders— the very idea knocked open the floodgates of imagination. At the same time it was a great wrench to have to leave behind our hearth and home, the scenes and surroundings of our childhood and our near and dear ones. My field of work was at Karimganj but the ancestral home and property fell to that part of Sylhet which went to the share of Pakistan. I knew the trees and bushes of my village. They were bound up with a tie of intimacy to me. The rivulet murmuring through the village, the paddy fields undulating with changing colours every month, the cremation ground shadowed by a banian tree along the river bank where my father and forefathers lie in eternal rest, the rail line running through the village and attracting the boys with every passing train— and above all the simple affectionate yet un-affected people of the village— all cast a romantic spell on me. I shall possibly have no chance to see them again or to be in the midst of the wonted haunts of my childhood and youth. They have possibly now changed beyond recognition. The village Kalibari with its quiet surroundings, the mystic jungles to its east where I once dreamt of organising the young friends of the village on the pattern of the ‘Santan’ group of Bankim’s ‘Anandamath’— they are all out of my sight though I am living just sixteen miles away. No one could foresee the extent of separation which partition would bring in its train.
...
But we are cut off from all kinds of communication with the other side of this river which is fordable in many places during the winter. Some of my best friends during my school days were Muslims. This friendship recognised no barrier of religion even during the crucial period of political feud. It stood the test of time even after partition, but the Governments are making re-union with friends and relatives impossible. As the position is today, no new friendship or relationship is possible to be picked up with people on the other side of the border. It appears that deliberate attempts are being made to tear off the cultural and linguistic ties between the two wings of Bengal.
...
We are hoping for a bumper crop. The chaos created by hunger, unemployment and under-employment may soon be a thing of the past. Undoubtedly, the cloud has a silver lining. Will not a leader of all-India stature arise again to guide the destiny of India? May we take the symptoms as the travail of a new birth?
~~From the Corridors of Memory -by- Rabindra Nath Aditya (written in 1969)