Thursday, June 30, 2016

Day 320: Why Zebras Don't Get Ulcers



It’s two o’clock in the morning and you’re lying in bed. You have something immensely important and challenging to do that next day—a critical meeting, a presentation, an exam. You have to get a decent night’s rest, but you’re still wide awake. You try different strategies for relaxing—take deep, slow breaths, try to imagine restful mountain scenery—but instead you keep thinking that unless you fall asleep in the next minute, your career is finished. Thus you lie there, more tense by the second.

If you do this on a regular basis, somewhere around two-thirty, when you’re really getting clammy, an entirely new, disruptive chain of thought will no doubt intrude. Suddenly, amid all your other worries, you begin to contemplate that nonspecific pain you’ve been having in your side, that sense of exhaustion lately, that frequent headache. The realization hits you—I’m sick, fatally sick! Oh, why didn’t I recognize the symptoms, why did I have to deny it, why didn’t I go to the doctor?

When it’s two-thirty on those mornings, I always have a brain tumor. These are very useful for that sort of terror, because you can attribute every conceivable nonspecific symptom to a brain tumor and justify your panic. Perhaps you do, too; or maybe you lie there thinking that you have cancer, or an ulcer, or that you’ve just had a stroke.

Even though I don’t know you, I feel confident in predicting that you don’t lie there thinking, “I just know it; I have leprosy.” True? You are exceedingly unlikely to obsess about getting a serious case of dysentery if it starts pouring. And few of us lie there feeling convinced that our bodies are teeming with intestinal parasites or liver flukes.

Of course not. Our nights are not filled with worries about scarlet fever, malaria, or bubonic plague. Cholera doesn’t run rampant through our communities; river blindness, black water fever, and elephantiasis are third world exotica. Few female readers will die in childbirth, and even fewer of those reading this page are likely to be malnourished.

Thanks to revolutionary advances in medicine and public health, our patterns of disease have changed, and we are no longer kept awake at night worrying about infectious diseases (except, of course, AIDS or tuberculosis) or the diseases of poor nutrition or hygiene. As a measure of this, consider the leading causes of death in the United States in 1900: pneumonia, tuberculosis, and influenza (and, if you were young, female, and inclined toward risk taking, childbirth). When is the last time you heard of scads of people dying of the flu? Yet the flu, in 1918 alone, killed many times more people than throughout the course of that most barbaric of conflicts, World War I.

Our current patterns of disease would be unrecognizable to our great-grandparents or, for that matter, to most mammals. Put succinctly, we get different diseases and are likely to die in different ways from most of our ancestors (or from most humans currently living in the less privileged areas of this planet). Our nights are filled with worries about a different class of diseases; we are now living well enough and long enough to slowly fall apart.

The diseases that plague us now are ones of slow accumulation of damage—heart disease, cancer, cerebrovascular disorders. While none of these diseases is particularly pleasant, they certainly mark a big improvement over succumbing at age twenty after a week of sepsis or dengue fever. Along with this relatively recent shift in the patterns of disease have come changes in the way we perceive the disease process. We have come to recognize the vastly complex intertwining of our biology and our emotions, the endless ways in which our personalities, feelings, and thoughts both reflect and influence the events in our bodies. One of the most interesting manifestations of this recognition is understanding that extreme emotional disturbances can adversely affect us. Put in the parlance with which we have grown familiar, stress can make us sick, and a critical shift in medicine has been the recognition that many of the damaging diseases of slow accumulation can be either caused or made far worse by stress.

In some respects this is nothing new. Centuries ago, sensitive clinicians intuitively recognized the role of individual differences in vulnerability to disease. Two individuals could get the same disease, yet the courses of their illness could be quite different and in vague, subjective ways might reflect the personal characteristics of the individuals. Or a clinician might have sensed that certain types of people were more likely to contract certain types of disease. But since the twentieth century, the addition of rigorous science to these vague clinical perceptions has made stress physiology—the study of how the body responds to stressful events—a real discipline. As a result, there is now an extraordinary amount of physiological, biochemical, and molecular information available as to how all sorts of intangibles in our lives can affect very real bodily events. These intangibles can include emotional turmoil, psychological characteristics, our position in society, and how our society treats people of that position. And they can influence medical issues such as whether cholesterol gums up our blood vessels or is safely cleared from the circulation, whether our fat cells stop listening to insulin and plunge us into diabetes, whether neurons in our brain will survive five minutes without oxygen during a cardiac arrest.

This book is a primer about stress, stress-related disease, and the mechanisms of coping with stress. How is it that our bodies can adapt to some stressful emergencies, while other ones make us sick? Why are some of us especially vulnerable to stress-related diseases, and what does that have to do with our personalities? How can purely psychological turmoil make us sick? What might stress have to do with our vulnerability to depression, the speed at which we age, or how well our memories work? What do our patterns of stress-related diseases have to do with where we stand on the rungs of society’s ladder? Finally, how can we increase the effectiveness with which we cope with the stressful world that surrounds us?

~~Why Zebras Don't Get Ulcers -by- Robert M. Sapolsky

Wednesday, June 29, 2016

Day 319: The Kabul Beauty School



The women arrive at the salon just before eight in the morning. If it were any other day, I’d still be in bed, trying to sink into a few more minutes of sleep. I’d probably still be cursing the neighbor’s rooster for waking me up again at dawn. I might even still be groaning about the vegetable dealers who come down the street at three in the morning with their noisy, horse-drawn wagons, or the neighborhood mullah, who warbles out his long, mournful call to prayer at four-thirty. But this is the day of Roshanna’s engagement party, so I’m dressed and ready for work. I’ve already had four cigarettes and two cups of instant coffee, which I had to make by myself because the cook has not yet arrived. This is more of a trial than you might think, since I’ve barely learned how to boil water in Afghanistan. When I have to do it myself, I put a lit wooden match on each of the burners of the cranky old gas stove, turn one of the knobs, and back off to see which of the burners explodes into flame. Then I settle a pot of water there and pray that whatever bacteria are floating in the Kabul water today are killed by the boiling.

The mother-in-law comes into the salon first, and we exchange the traditional Afghan greeting: we clasp hands and kiss each other’s cheeks three times. Roshanna is behind her, a tiny, awkward, blue ghost wearing the traditional burqa that covers her, head to toe, with only a small piece of netting for her to see out the front. But the netting has been pulled crooked, across her nose, and she bumps into the doorway. She laughs and flutters her arms inside the billowing fabric, and two of her sisters-in-law help her navigate her way through the door. Once inside, Roshanna snatches the burqa off and drapes it over the top of one of the hair dryers.

“This was like Taliban days again,” she cries, because she hasn’t worn the burqa since the Taliban were driven out of Kabul in the fall of 2001. Roshanna usually wears clothes that she sews herself—brilliant shalwar kameezes or saris in shades of orchid and peach, lime green and peacock blue. Roshanna usually stands out like a butterfly against the gray dustiness of Kabul and even against the other women on the streets, in their mostly drab, dark clothing. But today she observes the traditional behavior of a bride on the day of her engagement party or wedding. She has left her parents’ house under cover of burqa and will emerge six hours later wearing her body weight in eye shadow, false eyelashes the size of sparrows, monumentally big hair, and clothes with more bling than a Ferris wheel. In America, most people would associate this look with drag queens sashaying off to a party with a 1950s prom theme. Here in Afghanistan, for reasons I still don’t understand, this look conveys the mystique of the virgin.

The cook arrives just behind the women, whispering that she’ll make the tea, and Topekai, Baseera, and Bahar, the other beauticians, rush into the salon and take off their head scarves. Then we begin the joyful, gossipy, daylong ordeal of transforming twenty-year-old Roshanna into a traditional Afghan bride. Most salons would charge up to $250—about half the annual income for a typical Afghan—for the bride’s services alone. But I am not only Roshanna’s former teacher but also her best friend, even though I’m more than twenty years older. She is my first and best friend in Afghanistan. I love her dearly, so the salon services are just one of my gifts to her.

We begin with the parts of Roshanna that no one will see tonight except her husband. Traditional Afghans consider body hair to be both ugly and unclean, so she must be stripped of all of it except for the long, silky brown hair on her head and her eyebrows. There can be no hair left on her arms, underarms, face, or privates. Her body must be as soft and hairless as that of a prepubescent girl. We lead Roshanna down the corridor to the waxing room—the only one in Afghanistan, I might add—and she grimaces as she sits down on the bed.

“You could have done it yourself at home,” I tease her, and the others laugh. Many brides are either too modest or too fearful to have their pubic hair removed by others in a salon, so they do it at home—they either pull it out by hand or rip it out with chewing gum. Either way, the process is brutally painful. Besides, it’s hard to achieve the full Brazilian—every pubic hair plucked, front and back—when you do it on your own, even if you’re one of the few women in this country to own a large mirror, as Roshanna does.

“At least you know your husband is somewhere doing this, too,” Topekai says with a leer. My girls giggle at this reference to the groom’s attention to his own naked body today. He also must remove all of his body hair.

“But he only has to shave it off!” Roshanna wails, then blushes and looks down. I know she doesn’t want to appear critical of her new husband, whom she hasn’t yet met, in front of her mother-in-law. She doesn’t want to give the older woman any reason to find fault with her, and when Roshanna looks back up again, she smiles at me anxiously.

But the mother-in-law seems not to have heard her. She has been whispering outside the door with one of her daughters. When she turns her attention back to the waxing room, she looks at Roshanna with a proud, proprietary air.

The mother-in-law had picked Roshanna out for her son a little more than a year after Roshanna graduated from the first class at the Kabul Beauty School, in the fall of 2003, and opened her own salon. The woman was a distant cousin who came in for a perm. She admired this pretty, plucky, resourceful girl who had been supporting her parents and the rest of her family ever since they fled into Pakistan to escape the Taliban. After she left Roshanna’s salon, she started asking around for further details about the girl. She liked what she heard.

Roshanna’s father had been a doctor, and the family had led a privileged life until they fled to Pakistan in 1998. There, he was not allowed to practice medicine—a typical refugee story—and had to work as a lowly shoeshine man. By the time they returned to Kabul, he was in such ill health that he couldn’t practice medicine. Still, he staunchly carried out his fatherly duties by accompanying Roshanna everywhere to watch over her. The mother-in-law had detected no whiff of scandal about Roshanna, except perhaps her friendship with me. Even that didn’t put her off, since foreign women are not held to the same rigorous standards as Afghan women. We are like another gender entirely, able to wander back and forth between the two otherwise separate worlds of men and women; when we do something outrageous, like reach out to shake a man’s hand, it’s usually a forgivable and expected outrage. The mother-in-law may even have regarded me as an asset, a connection to the wealth and power of America, as nearly all Afghans assume Americans are rich. And we are, all of us, at least in a material sense. Anyway, the mother-in-law was determined to secure Roshanna as the first wife for her elder son, an engineer living in Amsterdam. There was nothing unusual about this. Nearly all first marriages in Afghanistan are arranged, and it usually falls to the man’s mother to select the right girl for him. He may take on a second or even third wife later on, but that first virginal lamb is almost as much his mother’s as his.

I see that Roshanna is faltering under her mother-in-law’s gaze, and I pull all the other women away from the waxing room. “How about highlights today?” I ask the mother-in-law. “My girls do foiling better than anyone between here and New York City.”

“Better than in Dubai?” the mother-in-law asks.

“Better than in Dubai,” I say. “And a lot cheaper.”

Back in the main room of the salon, I make sure the curtains are pulled tight so that no passing male can peek in to see the women bareheaded. That’s the kind of thing that could get my salon and the Kabul Beauty School itself closed down. I light candles so that we can turn the overhead lights off. With all the power needed for the machine that melts the wax, the facial lamps, the blow dryers, and the other salon appliances, I don’t want to blow a fuse. I put on a CD of Christmas carols. It’s the only one I can find, and they won’t know the difference anyway. Then I settle the mother-in-law and the members of the bridal party into their respective places, one for a manicure, one for a pedicure, one to get her hair washed. I make sure they all have tea and the latest outdated fashion magazines from the States, then excuse myself with a cigarette. I usually just go ahead and smoke in the salon, but the look on Roshanna’s face just before I shut the door to the waxing room has my heart racing. Because she has a terrible secret, and I’m the only one who knows it—for now.

~~The Kabul Beauty School : An American Woman Goes Behind the Veil -by- Deborah Rodriguez

Tuesday, June 28, 2016

Day 318: How Doctors Think



On a sweltering morning in June 1976, I put on a starched white coat, placed a stethoscope in my black bag, and checked for the third time in the mirror that my tie was correctly knotted. Despite the heat, I walked briskly along Cambridge Street to the entrance of the Massachusetts General Hospital. This was the long-awaited moment, my first day of internship—the end of play-acting as a doctor, the start of being a real one. My medical school classmates and I had spent the first two years in lecture halls and in laboratories, learning anatomy, physiology, pharmacology, and pathology from textbooks and manuals, using microscopes and petri dishes to perform experiments. The following two years, we learned at the bedside. We were taught how to organize a patient's history: his chief complaint, associated symptoms, past medical history, relevant social data, past and current therapies. Then we were instructed in how to examine people: listening for normal and abnormal heart sounds; palpating the liver and spleen; checking pulses in the neck, arms, and legs; observing the contour of the nerve and splay of the vessels in the retina. At each step we were closely supervised, our hands firmly held by our mentors, the attending physicians.

Throughout those four years of medical school, I was an intense, driven student, gripped by the belief that I had to learn every fact and detail so that I might one day take responsibility for a patient's life. I sat in the front row in the lecture hall and hardly moved my head, nearly catatonic with concentration. During my clinical courses in internal medicine, surgery, pediatrics, obstetrics and gynecology, I assumed a similarly focused posture. Determined to retain everything, I scribbled copious notes during lectures and after bedside rounds. Each night, I copied those notes onto index cards that I arranged on my desk according to subject. On weekends, I would try to memorize them. My goal was to store an encyclopedia in my mind, so that when I met a patient, I could open the mental book and find the correct diagnosis and treatment.

The new interns gathered in a conference room in the Bulfinch Building of the hospital. The Bulfinch is an elegant gray granite structure with eight Ionic columns and floor-to-ceiling windows, dating from 1823. In this building is the famed Ether Dome, the amphitheater where the anesthetic ether was first demonstrated in 1846. In 1976, the Bulfinch Building still housed open wards with nearly two dozen patients in a single cavernous room, each bed separated by a flimsy curtain.

We were greeted by the chairman of medicine, Alexander Leaf. His remarks were brief—he told us that as interns we had the privilege to both learn and serve. Though he spoke in a near whisper, what we heard was loud and clear: the internship program at the MGH was highly selective, and great things were expected of us during our careers in medicine. Then the chief resident handed out each intern's schedule.

There were three clinical services, Bulfinch, Baker, and Phillips, and over the ensuing twelve months we would rotate through all of them. Each clinical service was located in a separate building, and together the three buildings mirrored the class structure of America. The open wards in Bulfinch served people who had no private physician, mainly indigent Italians from the North End and Irish from Charlestown and Chelsea. Interns and residents took a fierce pride in caring for those on the Bulfinch wards, who were "their own" patients. The Baker Building housed the "semi-private" patients, two or three to a room, working- and middle-class people with insurance. The "private" service was in the Phillips House, a handsome edifice rising some eleven stories with views of the Charles River; each room was either a single or a suite, and the suites were rumored to have accommodated valets and maids in times past. The very wealthy were admitted to the Phillips House by a select group of personal physicians, many of whom had offices at the foot of Beacon Hill and were themselves Boston Brahmins.

I began on the Baker service. Our team was composed of two interns and one resident. After the meeting with Dr. Leaf, the three of us immediately went to the floor and settled in with a stack of patient charts. The resident divided our charges into three groups, assigning the sickest to himself.

Each of us was on call every third night, and my turn began that first evening. We would be on call alone, responsible for all of the patients on the floor as well as any new admissions. At seven the following morning, we would meet and review what had happened overnight. "Remember, be an ironman and hold the fort," the resident said to me, the clichés offered only half jokingly. Interns were to ask for backup only in the most dire circumstances. "You can page me if you really need me," the resident added, "but I'll be home sleeping, since I was on call last night."

I touched my left jacket pocket and felt a pack of my index cards from medical school. The cards, I told myself, would provide the ballast to keep me afloat alone. I spent the better part of the day reading my patients' charts and then introducing myself to them. The knot in my stomach gradually loosened. But it tightened again when my fellow intern and supervising resident signed out their patients, alerting me to problems I might encounter on call.

A crepuscular quiet settled over the Baker. There were still a few patients I had not met. I went to room 632, checked the name on the door against my list, and knocked. A voice said, "Enter."

"Good evening, Mr. Morgan. I am Doctor Groopman, your new intern." The appellation "Doctor Groopman" still sounded strange to me, but it was imprinted on the nameplate pinned to my jacket.

William Morgan was described in his chart as "a 66-year-old African-American man" with hypertension that was difficult to control with medications. He had been admitted to the hospital two days earlier with chest pains. I called up from my mental encyclopedia the fact that African Americans have a high incidence of hypertension, which could be complicated by cardiac enlargement and kidney failure. His initial ER evaluation and subsequent blood tests and electrocardiogram did not point to angina, pain from coronary artery blockage. Mr. Morgan shook my hand firmly and grinned. "First day, huh?"

I nodded. "I saw in your chart that you're a letter carrier," I said. "My grandfather worked in the post office too."

"Carrier?"

"No, he sorted mail and sold stamps."

William Morgan told me that he had started out that way, but was a "restless type" and felt better working outside than inside, even in the worst weather.

"I know what you mean," I said, thinking that right now I too would rather be outside than inside—alone, in charge of a floor of sick people. I updated Mr. Morgan on the x-ray tests done earlier in the day. A GI series showed no abnormality in his esophagus or stomach.

"That's good to hear."

I was about to say goodbye when Mr. Morgan shot upright in bed. His eyes widened. His jaw fell slack. His chest began to heave violently.

"What's wrong, Mr. Morgan?"

He shook his head, unable to speak, desperately taking in breaths.

I tried to think but couldn't. The encyclopedia had vanished. My palms became moist, my throat dry. I couldn't move. My feet felt as if they were fixed to the floor.

"This man seems to be in distress," a deep voice said.

I spun around. Behind me was a man in his forties, with short black hair, dark eyes, and a handlebar mustache. "John Burnside," he said. "I trained here a number of years ago and was by to see some old friends. I'm a cardiologist in Virginia."

With his handlebar mustache and trimmed hair, Burnside looked like a figure from the Civil War. I remembered that a famous general of that name had fought in that conflict. Burnside deftly took the stethoscope from my pocket and placed it over Mr. Morgan's chest. After a few short seconds, he held the bell of the instrument over Mr. Morgan's heart and then removed the earpieces from his ears. "Here, listen."

I heard something that sounded like a spigot opened full blast, then closed for a moment, and opened again, the pattern repeated over and over. "This gentleman just tore through his aortic valve," Burnside said. "He needs the services of a cardiac surgeon. Pronto."

Dr. Burnside stayed with Mr. Morgan while I raced to find a nurse. She told another nurse to stat page the surgery team and ran back with me, the resuscitation cart in tow. Dr. Burnside quickly inserted an airway through Mr. Morgan's mouth and the nurse began to pump oxygen via an ambu bag. Other nurses arrived. The cardiac surgery resident appeared. Together we rushed Mr. Morgan to the OR. Dr. Burnside said goodbye. I thanked him.

I returned to the Baker and sat for several minutes at the nurses' station. I was in a daze. The event seemed surreal—enjoying a first conversation with one of my patients, then, like an earthquake, Mr. Morgan's sudden upheaval, then the deus ex machina appearance of Dr. Burnside. I felt the weight of the cards in my pocket. Straight A's when I was a student, play-acting. Now, in the real world, I gave myself an F.

~~How Doctors Think -by- Jerome Groopman

Monday, June 27, 2016

Day 317: Thinking With Whitehead



He glanced at me, suspicious. "You're not paying attention."

"I am!" I said, joining my hands to show my seriousness.

But he shook his head slowly. "Nothing interests you but excitement, violence."

"That's not true!" I said.

His eye opened wider, his body brightened from end to end. "You tell me what's true?" he said.

"I'm trying to follow you. I do my best," I said. "You should be reasonable. What do you expect?"

The dragon thought about it, breathing slowly, full of wrath. At last he closed his eyes: "Let us try starting somewhere else," he said. "It's damned hard, you understand, confining myself to concepts familiar to a creature of the Dark Ages. Not that one age is darker than another. Technical jargon from another dark age." He scowled as if hardly capable of forcing himself on. Then, after a long moment: "The essence of life is to be found in the frustrations of established order. The universe refuses the deadening influence of complete conformity. And yet in its refusal, it passes toward novel order as a primary requisite for important experience. We have to explain the aim at forms of order, and the aim at novelty of order, and the measure of success, and the measure of failure. Apart from some understanding, however dim-witted, of these characteristics of historic process..." His voice trailed off.

How does a dragon talk? Such is the problem John Gardner had to solve when he undertook to reinvent the epic poem Beowulf, a poem that, as every English-speaking student has learned, is the oldest European literary work written in a vernacular tongue to have come down to us. In the original work, slow and somber, Beowulf is the hero who fights against the forces of evil: the monster Grendel, whom he kills in the first part of the poem, and the dragon whom he will likewise kill in the second part, but who will mortally wound him. In Gardner's fiction, however, it is Grendel who tells his story, for the question of knowing how one gets to be a monster produces a more interesting viewpoint than the one defined by the good. One thus discovers that if Grendel kills men, it is because he is simultaneously the witness, judge, and impotent voyeur of the strange power fiction has over them, and confers upon them. He has seen them build themselves a destiny, a heroic past, a glorious future, with the words invented for them by the Shaper, or the Poet in the strong sense of giver of form. Grendel is aware of the lies in these words, but this knowledge excludes him from what is taking shape before his eyes: his lucidity brings him nothing but hatred and despair. Thus, he chooses, forever solitary, to be the Great Destroyer for human beings, or more precisely the Great Deconstructor. He will derive a bitter, monotonous pleasure from the proof he never ceases inflicting on humans of the impotence of their Gods, the senseless character of their lives, and the vanity of their heroes.

Hatred is a choice, not a consequence.  Before becoming the scourge of humankind, Grendel met a being much older than himself, the dragon that Beowulf was to fight one day. This dragon is "beyond good and evil," beyond both the passion for constructing and for destroying illusory constructions. For him, nihilistic rage is just as absurd as belief, for everything is tied together, everything goes hand in hand, creation and destruction, lies and authenticity. And he knows that Grendel will choose excitement and violence, despite his advice, the only one he can give: seek out gold and sit on it...

The homage of fiction to philosophy. It is fairly easy to give voice to a denouncer, an idol-smasher, a denier of all belief. Yet it is much harder to give voice to a nonhuman knowledge, more ancient than humankind, able to see farther than the insignificant ripple they create in the river of time. To escape the human point of view, and to do so with the calm self-evidence that is appropriate, as if the workings of the universe belonged to what is given, beyond all conquest and hypothesis, Gardner turned to the philosopher Alfred North Whitehead, copying out entire passages from Whitehead's last book, Modes of Thought.

In what follows, Grendel was to encounter Whitehead a second time. It happened in the course of an incursion that took him toward the circle of the Gods, those statues that terrorized men ask in vain for protection against him. Grendel meets the blind Ork, the eldest and wisest of the priests. He decides to have some fun and, before killing Ork, he asks him to confess his faith, and say who is the King of the Gods. This time, in breathless succession, it is the God of Science and the Modern World, the principle of limitation, ultimate irrationality, then that of Process and Reality, with his infinite patience, his tender concern that nothing may be lost, that come from the blind man's lips. Grendel, bewildered, lets his prey get away.

The words of a dragon, surging forth from the depths of the ages, associated with the neutrality of one for whom epochs, importances, and arrogances succeed one another, but also words of trance, come from nowhere, able to rout Grendel, who has declared war on the poet's tale-spinning: the reader has now been warned. It is a strange tongue that will gradually be elaborated here, a language that challenges all clear distinctions between description and tale-spinning, and induces a singular experience of disorientation in the heart of the most familiar experiences. It is a language that can scandalize, or else madden, all those who think they know what they know, but also all those for whom to approach the non-knowing at the heart of all knowledge is an undertaking that is meticulous, grave, and always to be taken up again.

~~Thinking With Whitehead: A Free and Wild Creation of Concepts -by- Isabelle Stengers

Sunday, June 26, 2016

Day 316: Humankind



Biogeographically speaking, islands are in some ways just small bits of mainland. Indeed, sometimes islands are literally small bits of mainland. They have broken away from the edge of a continent and moved out to sea, propelled by plate tectonics. Alternatively, they are still strongly connected to the mainland, but the connection happens for the moment to be flooded by high sea levels. Between ice ages, sea levels are high, as they are today. During ice ages, the water gets converted to ice, and sea levels drop. At the height of the last glaciation, roughly twenty thousand years ago, sea levels were over a hundred meters lower than they are now. If global warming does not prevent the return of the next glacial period, sea levels will then fall as in previous ice ages, and many coastal islands will return to being part of the mainland. Britain and Ireland will no longer be islands but, instead, simply the western edge of Europe, as they were during the last ice age.

That could be a pity, because in many ways, islands are special. Cut off by water, isolated from contact with others of their kind, island animals and plants can go their own way evolutionarily. The result is many species unique to a particular island and found nowhere else. The lemurs of Madagascar are a classic case for primates. The nene, or Hawaiian goose, is classic for birds.

A relevant piece of biogeographic jargon out there is the word “endemism,” or “endemic.” It means confined to a particular region. The lemurs are endemic to Madagascar. One often thinks that endemism means confined to a small area. However, the size of the region is irrelevant. In Hawaii, some bird species are endemic to just one island, but the kangaroo is endemic to Australia, and the reindeer is endemic to northern Eurasia.

Biogeographers speak of island “rules” to describe and explain a number of general ways in which island fauna and flora differ biologically from mainland fauna and flora. Humans follow some of the rules, but not others. To introduce one of the island rules, I am going to start with “Flo,” the Flores island “hobbit.” Strictly, Flo is probably not relevant in a book on human biogeography, for reasons that I will come to. However, the story allows me to introduce an island rule that Pacific-island humans break. Understanding why a phenomenon breaks a rule is often just as informative concerning underlying causes as understanding the reason for the rule in the first place.

In 2004, the announcement in the journal Nature of a new species of the genus Homo from Asia, Homo floresiensis, electrified the anthropological world. The species name comes from the discovery of 40,000- to 13,000-year-old bones in a cave on the island of Flores, east of Java and Bali in the Indonesian archipelago.

“Flo,” or the “hobbit,” as the tiny new species affectionately became known, was the first new hominin species claimed in Asia for over a century. The last one, oddly enough, was the first Homo erectus ever discovered. Named Java Man, it was found in 1891 by the Dutch paleoanthropologist Eugène Dubois or, to give him his impressive full name, Marie Eugène François Thomas Dubois.

The hobbit was strange in many ways. Apparently alive on Flores well over ten thousand years after the last non-human hominin died out elsewhere in the world, its skeleton turned out to be weird enough that doubts immediately arose regarding almost everything to do with it. Arguments and counter-arguments flared, burned out, and flared again. And the arguments were not, sad to say, always politely expressed, reasoned differences of opinion. Sarcastic commentary with the word research in quotation marks to denigrate the others’ scholarship marked one exchange. Other different opinions were characterized as “unsubstantiated assertions” by a “vocal group.” Suggestions of, shall we say, incomplete reporting of measurements, along with accusations of mishandling of the original material and denial of access to it, exemplify just some of the all-too-human nature of the early debate.

If the hobbit had been only slightly different from all human ancestors, probably only paleontologists, a journalist or two, and the occasional informed member of the public would have paid any attention. But the hobbit was extraordinary. It had such a mix of modern, ancient, and its own unique features, such as almost ridiculously long feet, that disagreement was inevitable.

Perhaps the most amazing feature of the hobbit was its small size—hence the nickname. Let me say that only one near-complete skeleton has been found, so I refer to the hobbit in the singular. The hobbit stood just over one meter high. That is thirty centimeters shorter than any human pygmy. I am a normal-sized male for my generation in the UK, at one meter seventy-eight centimeters. The hobbit would reach the bottom of my rib cage. The hobbit was stocky, so at thirty kilograms it weighed more than many modern-day pygmy adults do. Added to its amazingly small stature is an amazingly small brain. The one skull found shows a brain so tiny—four hundred and twenty-five cubic centimeters, one third the size of a modern human’s, in fact the size of a chimpanzee’s—that we have to go back three million years in human evolution, beyond Homo to the australopithecines, to find another hominin with a brain this small.

Is the hobbit a relict australopithecine species? If not, what is it? How do we explain its extremely long feet? How can the hobbit be a hominin, and yet have so small a brain? Why is this hobbit so small? How do we explain its mix of modern and ancient traits?

With no evidence of any australopithecines outside of Africa, one of the early explanations for the hobbit’s small body size, and especially its small brain size, was that it suffered from microcephalic dwarfism. The “microcephalic” part of that is just the typical doctor’s Greek jargon for “small-brained.” I cannot resist a story from my brother-in-law. He had a swollen knee. He went to the doctor. “Aah,” said the doctor, “you have patellitis.” Patellitis is Latin and Greek for swollen knee!

Other pathologies to explain the hobbit are Laron syndrome and a form of cretinism. Both of these conditions come with small stature. The first is a genetic condition, while the second can result from mineral deficiency, especially lack of iodine. It characterizes individuals who lack fully functioning thyroid glands, the glands that produce hormones essential for full growth. A recently suggested pathology to explain the hobbit’s features is Down syndrome, which can explain the odd mix of modern and apparently ancient traits.

If the hobbit was in fact a modern human suffering from a disease, it would not be relevant to this book on the biogeography of humans, given that none of the suggested diseases to explain its small stature and brain size confine themselves to any one part of the world. However, the hobbit lived on the small island of Flores, in eastern Indonesia, and the biogeography of small islands is highly relevant to the hobbit. Conversely, the hobbit is relevant to the biogeography of small islands, given how intensely scientists have studied it.

Flores covers on the order of thirteen and a half thousand square kilometers, approximately the size of Connecticut in the USA, or Northern Ireland in Britain. A feature of small islands is that species that are large-bodied on the mainland often evolve to become smaller, sometimes far smaller. On the islands of the Mediterranean Sea, such as Cyprus, Malta, Crete, and Sicily, elephants and mammoths shrank over many generations’ time to half the height that they were on the mainland, so they ended up just one and a half meters high at the shoulder.

California’s Channel Islands too had their own pygmy mammoth, just a little taller than the Mediterranean elephants. Similarly, the mammoth of Wrangel Island off the north coast of Siberia was roughly thirty percent smaller than the average mainland mammoth, as judged from their tooth sizes.

Flores is no exception to this phenomenon of miniaturization. Stegodons, an extinct form of elephant from mainland Asia, were some of the largest elephants ever to have lived. The Flores version, on the island at the same time as the hobbit, was one third smaller than its mainland relative. The adults, at about eight hundred and fifty kilograms, were too big for the hobbit to hunt, but the hobbits hunted the young ones. Archeologists have inferred this from cut marks on the bones in the cave.

~~Humankind: How Biology And Geography Shape Human Diversity -By- Alexander H. Harcourt

Saturday, June 25, 2016

Day 315: Einstein’s Dice and Schrödinger’s Cat



This is the tale of two brilliant physicists, the 1947 media war that tore apart their decades-long friendship, and the fragile nature of scientific collaboration and discovery.

When they were pitted against each other, each scientist was a Nobel laureate, well into middle age, and certainly past the peak of his major work. Yet the international press largely had a different story to tell. It was a familiar narrative of a seasoned fighter still going strong versus an upstart contender hungry to seize the trophy. While Albert Einstein was extraordinarily famous, his every pronouncement covered by the media, relatively few readers were conversant with the work of Austrian physicist Erwin Schrödinger.

Those following Einstein’s career knew that he had been working for decades on a unified field theory. He hoped to extend the work of nineteenth-century British physicist James Clerk Maxwell in uniting the forces of nature through a simple set of equations. Maxwell had provided a unified explanation for electricity and magnetism, called electromagnetic fields, and identified them as light waves. Einstein’s own general theory of relativity described gravity as a warping of the geometry of space and time. Confirmation of the theory had won him fame. However, he didn’t want to stop there. His dream was to incorporate Maxwell’s results into an extended form of general relativity and thereby unite electromagnetism with gravity.

Every few years, Einstein had announced a unified theory to great fanfare, only to have it quietly fail and be replaced by another. Starting in the late 1920s, one of his primary goals was a deterministic alternative to probabilistic quantum theory, as developed by Niels Bohr, Werner Heisenberg, Max Born, and others. Although he realized that quantum theory was experimentally successful, he judged it incomplete. In his heart he felt that “God did not play dice,” as he put it, couching the issue in terms of what an ideal mechanistic creation would be like. By “God” he meant the deity described by seventeenth-century Dutch philosopher Baruch Spinoza: an emblem of the best possible natural order. Spinoza had argued that God, synonymous with nature, was immutable and eternal, leaving no room for chance. Agreeing with Spinoza, Einstein sought the invariant rules governing nature’s mechanisms. He was absolutely determined to prove that the world was absolutely determined.

Exiled in Ireland in the 1940s after the Nazi annexation of Austria, Schrödinger shared Einstein’s disdain for the orthodox interpretation of quantum mechanics and saw him as a natural collaborator. Einstein similarly found in Schrödinger a kindred spirit. After sharing ideas for unification of the forces, Schrödinger suddenly announced success, generating a storm of attention and opening a rift between the men.

You may have heard of Schrödinger’s cat—the feline thought experiment for which the general public knows him best. But back when this feud took place, few people outside of the physics community had heard of the cat conundrum or of him. As depicted in the press, he was just an ambitious scientist residing in Dublin who might have landed a knockout punch on the great one.

The leading announcer was the Irish Press, from which the international community learned about Schrödinger’s challenge. Schrödinger had sent them an extensive press release describing his new “theory of everything,” immodestly placing his own work in the context of the achievements of the Greek sage Democritus (the coiner of the term “atom”), the Roman poet Lucretius, the French philosopher Descartes, Spinoza, and Einstein himself. “It is not a very becoming thing for a scientist to advertise his own discoveries,” Schrödinger told them. “But since the Press wishes it, I submit to them.”

The New York Times cast the announcement as a battle between a maverick’s mysterious methods and the establishment’s lack of progress. “How Schrödinger has proceeded we are not told,” it reported.

For a fleeting moment it seemed that a Viennese physicist whose name was then little known to the general public had beaten the great Einstein to a theory that explained everything in the universe. Perhaps it was time, puzzled readers may have thought, to get to know Schrödinger better.

Today, what comes to mind for most people who have heard of Schrödinger are a cat, a box, and a paradox. His famous thought experiment, published as part of a 1935 paper, “The Present Situation in Quantum Mechanics,” is one of the most gruesome devised in the history of science. Hearing about it for the first time is bound to trigger gasps of horror, followed by relief that it is just a hypothetical experiment that presumably has never been attempted on an actual feline subject.

Schrödinger proposed the thought experiment in 1935 as part of a paper that investigated the ramifications of entanglement in quantum physics. Entanglement (the term was coined by Schrödinger) is when the condition of two or more particles is represented by a single quantum state, such that if something happens to one particle the others are instantly affected.

Inspired in part by dialogue with Einstein, the conundrum of Schrödinger’s cat presses the implications of quantum physics to their very limits by asking us to imagine the fate of a cat becoming entangled with the state of a particle. The cat is placed in a box that contains a radioactive substance, a Geiger counter, and a sealed vial of poison. The box is closed, and a timer is set to precisely the interval at which the substance would have a 50–50 chance of decaying by releasing a particle. The researcher has rigged the apparatus so that if the Geiger counter registers the click of a single decay particle, the vial would be smashed, the poison released, and the cat dispatched. However, if no decay occurs, the cat would be spared.

According to quantum measurement theory, as Schrödinger pointed out, the state of the cat (dead or alive) would be entangled with the state of the Geiger counter’s reading (decay or no decay) until the box is opened. Therefore, the cat would be in a zombielike quantum superposition of deceased and living until the timer went off, the researcher opened the box, and the quantum state of the cat and counter “collapsed” (distilled itself) into one of the two possibilities.
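
For readers curious what “entangled” looks like when written out, here is a minimal sketch of the joint counter-and-cat state in Python; the basis labels, their ordering, and the numpy representation are my own illustrative choices, not Schrödinger’s or Halpern’s notation.

```python
import numpy as np

# Basis states for the Geiger counter and for the cat (labels are mine)
no_decay, decay = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The entangled superposition: (|no decay>|alive> + |decay>|dead>) / sqrt(2)
psi = (np.kron(no_decay, alive) + np.kron(decay, dead)) / np.sqrt(2)

# Born-rule probabilities for the four joint outcomes
labels = ["no decay & alive", "no decay & dead", "decay & alive", "decay & dead"]
for label, amplitude in zip(labels, psi):
    print(f"{label:>18}: {abs(amplitude) ** 2:.2f}")
# Only the two correlated outcomes have probability 0.50 each; "click but alive"
# and "no click but dead" never occur when the box is opened.
```

Until the box is opened, neither the counter nor the cat has a definite state of its own in this description; only the joint, correlated state exists, which is the “zombielike” superposition described above.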

~~Einstein’s Dice and Schrödinger’s Cat: How Two Great Minds Battled Quantum Randomness to Create a Unified Theory of Physics -by- Paul Halpern

Friday, June 24, 2016

Day 314: Escape From Quantopia



Giordano Bruno discovered in the lights of the night sky a bottomless ocean of suns where others saw only sketches projected from human imagination. Alone among the pioneers of science, Bruno fully absorbed the lesson of Copernicus, something even the solar revolutionary himself failed to grasp. Not only is the cosmos not centered on Earth but the very idea of center has no physical meaning. There’s no more a privileged location from which all places are subject to objective measurement than a virgin or a goatfish in the sky.

“For there is in the Universe,” wrote the itinerant philosopher, “neither center nor circumference, but, if you will, the whole is central, and every point also may be regarded as part of a circumference in respect to some other central point.” If Earth seems like the center of all things, that’s only because we live on it. To lunar dwellers the Moon is center-stage. It’s all perspective.

Bruno never hesitated to announce his relativistic revelation to any and all. For this and other “impieties,” the church ordered him burned at the stake on Ash Wednesday 1600.

His successors lacked his penetrating insight. Following Isaac Newton’s observation that massive bodies attract each other at a distance, consensus opinion coalesced around the idea of a subtle kind of matter permeating space that mediates the force of gravity much as water mediates waves on the ocean. In the nineteenth century scientists updated this approach with their contention that electromagnetic waves propagate across a “luminiferous aether.” Aside from serving as a fixed framework establishing the boundaries and absolute center of the universe, the aether was thought to enable the cosmic machine to operate by contact mechanics, not unlike the contraptions we fashion down here on the terrestrial plane.

By the turn of the twentieth century, the great questions of existence seemed to be dissolving in the magic potion of science. The world had never been so clear, the ground never so solid and dependable.

Since then all center and substance have shattered. Bruno could at least count on God. Now we’ve got nothing, adrift in a void without reference points. The Great Wall of Certainty has collapsed under its own density. From the other side Bruno confronts us with crackling skin and blazing eye.

Up until 1897 the idea of material substance wasn’t generally regarded as a pre-scientific mirage. But in that year JJ Thomson cut the “uncuttable” atom. The solid core of matter turned out to be internally differentiated, with vast empty gulfs punctuated by occasional pinpricks of mass. An electron isn’t so much a thing as a field of possibilities across which a “particle” randomly bops around like a speck of static on a TV screen. It’s a dance whose steps can be calculated according to a probability wave. Let’s say an electron is trapped inside a perfectly sealed container. As it bounces off the walls, its probability wave gradually seeps out to the surrounding area until the electron itself is no longer inside the container.

This is why quantum physicists don’t speak of substance. Reality is composed of “information.” The randomness of the quantum level averages out to the predictability of the perceptual level. All that is solid melts into stats.

The de-centering of all centers began in 1887 when Albert Michelson and Edward Morley carried out an experiment designed to prove the existence of the aether. Their “interferometer,” a box containing a telescope and mirrors set at odd angles, could measure the speed of light on Earth relative to its speed in outer space. Since our planet is in motion, scientists reasoned that the light reaching us from a distant source should be either faster or slower than in the stillness of space, depending on whether we’re approaching the starlight or receding. But when Michelson and Morley looked at their results, they found no interference and therefore no difference in the speed of light relative to Earth’s motion.

For years their findings puzzled physicists, though Hendrik Lorentz wrote up some interesting equations meant to explain how the aether was somehow still relevant despite the no-show in ’87. Not until Einstein came along did anyone see the true weight of the Michelson-Morley results. With a little tweaking of Lorentz’s equations, he demonstrated that space has no fixed framework, no center or circumference. As far as the universe is concerned, we are nowhere. Bruno was vindicated.

Whether you’re adrift in deep space or breezing along at 185,000 miles per second, light always travels 186,000 miles per second faster than you. Change your frame of reference and the flow of time changes along with it. Only the speed of light remains constant.
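
As a numerical aside (my own sketch, not the author’s), the standard relativistic formulas make both claims in this paragraph concrete: the moving observer still measures light receding at the full 186,000 miles per second, and that observer’s clock runs slow by the Lorentz factor.

```python
import math

c = 186_000.0   # speed of light in miles per second (rounded, as in the text)
v = 185_000.0   # the observer's speed from the example above

# Speed of a light beam as measured in the moving observer's frame,
# via the relativistic velocity-addition formula: (c - v) / (1 - c*v/c^2)
light_in_moving_frame = (c - v) / (1 - c * v / c**2)
print(f"light as the mover measures it: {light_in_moving_frame:,.0f} miles per second")

# Lorentz factor: how much the mover's clock runs slow relative to a clock "at rest"
gamma = 1 / math.sqrt(1 - (v / c) ** 2)
print(f"time dilation factor: {gamma:.1f}")   # roughly 9.7
```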

Or so we thought. Light has many different speeds, depending on what kind of medium it’s traversing. Water, for instance, slows it to about 75 percent of its vacuum speed. In the final days of the twentieth century, researcher Lene Vestergaard Hau imprisoned a beam of light in a frozen cloud of atoms, stopping it dead in its tracks and demonstrating, once and for all, that nothing is sacred.

...

You don’t have to consult Aristotle to realize something is holding all this up. You can’t have miles and miles of accident and no essence anywhere in sight. Something’s got to be substantial, not just informational. Absolute, not just relative. Even illusion is illusory only in contrast to reality. Who or what is hallucinating this hallucination?

For the answer we must go back, once more, to that magnetic moment when the world turned inside out. The clock is winding down on the nineteenth century as young Henri Bergson, a Polish Jew transplanted to France, studies philosophy at the École Normale Supérieure. Captivated by the English positivist Herbert Spencer and his book, Progress: Its Law and Cause, Bergson is dazzled by the promise of a completely coordinated system of knowledge, a synthetic scheme founded on a single absolute principle: the persistence of force. Physical science, prophesies Spencer, shall render the world transparent, granting unimagined power to the human race.

Then one day Bergson is shaken by a terrible insight, as if the whole twentieth-century intellectual meltdown has appeared to him in a blast.

There’s no time in physics.

“Newton’s laws of motion,” according to physicists Christopher Hill and Leon Lederman, “make no distinction between past and future, and time can apparently flow in any direction.” On their website devoted to mathematician Emmy Noether and her principle of symmetry-breaking, Hill and Lederman describe the universe as a movie that could run through a projector in reverse as readily as forward. “When applied to simple systems, billiard balls colliding on the table, atomic collisions, etc., it would not be possible to tell in which direction the film was progressing. The motion we see satisfies laws of motion that are the same, whether run forward or backward.”5 Future and past are effectively interchangeable.

“Notice another peculiar aspect of physics,” write Hill and Lederman. “Nowhere in any formulation does the issue of a special point in time called ‘now’ ever occur. Yet, we humans sense something we call ‘now.’ Is it an illusion? We call this the ‘Now’ question.”

I can’t help but feel present. Even memories concern moments once present. To be human is to be temporal, informed by a past and oriented toward a future. Without ongoing presence our consciousness, the sensation of now, is null and void. Lacking real time, we aren’t real either.

~~Escape From Quantopia: Collective Insanity in Science and Society -by- Ted Dace

Thursday, June 23, 2016

Day 313: How We'll Live on Mars



Recently, after one of his rockets exploded just above its launch pad, Elon Musk wryly tweeted: “Rockets are tricky.” He’s right: close to two-thirds of all the attempts to get probes to Mars have failed.

A casual observer might well wonder why humans have had so much trouble getting to Mars when getting to the moon more than fifty years ago seemed relatively easy. Mostly, it’s a matter of distances. The scale changes are phenomenal. The moon floats between 225,000 and 250,000 miles from Earth, depending on the lunar cycle. Mars can be up to a thousand times farther away. In 2003, Mars and Earth were closer than they had been in almost sixty thousand years—only about 34 million miles apart. But because Earth’s orbit around the sun takes 365 days and Mars’s takes 687 Earth days, the two planets can get out of sync and wind up very far apart, with each on a different side of the sun. When they are far apart, they are really far apart—about 250 million miles. Mars thus varies between being 140 and 1,000 times farther away from Earth than the moon.
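
As a quick check on the ratios just quoted (my own arithmetic, not the book’s), a few lines of Python reproduce the “140 to 1,000 times farther” range; the 240,000-mile figure is an assumed midpoint of the lunar distance range above.

```python
# Rough check of the "140 to 1,000 times farther" claim, using the distances
# quoted above. The 240,000-mile figure is an assumed midpoint of the
# 225,000-250,000 mile lunar range.
moon_avg_miles = 240_000
mars_closest_miles = 34_000_000     # the 2003 close approach
mars_farthest_miles = 250_000_000   # opposite sides of the sun

print(round(mars_closest_miles / moon_avg_miles))    # ~142, i.e. about 140 times
print(round(mars_farthest_miles / moon_avg_miles))   # ~1042, i.e. about 1,000 times
```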

Put another way, humans can make a round trip to the moon in six days. (We could have gotten there in one day with the boost the Saturn V rocket offered, but we would have been going so fast when we arrived that we would simply have shot by instead of being captured by the moon’s weak gravity.) Using the Hohmann transfer orbits suggested by von Braun in Das Marsprojekt, even if we went much faster than the speed at which the Apollo astronauts went to the moon, we would still have to fly about a thousand times farther than the distance to the moon to end up at Mars. That’s because we simply can’t carry enough fuel to blast ahead in a straight line. Without unlimited cheap energy, we will always be in orbit around something in this solar system, so all our trajectories will be curved. There are no foreseeable shortcuts in the next twenty years that could get us to Mars in much less than 250 days each way, although SpaceX is designing more powerful and more efficient rocket engines that could shorten the trip substantially.
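
The “about 250 days each way” figure can be recovered from Kepler’s third law applied to an idealized Hohmann transfer ellipse. The sketch below is my own back-of-the-envelope version, not Petranek’s or von Braun’s calculation, and it ignores launch windows and every real mission constraint.

```python
# Idealized Hohmann transfer: an ellipse touching Earth's orbit (1 AU) at one
# end and Mars's orbit (~1.524 AU) at the other. The trip is half the ellipse.
earth_au, mars_au = 1.000, 1.524
a = (earth_au + mars_au) / 2             # semi-major axis of the transfer orbit, in AU
period_years = a ** 1.5                  # Kepler's third law: T^2 = a^3 (years, AU)
one_way_days = period_years / 2 * 365.25
print(round(one_way_days))               # ~259 days, close to the ~250 quoted above
```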

Even the early, more straightforward missions to Mars—missions that merely attempted to fly by the planet—regularly met with disaster. The far more difficult Mars orbiter missions, and especially the lander missions, made something of a mockery of our grasp on space technology.

The Soviets seemed to get the worst of the early Martian calamities. The first Earth object ever to reach the surface of Mars was a Soviet lander called Mars 2. It crash-landed in November of 1971, and was a follow-up project to Kosmos 419, which never got out of orbit around the Earth, much less headed to Mars. The next month, Mars 3 actually made a successful landing but stopped sending signals after twenty seconds. Mars 4’s guidance system failed, and it whizzed by the planet completely. Mars 5 was the most successful Soviet probe. It was inserted into an elliptical orbit in February 1974, and returned about sixty photos during twenty-two orbits, then failed. Mars 6 reached the planet in March of 1974 and launched a lander that crashed on the surface. It transmitted atmospheric data for about four minutes before it went silent, but the data was largely incomprehensible because of a computer chip failure. Mars 7 also entered orbit in March 1974 but launched its lander four hours too early and missed the planet. There were a handful of other earlier Mars missions launched by the Soviets that failed, as well as later failed missions. In 1996 the Russian Space Agency launched an orbiter/lander called Mars 96 that didn’t escape Earth’s gravity and broke up over the Pacific Ocean. Since then, the Russians have seemed less than eager to challenge their jinx.

A huge hindrance to successfully landing a probe on Mars is that it takes communications a long time to arrive from Earth. When Earth and Mars are farthest apart, it takes a radio signal twenty-one minutes to get from Earth to Mars, and then another twenty-one minutes for a return signal to get back to Earth. Unmanned spacecraft must therefore use artificial intelligence software to make decisions in emergencies, because there’s no time to call home for help.
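
The delay itself is nothing more than distance divided by the speed of light, so it swings widely with the planets' separation; here is a minimal Python sketch (the roughly twenty-minute figure depends on which separation you assume):

SPEED_OF_LIGHT_MPS = 186282   # miles per second

def one_way_delay_minutes(miles):
    return miles / SPEED_OF_LIGHT_MPS / 60

print(round(one_way_delay_minutes(34e6), 1))    # ~3 minutes at the 2003 close approach
print(round(one_way_delay_minutes(250e6), 1))   # ~22 minutes near maximum separation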

But all the bad history of early lander mission failure slipped into the darker reaches of our consciousness after NASA scored big by successfully landing the Spirit and Opportunity rovers on Mars. More recently, the success of the Curiosity rover has stolen our attention. Opportunity is still actively exploring Mars after more than a decade. Curiosity finished a Martian year’s worth of exploration (just under two Earth years) in 2014, and is just getting started on its longer mission. Nevertheless, the distances these rovers have covered are not impressive. Opportunity has traveled only about twenty-six miles since 2004, and Curiosity has gone a bit more than six miles in nearly three years.

Despite the failures of the past, NASA’s success with Curiosity proves that relatively large payloads can be delivered to the surface of Mars, making not only manned flights but also cargo and resupply flights more realistic. Changing the equation from large payloads like Curiosity to human cargo is mostly just a step up in scale, frequency of cargo launches, and oxygen. SpaceX is refining a Dragon spacecraft able to carry seven astronauts, which it expects to fly to the International Space Station as early as 2016, although Musk recently said that “2017 is probably a realistic expectation of when we’ll send a human into space for the first time.” He has joked that a stowaway astronaut aboard the current cargo Dragon that resupplies the International Space Station would survive the flight because part of the craft is pressurized; it was designed from the start to be converted to carry astronauts instead of cargo.

Currently, the Russian Soyuz spacecraft is the only vehicle that can get astronauts to the space station and back in the absence of the space shuttle. It dates to 1966 and, along with the Soyuz rocket that carries it into space, has proven to be the most reliable space vehicle in history. As made famous in the movie Gravity, at least one Soyuz spacecraft is attached to the International Space Station at all times for use as an emergency escape vehicle. The Russians charge more than $50 million to fly an astronaut to the space station. SpaceX wants that business.

~~How We'll Live on Mars -by- Stephen L. Petranek

Wednesday, June 22, 2016

Day 312: Improvisation, Creativity, and Consciousness



In the fall of 2003, I was a participant in a symposium convened by Harvard University’s Law and Business Schools called Improvisation and Negotiation. Perhaps ironically given the theme, jazz musicians made up a distinct minority at the event, with the majority consisting of psychologists, sociologists, lawyers, corporate leaders, and colleagues from other fields beyond music in which interest in improvisation as key to optimal performance has been on the rise. I was deeply impressed by the level of dialogue about a process that I, and much of the jazz community, tend to take for granted as unique to our domain. As a jazz musician who teaches at a classically oriented institution, moreover, I greatly appreciated being part of an academic gathering in which my work was not viewed as peripheral but central to the discourse. I commonly stress to my graduate students that the marginalized status of jazz and improvised music in musical academe is no reliable barometer of the rising appreciation for this kind of creativity, not only in the broader academic world but also in an increasing number of professional circles. Thus, my students are exposed to their share of commentary similar to this chapter’s epigraph from Maslow in hopes of inspiring them to appreciate the importance of what they do in the broader scope of things. Maslow, indeed, is a treasure trove for such inspiration, as when he further exhorts educational systems to “create a new kind of human being who is comfortable with change, who enjoys change, who is able to improvise, who is able to face with confidence, strength, and courage a situation of which he has absolutely no forewarning.” And nothing tops my all-time favorite: “We must develop a race of improvisers, of here-now creators.”

From an integral perspective, it might also be argued that we need to develop a race of “peak experiencers,” to invoke another important aspect of Maslow’s thought, one pertaining to the experience of transformed episodes of consciousness in which sense of self, clarity, wholeness, mind-body integration, inner calm and well-being, and a variety of other faculties are heightened. Hence we arrive at the interior dimensions of the creativity-consciousness relationship, the point at which the New Jazz Studies—championing the creativity component—gives rise to what I propose as Integral Jazz Studies. As previously discussed, jazz musicians report vivid episodes of such heightened states, and it is thus no surprise that the tradition boasts a long legacy of leading artists who have significantly delved into contemplative practices and studies in order to enhance this dimension of their work and lives. Thus, in addition to the common acknowledgment of Maslow as a founder of the humanistic psychology movement, his work might also be regarded as important to the emergence of the integral jazz movement.

 It is from the standpoint of the creativity-consciousness relationship and its interior-exterior, integral dimensions that this chapter explores jazz’s journey into the academic world, the arena in which its integral properties, if harnessed, have the capacity to yield considerable impact on the broader educational mission, and by extension, society. In so exploring, we will make full use of our integral delineative and diagnostic faculties. For, as we are about to see, as rich as jazz might be in its inherent integral properties, academic approaches to the discipline have been strongly shaped by the overarching materialist patterns inherited from the broader musical and extramusical academic knowledge base. These will need to be identified and addressed if the field is to uphold the transformational function of which it is capable.

 Overview of Jazz Education

In order to understand how materialist or self-confining third-person tendencies have taken hold in jazz study, it is first essential to acknowledge how the marginalized status of the idiom in musical academe, at least in part, contributed to them. Jazz not only brought to the academy a vastly different and expanded process spectrum that departed from the norm, it also introduced Afrological musicocultural features to which the prevailing Eurological culture was not receptive. To ignore the issue of race in any assessment of the situation would be a significant oversight. Between what LeRoi Jones reminds us was the “unbelievably cruel” circumstance of slavery dating back several centuries and reminders of what Karlton Hester candidly identifies as a musical “bigotry” still evident well into the twenty-first century, a topic to be explored in some depth, the dynamics of race in the broader society have clearly played out in musical academe. It is thus not surprising that jazz was overtly scorned and at times entirely forbidden, as Bruno Nettl points out, in university music departments. Oliver Lake recounts that this even included extracurricular jam sessions: “They didn’t consider jazz as music, and if they heard you playing jazz, you would be admonished. We would have to wait until all the instructors were gone, and then we would start jamming.” Dave Brubeck, reflecting on the early days of his college circuit performances, recalls “controversies at the institutions as to whether or not we should be allowed to play.” Underscoring entirely uninformed notions that playing jazz is somehow damaging to musical instruments, Brubeck adds that “[s]ometimes I would be led to an old, beat-up piano for the performance when there’d also be a great grand piano backstage that they wouldn’t let me go near.”

While nowadays it is difficult to imagine a music department or school that forbids its students from curricular or extracurricular jazz activity, ample indicators of continued institutional marginalization may be cited. Perhaps most notable is the fact that, despite the idiom’s rich foundational skills, jazz remains largely excluded from musical academe’s core curriculum. Short units on jazz, which generally do not involve hands-on contact with the music, may be found in core music theory and history sequences, but full-semester coursework in jazz improvisation, theory, composition, or arranging is relegated to elective status for all but jazz majors. Because music curricula tend to be filled to the brim with conventional requirements, leaving little time, space, or energy for electives, the result is that the majority of music majors—for whom interpretive performance and analysis of European classical repertory is the focus—continue to graduate with little or no hands-on engagement with jazz. This is most conspicuous in the case of American music students gaining certification to teach music in American public schools. As David Baker points out, “one or two courses in jazz”—which appears to be the best-case scenario, with many teacher certification programs including no jazz training—is not sufficient “to be able to adequately teach this music.”

~~Improvisation, Creativity, and Consciousness: Jazz as Integral Template for Music, Education, and Society -by- Edward W. Sarath

Tuesday, June 21, 2016

Day 311: Uncreative Writing


There is a room in the Musée d’Orsay that I call the “room of possibilities.” The museum is roughly set up chronologically, happily wending its way through the nineteenth century, until you hit this one room with a group of painterly responses to the invention of the camera—about a half dozen proposals for the way painting could respond. One that sticks in my mind is a trompe l’oeil solution where a figure is painted literally reaching out of the frame into the “viewer’s space.” Another incorporates three-dimensional objects atop the canvas. Great attempts, but as we all know, impressionism—and hence modernism—won out. Writing is at such a juncture today.

With the rise of the Web, writing has met its photography. By that, I mean writing has encountered a situation similar to what happened to painting with the invention of photography, a technology so much better at replicating reality that, in order to survive, painting had to alter its course radically. If photography was striving for sharp focus, painting was forced to go soft, hence impressionism. It was a perfect analog to analog correspondence, for nowhere lurking beneath the surface of painting, photography, or film was a speck of language. Instead, it was image to image, thus setting the stage for an imagistic revolution.

Today, digital media has set the stage for a literary revolution. In 1974 Peter Bürger was still able to make the claim that “because the advent of photography makes possible the precise mechanical reproduction of reality, the mimetic function of the fine arts withers. But the limits of this explanatory model become clear when one calls to mind that it cannot be transferred to literature. For in literature, there is no technical innovation that could have produced an effect comparable to that of photography in the fine arts.” Now there is.

If painting reacted to photography by going abstract, it seems unlikely that writing is doing the same in relation to the Internet. It appears that writing’s response—taking its cues more from photography than painting—could be mimetic and replicative, primarily involving methods of distribution, while proposing new platforms of receivership and readership. Words very well might be written not only to be read but also to be shared, moved, and manipulated, sometimes by humans, more often by machines, providing us with an extraordinary opportunity to reconsider what writing is and to define new roles for the writer. While traditional notions of writing are primarily focused on “originality” and “creativity,” the digital environment fosters new skill sets that include “manipulation” and “management” of the heaps of already existent and ever-increasing language. While the writer today is challenged by having to “go up” against a proliferation of words and compete for attention, she can use this proliferation in unexpected ways to create works that are as expressive and meaningful as works constructed in more traditional ways.

I’m on my way back to New York from Europe and am gazing wearily at the map charting our plodding progress on the screen sunk into the seatback in front of me. The slick topographic world map is rendered two dimensionally, showing the entire earth, half in darkness, half in light, with us—represented as a small white aircraft—making our way west. The screens change frequently, from graphical maps to a series of blue textual screens announcing our distance to destination—the time, the aircraft’s speed, the outside air temperature, and so forth—all rendered in elegant white sans serif type. Watching the plane chart its progress is ambient and relaxing as the beautiful renderings of oceanic plates and exotic names of small towns off the North Atlantic—Gander, Glace Bay, Carbonear—stream by.

Suddenly, as we approach the Grand Banks off the coast of Newfoundland, my screen flickers and goes black. It stays that way for some time, until it illuminates again, this time displaying generic white type on a black screen: the computer is rebooting and all those gorgeous graphics have been replaced by lines of DOS startup text. For a full five minutes, I watch line command descriptions of systems unfurling, fonts loading, and graphic packages decompressing. Finally, the screen goes blue and a progress bar and hourglass appear as the GUI loads, returning me back to the live map just as we hit landfall.

What we take to be graphics, sounds, and motion in our screen world is merely a thin skin under which resides miles and miles of language. Occasionally, as on my flight, the skin is punctured and, like getting a glimpse under the hood, we see that our digital world—our images, our film and video, our sound, our words, our information—is powered by language. And all this binary information—music, video, photographs—is composed of language, miles and miles of alphanumeric code. If you need evidence of this, think of when you’ve mistakenly received a .jpg attachment in an e-mail that has been rendered not as image but as code that seems to go on forever. It’s all words (though perhaps not in any order that we can understand): The basic material that has propelled writing since its stabilized form is now what all media is created from as well.
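
Anyone can reproduce that glimpse under the hood; a minimal Python sketch does what a text editor does when it is handed an image (the filename here is just a placeholder for any image on hand):

with open("photo.jpg", "rb") as f:     # placeholder path; any .jpg will do
    raw = f.read(300)                  # the first few hundred bytes are plenty

text_view = raw.decode("latin-1")      # force every byte into some character, errors and all
print(text_view)                       # a soup of glyphs much like the three lines quoted below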

Besides functionality, code also possesses literary value. If we frame that code and read it through the lens of literary criticism, we will find that the past hundred years of modernist and postmodernist writing has demonstrated the artistic value of similar seemingly arbitrary arrangements of letters.

Here are three lines of a .jpg opened in a text editor:

^?Îj€≈ÔI∂fl¥d4˙‡À,†ΩÑÎóªjËqsõëY”Δ″/å)1Í.§ÏÄ@˙’∫JCGOnaå$ë¶æQÍ″5ô’5å
p#n›=ÃWmÃflÓàüú*Êœi”›_$îÛμ}Tß‹æ´’[“Ò*ä≠ˇ
Í=äÖΩ;Í”≠Õ¢ø¥}è&£S¨Æπ›ëÉk©ı=/Á″/”˙ûöÈ>∞ad_ïÉúö˙€Ì—éÆΔ’aø6ªÿ-

Of course a close reading of the text reveals very little, semantically or narratively. Instead, a conventional glance at the piece reveals a nonsensical collection of letters and symbols, literally a code that might be deciphered into something sensible.

Yet what happens when sense is not foregrounded as being of primary importance? Instead, we need to ask other questions of the text. Below are three lines from a poem by Charles Bernstein called “Lift Off,” written in 1979:

HH/ ie,s obVrsxr;atjrn dugh seineocpcy i iibalfmgmMw
er,,me”ius ieigorcy¢jeuvine+pee.)a/nat” ihl”n,s
ortnsihcldseløøpitemoBruce-oOiwvewaa39osoanfJ++,r”P

Intentionally bereft of literary tropes and conveyances of human emotion, Bernstein chooses to emphasize the workings of a machine rather than the sentiments of a human. In fact, the piece is what its title says it is: a transcription of everything lifted off a page with a correction tape from a manual typewriter. Bernstein’s poem is, in some sense, code posing as a poem: careful reading will reveal bits of words and the occasional full word that was erased. For example, you can see the word “Bruce” on the last line, possibly referring to Bruce Andrews, Bernstein’s coeditor of the journal L=A=N=G=U=A=G=E. But such attempts at reassembling won’t get us too far: what we’re left with are shards of language made up of errors from unknown documents. In this way Bernstein emphasizes the fragmentary nature of language, reminding us that, even in this shattered state, all morphemes are prescribed with any number of references and contexts; in this case the resultant text is a tissue of quotations drawn from a series of ghost writings.

~~Uncreative Writing: Managing Language in the Digital Age -by- Kenneth Goldsmith

Monday, June 20, 2016

Day 310: Deep Simplicity



When other people hear scientists refer to ‘complex systems’, this sometimes creates a barrier, since to many people ‘complex’ means ‘complicated’, and there is an automatic assumption that if a system is complicated it will be difficult to understand. Neither assumption is necessarily correct. A complex system is really just a system that is made up of several simpler components interacting with one another. As we have seen, the great triumphs of science since the time of Galileo and Newton have largely been achieved by breaking complex systems down into their simple components and studying the way the simple components behave (if necessary, as a first approximation, taking the extra step of pretending that the components are even simpler than they really are). In the classic example of the success of this approach to understanding the world, much of chemistry can be understood in terms of a model in which the simple components are atoms, and for these purposes it scarcely matters what the nuclei of those atoms are composed of. Moving up a level, the laws which describe the behaviour of carbon dioxide gas trapped in a box can be understood in terms of roughly spherical molecules bouncing off one another and the walls of their container, and it scarcely matters that each of those molecules is made up of one carbon atom and two oxygen atoms linked together. Both systems are complex, in the scientific sense, but easy to understand. And the other key to understanding, as these examples highlight, is choosing the right simpler components to analyse; a good choice will give you a model with widespread applications, just as the atomic model applies to all of chemistry, not just to the chemistry of carbon and oxygen, and the ‘bouncing ball’ model of gases applies to all gases, not just to carbon dioxide.

At a more abstract level, the same underlying principle applies to what mathematicians choose to call complex numbers. The name has frightened off many a student, but complex numbers are really very simple, and contain only two components, scarcely justifying the use of the term ‘complex’ at all. The two components of a complex number are themselves everyday numbers, which are distinguished from each other because one of them is multiplied by a universal constant labelled i. So whereas an everyday number can be represented by a single letter (say, X), a complex number is represented by a pair of letters (say, A + iB). It happens that i is the square root of -1, so that i × i = -1, but that doesn’t really matter. What matters is that there is a fairly simple set of rules which tell you how to manipulate complex numbers – what happens when you multiply one complex number by another, or add two of them together, and so on. These rules really are simple – much simpler, for example, than the rules of chess. But using them opens up a whole new world of mathematics, which turns out to have widespread applications in physics, for example in describing the behaviour of alternating electric current and in the wave equations of quantum mechanics.
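
That fairly simple set of rules is easy to see in action; Python happens to have complex numbers built in (it writes i as j), so a minimal sketch looks like this:

a = 1 + 2j
b = 3 + 4j

print(a + b)      # (4+6j): the two components are added separately
print(a * b)      # (-5+10j): (AC - BD) + (AD + BC)i, using i times i = -1
print((1j) ** 2)  # (-1+0j): i squared really is -1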

But there’s a more homely example of the simplicity of complexity. The two simplest ‘machines’ of all are the wheel and the lever. A toothed cogwheel, like the gearwheels of a racing bicycle, is in effect a combination of lever and wheel. A single wheel – even a single gearwheel – is not a complex object. But a racing bicycle, which is essentially just a collection of wheels and levers, is a complex object, within the scientific meaning of the term – even though its individual component parts, and the way they interact with one another, are easy to understand. And this highlights the other important feature of complexity, as the term is used in science today – the importance of the way things interact with one another. A heap of wheels and levers would not in itself be a complex system even if the heap consisted of all the pieces needed to make a racing bike. The simple pieces have to be connected together in the right way, so that they interact with one another to produce something that is greater than the sum of its parts. And that’s complexity, founded upon deep simplicity.

When scientists are confronted by complexity, their instinctive reaction is to try to understand it by looking at the appropriate simpler components and the way they interact with one another. Then they hope to find a simple law (or laws) which applies to the system they are studying. If all goes well, it will turn out that this law also applies to a wider sample of complex systems (as with the atomic model of chemistry, or the way the laws of cogwheels apply both to bicycles and chronometers), and that they have discovered a deep truth about the way the world works. The method has worked for over 300 years as a guide to the behaviour of systems close to equilibrium. Now it is being applied to dissipative systems on the edge of chaos – and what better terrestrial example could there be of a system in which large amounts of energy are dissipated than an earthquake?

One of the most natural questions to ask about earthquakes is how often earthquakes of different sizes occur. Apart from its intrinsic interest, this has great practical relevance if you live in an earthquake zone, or if you represent an insurance company trying to decide what premiums to charge for earthquake insurance. There are lots of ways in which earthquakes might be distributed through time. Most earthquakes might be very large, releasing lots of energy which then takes a long time to accumulate once again. Or they might all be small, releasing energy almost continuously, so that there is never enough to make a big ’quake. There could be some typical size for an earthquake, with both bigger and smaller events relatively rare (which is the way the heights of people are distributed, around some average value). Or they could be completely random. There is no point in guessing; the only way to find out is to look at all the records of earthquakes, and add up how many of each size have occurred. Appropriately, the first person to do this was Charles Richter (1900–85), who introduced the eponymous scale now widely used to measure the magnitude of earthquakes.

The Richter scale is logarithmic, so that an increase of one unit on the scale corresponds to an increase in the amount of energy released by a factor of 30; a magnitude 2 earthquake is 30 times as powerful as a magnitude 1 earthquake, a magnitude 3 earthquake is 30 times more powerful than a magnitude 2 earthquake (and therefore 900 times more powerful than a magnitude 1 earthquake), and so on. Although the name attached to the scale is Richter’s alone, he worked it out, at the beginning of the 1930s, with his colleague Beno Gutenberg (1889–1960), and in the middle of the 1950s the same team turned their attention to the investigation of the frequency of earthquakes of different sizes. The team looked at records of earthquakes worldwide, and combined them in ‘bins’ corresponding to steps of half a magnitude on the Richter scale – so all the earthquakes with magnitude between 5 and 5.5 went in one bin, all those between 5.5 and 6 in the next bin, and so on. Remembering that the Richter scale itself is logarithmic, in order to compare like with like they then took the logarithm of each of these numbers. When they plotted a graph showing the logarithm of the number of earthquakes in each bin in relation to the magnitude itself (a so-called ‘log-log plot’), they found that it made a straight line. There are very many small earthquakes, very few large earthquakes, and the number in between lies, for any magnitude you choose, on the straight line joining those two extreme points. This means that the number of earthquakes of each magnitude obeys a power law – for every 1,000 earthquakes of magnitude 5 there are roughly 100 earthquakes of magnitude 6, 10 earthquakes of magnitude 7, and so on. This is now known as the Gutenberg-Richter law; it is a classic example of a simple law underlying what looks at first sight to be a complex system. But what exactly does it mean, and does it have any widespread applications?
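
In its modern form the Gutenberg-Richter law is usually written log10(N) = a - b*M, with b close to 1, so each step up in magnitude cuts the count of earthquakes by a factor of about ten. A minimal Python sketch with illustrative constants (not fitted to real survey data) reproduces the counts quoted above:

A, B = 8.0, 1.0   # illustrative constants; real catalogues give b close to 1

def expected_count(magnitude):
    return 10 ** (A - B * magnitude)

for m in range(5, 9):
    print(m, round(expected_count(m)))   # 5: 1000, 6: 100, 7: 10, 8: 1
# Plotting the logarithm of these counts against magnitude gives the straight line described above.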

First, it’s worth stressing just how powerful a law of nature this is. An earthquake of magnitude 8, a little smaller than the famous San Francisco earthquake of 1906, is 20 billion times more energetic than an earthquake of magnitude 1, which corresponds to the kind of tremor you feel indoors when a heavy lorry passes by in the street outside. Yet the same simple law applies across this vast range of energies. Clearly, it is telling us something fundamental about how the world works.
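
The 20 billion figure follows directly from the factor of roughly 30 per magnitude step quoted above, compounded over the seven steps from magnitude 1 to magnitude 8; a one-line check in Python:

print(30 ** 7)   # 21870000000, i.e. roughly 20 billion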

~~Deep Simplicity : Chaos, Complexity and the Emergence of Life -by- John Gribbin

Sunday, June 19, 2016

Day 309: Success And Dominance In Ecosystems



Social insects saturate most of the terrestrial environment. In ways that become fully apparent only when we bring our line of sight down to within a millimeter of the ground surface, they lie heavily on the rest of the fauna and flora, constraining their evolution.

That fact has struck home to me countless times during my life as a biologist. Recently it came again as I walked through the mixed coniferous and hardwood forests on Finland’s Tvarminne Archipelago. My guides were Kari Vepsalainen and Riitta Savolainen of the University of Helsinki, whose research has meticulously detailed the distribution of ants in the archipelago and the histories of individual colonies belonging to dominant species. We were in a cold climate, less than 800 kilometers from the arctic circle, close to the northern limit of ant distribution. Although it was mid-May, the leaves of most of the deciduous trees were still only partly out. The sky was overcast, a light rain fell, and the temperature at midday was an unpleasant (for me) 12°C. Yet ants were everywhere. Within a few hours, as we walked along trails, climbed huge moss-covered boulders, and pulled open tussocks in bogs, we counted nine species of Formica and an additional eight species belonging to other genera, altogether about one-third the known fauna of Finland. Mound-building Formicas dominated the ground surface. The nests of several species, especially F. aquilonia and F. polyctena, were a meter or more high and contained hundreds of thousands of workers. Ants seethed over the mound surfaces. Columns traveled several tens of meters between adjacent mounds belonging to the same colony. Other columns moved up the trunks of nearby pine trees, where the ants attended groups of aphids and collected their sugary excrement. Swarms of solitary foragers deployed from the columns in search of prey. Some could be seen returning with geometrid caterpillars and other insects. We encountered a group of F. polyctena workers digging into the edge of a low mound of Lasius flavus. They had already killed several of the smaller ants and were transporting them homeward for food. As we scanned the soil surface, peered under rocks, and broke apart small rotting tree branches, we were hard put to find more than a few square meters anywhere free of ants. In southern Finland they are in fact the premier predators, scavengers, and turners of soil. Exact censuses remain to be made, but it seems likely that ants make up 10% or more of the entire animal biomass of the Tvarminne Archipelago.

Two months earlier, in the company of Bert Hölldobler of the University of Würzburg, F. R. Germany (then at Harvard University, USA), I had walked and crawled on all fours over the floor of tropical forest at La Selva, Costa Rica. The ant fauna was radically different and much more diverse than in Finland. The dominant genus was Pheidole, as it is in most tropical localities. Within a 1.5 km² area along the Rio Sarapiqui, my students and I have collected 34 species of Pheidole, of which 16 are new to science. The total ant fauna in the sample area probably exceeds 150 species. That is a conservative estimate, because Neotropical forests have some of the richest faunas in the world. Manfred Verhaagh (personal communication) collected about 350 species belonging to 71 genera at the Rio Pachitea, Peru. That is the world record at the time of this writing. I identified 43 species belonging to 26 genera from a single leguminous tree at the Tambopata Reserve, Peru (Wilson, 1987a). From my experience in ground collecting in many Neotropical localities, I am sure that an equal number of still different species could have been found on the ground within a radius of a few tens of meters around the base of the tree. In other words, the fauna of the Tambopata Reserve is probably equivalent to that at the Rio Pachitea.

The abundance of ants at Neotropical localities, as opposed to species diversity, is comparable to that on the Tvarminne archipelago, and they occupy a great many more specialized niches as well. In addition to a large arboreal fauna, lacking in Finland, leaf-cutter ants raise fungi on newly harvested vegetation, Acanthognathus snare tiny collembolans with their long traplike mandibles, Prionopelta species hunt campodeid diplurans deep within decaying logs, and so on in seemingly endless detail. Roughly one out of five pieces of rotting wood contains a colony of ants, and others harbor colonies of termites. Ants absolutely dominate in the canopies of the tropical forests. In samples collected by Terry Erwin by insecticidal fogging in Peru, they make up about 70% of all of the insects (personal communication). In Brazilian Terra Firme forest near Manaus, Fittkau and Klinge (1973) found that ants and termites together compose a little less than 30% of the entire animal biomass. These organisms, along with the highly social stingless bees and polybiine wasps, make up an astonishing 80% of the entire insect biomass.

While few quantitative biomass measurements have been made elsewhere, my own strong impression is that social insects dominate the environment to a comparable degree in the great majority of land environments around the world. Very conservatively, they compose more than half the insect biomass. It is clear that social life has been enormously successful in the evolution of insects. When reef organisms and human beings are added, social life is ecologically preeminent among animals in general. This disproportion seems even greater when it is considered that only 13,000 species of highly social insects are known, out of the 750,000 species of the described insect fauna of the world.

In short, 2% of the known insect species of the world compose more than half the insect biomass. It is my impression that in another, still unquantified sense these organisms, and particularly the ants and termites, also occupy center stage in the terrestrial environment. They have pushed out solitary insects from the generally most favorable nest sites. The solitary forms occupy the more distant twigs, the very moist or dry or excessively crumbling pieces of wood, the surface of leaves - in short, the more remote and transient resting places. They are also typically either very small, or fast moving, or cleverly camouflaged, or heavily armored. At the risk of oversimplification, the picture I see is the following: social insects are at the ecological center, solitary insects at the periphery.

This then is the circumstance with which the social insects challenge our ingenuity: their attainment of a highly organized mode of colonial existence was rewarded by ecological dominance, leaving what must have been a deep imprint upon the evolution of the remainder of terrestrial life.
...
The most advanced social insects are referred to as eusocial, an evolutionary grade combining three traits: some form of care of the young, an overlap of two or more generations in the same nest site or bivouac, and the existence of a reproductive caste and a nonreproductive or “worker” caste. The eusocial grade has been attained by four principal groups of insects: the ants (order Hymenoptera, family Formicidae, 8,800 described species), the eusocial bees (order Hymenoptera, about 10 independent evolutionary lines within the families Apidae and Halictidae, perhaps 1,000 described species overall), the eusocial wasps (order Hymenoptera, mostly in the family Vespidae and a few in the family Sphecidae, 800 described species), and the termites (order Isoptera, 2,200 described species). The presence of a worker caste is by far the most important feature, because it enhances division of labor and a more complex society overall. The most familiar social insects, those with the striking social adaptations such as honeybees, mound-building termites, and army ants, all have strongly differentiated worker castes.

Evolutionary grades below the eusocial state abound in the insects. They are lumped together in the category “presocial,” in which one or two but not all three of the aforementioned eusocial traits are displayed. One of the most frequently remarked forms of presocial behavior is subsocial behavior, which simply means that the parents care for their own nymphs or larvae. For example, the females of many true bugs (order Hemiptera) remain with their young to protect them from predators and sometimes even to guide them from one feeding site to another. Some scolytid bark beetles not only guard their young but feed them fungi in specially constructed nursery chambers. In neither case, however, do the offspring later function as nonreproductive workers. Hence neither hemipterans nor scolytid beetles, remarkable as they are, qualify as social insects.

Ants, bees, and wasps, being members of the order Hymenoptera, have a life cycle marked by complete metamorphosis. To use the appropriate adjective, they are holometabolous. As illustrated in Figure 3, the individual passes through four major developmental stages radically different from one another: egg, larva, pupa, and adult.

The significance of this tortuous sequence is the difference it allows between the larva and the adult. The larva is a feeding machine, specialized for consumption and growth. It typically travels less, remaining sequestered in a nest site or other protected microenvironment. The adult, in contrast, is specialized for reproduction and in many cases dispersal as well. It often feeds on different food from that of the larva or no food at all, living on energy stores built up during the larval phase. Finally, the pupa is simply a quiescent stage during which tissues are reorganized from the larval to the adult form. The effect of complete metamorphosis on social evolution is profound. The larva can do little work and must be nurtured. Its dependence on the adults is increased by its limited mobility, since even if it were capable of independent feeding it could not travel to distant food sources. Consequently a large part of adult worker life is devoted to larval care, during which individuals search for food to give to the larvae, then feed, clean, and protect them.

~~Success And Dominance In Ecosystems: The Case Of The Social Insects -by- Edward O. Wilson