Monday, June 20, 2016

Day 310: Deep Simplicity



When other people hear scientists refer to ‘complex systems’, this sometimes creates a barrier, since to many people ‘complex’ means ‘complicated’, and there is an automatic assumption that if a system is complicated it will be difficult to understand. Neither assumption is necessarily correct. A complex system is really just a system that is made up of several simpler components interacting with one another. As we have seen, the great triumphs of science since the time of Galileo and Newton have largely been achieved by breaking complex systems down into their simple components and studying the way the simple components behave (if necessary, as a first approximation, taking the extra step of pretending that the components are even simpler than they really are). In the classic example of the success of this approach to understanding the world, much of chemistry can be understood in terms of a model in which the simple components are atoms, and for these purposes it scarcely matters what the nuclei of those atoms are composed of. Moving up a level, the laws which describe the behaviour of carbon dioxide gas trapped in a box can be understood in terms of roughly spherical molecules bouncing off one another and the walls of their container, and it scarcely matters that each of those molecules is made up of one carbon atom and two oxygen atoms linked together. Both systems are complex, in the scientific sense, but easy to understand. And the other key to understanding, as these examples highlight, is choosing the right simpler components to analyse; a good choice will give you a model with widespread applications, just as the atomic model applies to all of chemistry, not just to the chemistry of carbon and oxygen, and the ‘bouncing ball’ model of gases applies to all gases, not just to carbon dioxide.
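As a rough illustration of that ‘bouncing ball’ picture (a sketch of my own, not from the book), the toy Python simulation below treats the molecules as non-interacting points bouncing off the walls of a two-dimensional box, pretending, just as the passage describes, that the components are even simpler than they really are, and checks that the pressure measured from wall impacts agrees with the ideal-gas prediction. All names and parameter values are arbitrary choices for the demonstration.

    # Toy 'bouncing ball' gas: non-interacting point particles in a 2-D box.
    # We even ignore collisions between the particles themselves (an extra
    # simplification) and check that the measured wall pressure matches the
    # kinetic-theory prediction P = N * m * <v_x^2> / A.
    import numpy as np

    rng = np.random.default_rng(42)
    N, L, m, dt, steps = 2000, 1.0, 1.0, 1e-3, 10_000

    pos = rng.uniform(0.0, L, size=(N, 2))
    vel = rng.normal(0.0, 0.5, size=(N, 2))  # Gaussian (thermal) velocities

    impulse = 0.0
    for _ in range(steps):
        pos += vel * dt
        for axis in (0, 1):
            low, high = pos[:, axis] < 0.0, pos[:, axis] > L
            hit = low | high
            # Each bounce delivers momentum 2*m*|v| to the wall:
            impulse += 2.0 * m * np.abs(vel[hit, axis]).sum()
            pos[low, axis] = -pos[low, axis]             # reflect position...
            pos[high, axis] = 2.0 * L - pos[high, axis]
            vel[hit, axis] = -vel[hit, axis]             # ...and reverse velocity

    measured = impulse / (steps * dt) / (4.0 * L)   # force per unit wall length
    predicted = N * m * (vel ** 2).mean() / L ** 2  # 2-D ideal-gas law
    print(f"measured P = {measured:.1f}, ideal-gas P = {predicted:.1f}")

Despite throwing away everything about the molecules except their motion, the two numbers agree closely, which is exactly the point of the passage.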

At a more abstract level, the same underlying principle applies to what mathematicians choose to call complex numbers. The name has frightened off many a student, but complex numbers are really very simple, and contain only two components, scarcely justifying the use of the term ‘complex’ at all. The two components of a complex number are themselves everyday numbers, which are distinguished from each other because one of them is multiplied by a universal constant labelled i. So whereas an everyday number can be represented by a single letter (say, X), a complex number is represented by a pair of letters (say, A + iB). It happens that i is the square root of -1, so that i × i = -1, but that doesn’t really matter. What matters is that there is a fairly simple set of rules which tell you how to manipulate complex numbers – what happens when you multiply one complex number by another, or add two of them together, and so on. These rules really are simple – much simpler, for example, than the rules of chess. But using them opens up a whole new world of mathematics, which turns out to have widespread applications in physics, for example in describing the behaviour of alternating electric current and in the wave equations of quantum mechanics.
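Here is a minimal sketch (mine, not Gribbin’s) of those rules in Python, treating a complex number as nothing more than a pair of everyday numbers; the helper names c_add and c_mul are invented for the illustration.

    # A complex number A + iB is just the pair (A, B). These hypothetical
    # helpers spell out the 'simple rules' for combining two such pairs.

    def c_add(x, y):
        # (A + iB) + (C + iD) = (A + C) + i(B + D)
        a, b = x
        c, d = y
        return (a + c, b + d)

    def c_mul(x, y):
        # (A + iB) * (C + iD) = (AC - BD) + i(AD + BC), using i * i = -1
        a, b = x
        c, d = y
        return (a * c - b * d, a * d + b * c)

    print(c_add((3, 2), (1, -5)))  # (4, -3)
    print(c_mul((0, 1), (0, 1)))   # (-1, 0): confirms i * i = -1
    print(1j * 1j)                 # (-1+0j): Python's built-in complex type
                                   # applies exactly the same rules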

But there’s a more homely example of the simplicity of complexity. The two simplest ‘machines’ of all are the wheel and the lever. A toothed cogwheel, like the gearwheels of a racing bicycle, is in effect a combination of lever and wheel. A single wheel – even a single gearwheel – is not a complex object. But a racing bicycle, which is essentially just a collection of wheels and levers, is a complex object, within the scientific meaning of the term – even though its individual component parts, and the way they interact with one another, are easy to understand. And this highlights the other important feature of complexity, as the term is used in science today – the importance of the way things interact with one another. A heap of wheels and levers would not in itself be a complex system even if the heap consisted of all the pieces needed to make a racing bike. The simple pieces have to be connected together in the right way, so that they interact with one another to produce something that is greater than the sum of its parts. And that’s complexity, founded upon deep simplicity.

When scientists are confronted by complexity, their instinctive reaction is to try to understand it by looking at the appropriate simpler components and the way they interact with one another. Then they hope to find a simple law (or laws) which applies to the system they are studying. If all goes well, it will turn out that this law also applies to a wider sample of complex systems (as with the atomic model of chemistry, or the way the laws of cogwheels apply both to bicycles and chronometers), and that they have discovered a deep truth about the way the world works. The method has worked for over 300 years as a guide to the behaviour of systems close to equilibrium. Now it is being applied to dissipative systems on the edge of chaos – and what better terrestrial example could there be of a system in which large amounts of energy are dissipated than an earthquake?

One of the most natural questions to ask about earthquakes is how often earthquakes of different sizes occur. Apart from its intrinsic interest, this has great practical relevance if you live in an earthquake zone, or if you represent an insurance company trying to decide what premiums to charge for earthquake insurance. There are lots of ways in which earthquakes might be distributed through time. Most earthquakes might be very large, releasing lots of energy which then takes a long time to accumulate once again. Or they might all be small, releasing energy almost continuously, so that there is never enough to make a big ’quake. There could be some typical size for an earthquake, with both bigger and smaller events relatively rare (which is the way the heights of people are distributed, around some average value). Or they could be completely random. There is no point in guessing; the only way to find out is to look at all the records of earthquakes and add up how many of each size have occurred. Appropriately, the first person to do this was Charles Richter (1900–85), who introduced the eponymous scale now widely used to measure the magnitude of earthquakes.
The Richter scale is logarithmic, so that an increase of one unit on the scale corresponds to an increase in the amount of energy released by a factor of roughly 30; a magnitude 2 earthquake is 30 times as powerful as a magnitude 1 earthquake, a magnitude 3 earthquake is 30 times more powerful than a magnitude 2 earthquake (and therefore 900 times more powerful than a magnitude 1 earthquake), and so on. Although the name attached to the scale is Richter’s alone, he worked it out, at the beginning of the 1930s, with his colleague Beno Gutenberg (1889–1960), and in the middle of the 1950s the same team turned their attention to the investigation of the frequency of earthquakes of different sizes. The team looked at records of earthquakes worldwide, and combined them in ‘bins’ corresponding to steps of half a magnitude on the Richter scale – so all the earthquakes with magnitude between 5 and 5.5 went in one bin, all those between 5.5 and 6 in the next bin, and so on. Remembering that the Richter scale itself is logarithmic, in order to compare like with like they then took the logarithm of each of these numbers. When they plotted a graph showing the logarithm of the number of earthquakes in each bin in relation to the magnitude itself (a so-called ‘log-log plot’), they found that the points fell on a straight line. There are very many small earthquakes, very few large earthquakes, and the number in between lies, for any magnitude you choose, on the straight line joining those two extreme points. This means that the number of earthquakes of each magnitude obeys a power law – for every 1,000 earthquakes of magnitude 5 there are roughly 100 earthquakes of magnitude 6, 10 earthquakes of magnitude 7, and so on. This is now known as the Gutenberg-Richter law; it is a classic example of a simple law underlying what looks at first sight to be a complex system. But what exactly does it mean, and does it have any widespread applications?
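To see the Gutenberg-Richter law emerge, the sketch below (my own, not from the book) repeats the binning exercise on a simulated catalogue rather than real records; the synthetic magnitudes are drawn from an exponential distribution, which is what a power law in energy implies. With the so-called b-value set to 1, the fitted slope of the straight line should come out close to -1, matching the factor-of-ten drop per magnitude described above.

    # Gutenberg-Richter on synthetic data: bin simulated magnitudes in
    # half-magnitude steps and fit a straight line to log10(count) against
    # magnitude. The catalogue is simulated, not a real earthquake record.
    import numpy as np

    rng = np.random.default_rng(0)
    b = 1.0  # b = 1 gives ten times fewer quakes per unit of magnitude
    mags = 5.0 + rng.exponential(scale=1.0 / (b * np.log(10)), size=100_000)

    edges = np.arange(5.0, 8.5, 0.5)      # bins: 5-5.5, 5.5-6, ..., 7.5-8
    counts, _ = np.histogram(mags, bins=edges)
    centres = edges[:-1] + 0.25

    # Straight-line fit to the 'log-log plot'; the slope estimates -b:
    slope, intercept = np.polyfit(centres, np.log10(counts), 1)
    print(f"fitted slope = {slope:.2f} (Gutenberg-Richter predicts {-b:.0f})")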

First, it’s worth stressing just how powerful a law of nature this is. An earthquake of magnitude 8, a little smaller than the famous San Francisco earthquake of 1906, is 20 billion times more energetic than an earthquake of magnitude 1, which corresponds to the kind of tremor you feel indoors when a heavy lorry passes by in the street outside. Yet the same simple law applies across this vast range of energies. Clearly, it is telling us something fundamental about how the world works.
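As a quick check of that figure, using the factor-of-roughly-30-per-magnitude rule quoted above:

    # Seven magnitude steps separate magnitude 1 from magnitude 8:
    print(f"{30 ** (8 - 1):.2e}")  # 2.19e+10, i.e. roughly 20 billion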

~~Deep Simplicity: Chaos, Complexity and the Emergence of Life -by- John Gribbin
