# Without Claude Shannon, there would have been no Internet

This equation was published in the 1949 book The Mathematical Theory of Communication, co-written by Claude Shannon and Warren Weaver. An elegant way to work out how efficient a code could be, it turned "information" from a vague word related to how much someone knew about something into a precise mathematical unit that could be measured, manipulated and transmitted. It was the start of the science of "information theory", a set of ideas that has allowed us to build the internet, digital computers and telecommunications systems. When anyone talks about the information revolution of the last few decades, it is Shannon's idea of information that they are talking about.

Claude Shannon was a mathematician and electronic engineer working at Bell Labs in the US in the middle of the 20th century. His workplace was the celebrated research and development arm of the Bell Telephone Company, the US's main provider of telephone services until the 1980s when it was broken up because of its monopolistic position. During the second world war, Shannon worked on codes and methods of sending messages efficiently and securely over long distances, ideas that became the seeds for his information theory.

Before information theory, remote communication was done using analogue signals. Sending a message involved turning it into varying pulses of voltage along a wire, which could be measured at the other end and interpreted back into words. This is generally fine for short distances but, if you want to send something across an ocean, it becomes unusable. Every metre that an analogue electrical signal travels along a wire, it gets weaker and suffers more from random fluctuations, known as noise, in the materials around it. You could boost the signal at the outset, of course, but this will have the unwanted effect of also boosting the noise.

Information theory helped to get over this problem. In it, Shannon defined the unit of information – the smallest possible chunk, which cannot be divided any further – as the "bit" (short for binary digit), strings of which can be used to encode any message. The most widely used digital code in modern electronics is based around bits that can each have only one of two values: 0 or 1.

This simple idea immediately improves the quality of communications. Convert your message, letter by letter, into a code made from 0s and 1s, then send this long string of digits down a wire – every 0 represented by a brief low-voltage signal and every 1 represented by a brief burst of high voltage. These signals will, of course, suffer from the same problems as an analogue signal, namely weakening and noise. But the digital signal has an advantage: the 0s and 1s are such obviously different states that, even after deterioration, their original state can be reconstructed far down the wire. An additional way to keep the digital message clean is to read it, using electronic devices, at intervals along its route and resend a clean repeat.

Shannon showed the true power of these bits, however, by putting them into a mathematical framework. His equation defines a quantity, H, which is known as Shannon entropy and can be thought of as a measure of the information in a message, measured in bits.

In a message, the probability of a particular symbol (represented by "x") turning up is denoted by p(x). The right hand side of the equation above sums, over the full range of symbols that might turn up in a message, each symbol's probability weighted by the number of bits needed to represent that value of x, a term given by -log2 p(x). (A logarithm is the reverse process of raising something to a power – we say that the logarithm of 1000 to base 10 – written log10(1000) – is 3, because 10³ = 1000.)

A coin toss, for example, has two possible outcomes (or symbols) – x could be heads or tails. Each outcome has a 50% probability of occurring and, in this instance, p(heads) and p(tails) are each ½. Shannon's theory uses base 2 for its logarithms and log2(½) is -1. That gives us a total information content in flipping a coin, a value for H, of 1 bit. Once a coin toss has been completed, we have gained one bit of information or, rather, reduced our uncertainty by one bit.

A single character drawn from an alphabet of 27 equally likely symbols carries around 4.75 bits of information – that is, log2(27), or equivalently -log2(1/27) – because each of the 27 symbols turns up with a probability of 1/27. This is a basic description of a basic English alphabet (26 letters and a space), if each character were equally likely to turn up in a message. By this calculation, messages in English need bandwidth for storage or transmission equal to the number of characters multiplied by 4.75.

But we know that, in English, the characters do not appear equally often: a "u" usually follows a "q" and "e" is far more common than "z". Take these statistical details into account and it is possible to reduce the H value for English characters to less than one bit – useful if you want to speed up communications or take up less space on a hard disk.
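Shannon's formula is easy to check numerically. Here is a minimal Python sketch (the distributions are illustrative examples, not from the original article):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy: H = -sum of p(x) * log2 p(x), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 27 equally likely symbols (26 letters + space): about 4.75 bits per symbol
uniform = [1 / 27] * 27
print(round(shannon_entropy(uniform), 2))  # 4.75

# a skewed distribution needs fewer bits per symbol on average
skewed = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(skewed))  # 1.75
```

The skewed four-symbol source averages 1.75 bits per symbol, already below the 2 bits a naive fixed-length code would spend – the same effect that lets real English text be compressed.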

Information theory was created to find practical ways to make better, more efficient codes and find the limits on how fast computers could process digital signals. Every piece of digital information is the result of codes that have been examined and improved using Shannon's equation. It has provided the mathematical underpinning for increased data storage and compression – Zip files, MP3s and JPGs could not exist without it. And none of those high-definition videos online would have been possible without Shannon's mathematics.

["Claude Shannon" via Wikimedia Commons]

# What is Newton's second law of motion?

It explains force, whichever way it is happening

Isaac Newton's laws of motion were first set down in his Philosophiae Naturalis Principia Mathematica in 1687. The first law states that an object will stay at rest or move with a constant velocity, unless it is acted upon by an external force. The third is the well-known (if mildly misunderstood) idea that every action (force) has an equal but opposite reaction – if you push on a door, the door will push back against you.

The second law is the one that tells you how to calculate the value of a force. Force (measured in newtons) is one of the fundamental physical properties of a system and comes in many forms. You might feel it as a push or pull (a mechanical force); it is the value of your weight (the gravitational force of the Earth pulling on you); and it can be seen in the repulsion or attraction of magnets or electric charges (electromagnetic force). A force might be the result of any number of fundamental physical interactions between bits of matter, but Newton's second law allows you to work out how a force, when it is present, will affect the motion of an object.

In the form pictured, above, it says that force (F) is equal to the rate of change of momentum (p) with respect to time (t). The small "d"s are differential notation, another Newtonian invention that appears in countless physical equations and that allows you to mathematically predict how something will change as another related parameter is incrementally altered – in this case, time.

Momentum is the mass (kilograms) of an object multiplied by its velocity (metres per second). In most situations, the mass of something does not change as it moves so the equation can be simplified to mass (m) multiplied by the rate of change of velocity, which we know as acceleration (a). That gives us the more familiar school textbook version of the second law: F=ma.
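The textbook form fits in a couple of lines of Python (the masses and accelerations below are made-up examples, not from the article):

```python
def force_newtons(mass_kg, acceleration_ms2):
    """Newton's second law in its constant-mass form: F = m * a."""
    return mass_kg * acceleration_ms2

# weight is just F = ma with a set to g, the gravitational acceleration (~9.8 m/s²)
print(force_newtons(70.0, 9.8))    # a 70 kg person weighs about 686 N
print(force_newtons(1000.0, 2.0))  # about 2000 N to accelerate a 1000 kg car at 2 m/s²
```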

Like the rest of Newton's physics, the second law of motion holds up for a staggering array of everyday situations and is a workhorse in modern science and engineering. The way almost anything moves can be worked out using his laws of motion – how much force it will take to accelerate a train, whether a cannon ball will reach its target, how air and ocean currents move or whether a plane will fly are all applications of Newton's second law. He even used the laws of motion, combined with his universal law of gravitation, to explain why planets move the way they do.

Weight is a force, equal to an object's mass multiplied by the gravitational acceleration caused by the Earth (roughly 9.8 metres per second per second), in the direction of the centre of the planet. The reason you don't fall through the ground, of course, is explained by Newton's third law of motion, which says that the surface of the Earth is pushing up against your feet at a force equal but opposite to your weight.

A modified version of the second law applies when the mass of an object is changing, such as a rocket, which burns up fuel and becomes lighter as it climbs through the atmosphere.

We all know the second law in practice, if not in mathematics. You need to exert more force (and therefore more energy) to move a heavy grand piano than to slide a small stool across the floor. When you catch a fast-moving cricket ball, you know it will hurt less if you move your arm back as you catch it – by giving the moving ball more time to slow down, your hand has to exert less opposing force on the ball.
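The catching trick is just F = Δp/Δt in disguise. A short Python sketch with illustrative numbers (a cricket ball is about 0.16 kg; the catch times are invented):

```python
def average_force(mass_kg, speed_change_ms, time_s):
    """F = Δp/Δt: the same momentum change, spread over more time, needs less force."""
    return mass_kg * speed_change_ms / time_s

# a 0.16 kg cricket ball arriving at 30 m/s and brought to rest:
print(average_force(0.16, 30.0, 0.01))  # a rigid catch over 10 ms: about 480 N
print(average_force(0.16, 30.0, 0.1))   # "soft hands" over 100 ms: about 48 N
```

Stretching the stopping time by a factor of ten cuts the average force by the same factor, which is exactly what moving your arm back achieves.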

The cricket ball example demonstrates that forces not only have a size but act in a particular direction. Forces belong to a category of physical properties, which includes momentum and velocity, known as vectors. These contrast with scalars, which have a size but no direction, for example temperature or mass.

The F in Newton's second law refers to the net force acting on an object. Working out what happens to an object that has several forces acting on it, therefore, requires you to take account of both the directions and sizes of each force. Two forces might have the same sizes but, if they are pointed directly opposite one another, they will cancel to zero.

A game of tug-of-war is a good way to think about this. When two teams are pulling in opposite directions, the movement of the rope (as calculated by Newton's second law) will be determined by the net force on the rope. The size of that net force is the difference in the sizes of the forces being exerted by the two teams. The direction of the net force will be in the direction of whichever team is pulling harder.
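On a single axis the bookkeeping is simple: give one direction a positive sign and the other a negative sign, then add. A Python sketch with invented team strengths:

```python
def net_force(forces_newtons):
    """Sum signed 1D forces: positive pulls one way, negative pulls the other."""
    return sum(forces_newtons)

# tug of war: one team pulls with 900 N, the other with 750 N the opposite way
f = net_force([900.0, -750.0])
print(f)         # 150.0 N, in the direction of the stronger team
print(f / 60.0)  # by F = ma, a 60 kg rope-plus-load accelerates at 2.5 m/s²
```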

To describe atoms, and even smaller things, physicists use versions of force and momentum in the equations that include quantum-mechanical descriptions of time as well as space. At this scale, forces are the mathematical by-products arising when fundamental particles of matter, such as electrons and quarks, exchange particles such as photons, gluons or W or Z particles, that "carry" forces and are collectively known as gauge bosons.

Newton's second law works as a way to describe the motion of everything in a quantum mechanical system as long as the particles are not moving near the speed of light.

When an object is moving close to the speed of light, we get into the realm of special relativity, which tells us that the mass of an object will increase as it moves faster. You need to take this into account when calculating forces at these speeds.

Indeed, most of Newton's classical physics needs to be modified in extreme situations – the second law is not accurate when immense gravitational forces are present, around a black hole or in the context of the huge masses of entire galaxies for example, where general relativity takes over as the best way to describe the movement within a system.

guardian.co.uk © Guardian News and Media 2014

# Why do bubbles expand when you heat them?

Think of a cake. When you put it in the oven, it starts off at a particular volume and then, an hour later, it has risen to perhaps double its size. It is obvious what has happened – the air bubbles that you have carefully folded into the mixture during the preparation and the little bubbles of carbon dioxide created by the baking powder have expanded as they are heated in the oven, taking the rest of the cake with them. All this time, the pressure of the air inside those bubbles has stayed the same (you know that because cakes don't usually explode when you slice them after cooking).

It is an intuitive idea that bubbles of air will expand if you heat them, as long as the pressure remains constant. It is also a fundamental component of the ideal gas laws, first written down in the early 19th century by the French natural philosopher Joseph Louis Gay-Lussac. He was working on the relationship between the volume and temperature of a gas, building on work carried out several decades earlier by the inventor and mathematician Jacques Charles who had shown that volume and temperature were proportional – heat a gas and its volume will increase and vice versa, as long as the pressure remains constant.

The first piece of the ideal gas puzzle came in the 17th century. Robert Boyle had been carrying out experiments with air, which he proposed was full of particles connected by tiny invisible springs. He found that the pressure of a gas had an inverse relationship to its volume. If the volume doubled, its pressure halved and vice versa, when the temperature is held constant.

As well as the volume/temperature relationship, Gay-Lussac extended the work and experiments, from a century earlier, of the inventor Guillaume Amontons to show that, in a fixed volume of gas, pressure was directly proportional to absolute temperature.

With the three relationships between pressure, volume and temperature measured and written down, French engineer Benoît Paul Émile Clapeyron, one of the founding fathers of thermodynamics, combined the work of Boyle, Charles and Gay‑Lussac into the combined ideal gas equation above, in 1834.

In short, the ideal gas law shows the relationship between the four properties of a gas that you need to know in order to predict how it will behave: pressure, temperature, volume and the number of particles of gas (ie atoms or molecules) present. It is "ideal" because the law is a model that assumes the particles are infinitely small points and do not interact with each other. All collisions between ideal gas particles are elastic, which means they do not lose any energy when they rebound off each other.

In practice, real gas particles do have measurable sizes and sometimes attract or repel each other. Nevertheless, the ideal gas equation is a highly successful way to understand how gases shift and change depending on their surroundings.

The law states that the product of the pressure (P) and volume (V) of a gas is directly proportional to its absolute temperature (T, measured in kelvin). On the right-hand side of the equation is the number of moles of gas present (n) in the system, where a mole is equal to 6.02214129×10²³ particles, a number known as the Avogadro constant. Also on the right is the universal gas constant (R), equal to 8.3145 joules per mole kelvin.
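The relationship can be tried out directly. A Python sketch using a standard textbook case (one mole at 0 °C in 22.4 litres), not a worked example from the article:

```python
R = 8.3145  # universal gas constant, J/(mol·K)

def pressure_pa(n_moles, temp_k, volume_m3):
    """Ideal gas law rearranged for pressure: P = nRT / V."""
    return n_moles * R * temp_k / volume_m3

# one mole at 0 °C (273.15 K) in 22.4 litres (0.0224 m³)
print(pressure_pa(1.0, 273.15, 0.0224))  # close to atmospheric pressure, 101,325 Pa
```

Halve the volume at the same temperature and the pressure doubles (Boyle); double the absolute temperature at the same volume and the pressure doubles (Amontons/Gay-Lussac) – both drop straight out of the same formula.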

The ideal gas equation can be used to work out how much the air inside a cake will expand (though it's unlikely to be used for that), but it also applies to plenty of other situations. Ever noticed a bicycle pump get hot when you fill a tyre with air? That's because you're quickly putting energy into the air inside the pump by pushing the piston and reducing the volume at the same time; the molecules bounce around faster in a smaller space and the gas heats up.

Refrigeration works in the opposite way to the bicycle pump. If you release a gas very quickly from high pressure (inside a storage tank, say) to a region of lower pressure (outside air at atmospheric pressure), then the gas will expand. The energy required to do this will come from the molecules of gas themselves and so the overall temperature of the gas will drop. You can see this in action when pressurised carbon dioxide inside a fire extinguisher turns instantly into a frost when it is released through the nozzle and on to a fire. More prosaically, the same mechanism keeps your food cold in a refrigerator.

The relationships between these so-called "state properties" of a gas make sense intuitively.

But the ideal gas law can also be derived mathematically, from first principles, by imagining particles bouncing around a box. About two decades after Clapeyron wrote it down, August Krönig and Rudolf Clausius independently looked at the statistical distribution of speeds (and hence energy) among the particles to work out how pressure, volume and temperature related to each other in a gas – an approach known as statistical mechanics. In essence, this meant looking at the properties of huge numbers of tiny components or particles inside a system in order to calculate the macroscopic results. In other words, a box containing a gas will have trillions of particles flying around inside it in random directions, bouncing off each other and off the walls.

In this model, the kinetic energy of the particles is proportional to the temperature of the gas, and the force of the particles hitting the sides of the box translates into the pressure of the gas.

In this "kinetic" version of the ideal gas law, the right-hand side is written slightly differently. Instead of "nR" are terms for the number of molecules in the gas and the Boltzmann constant (k), equal to 1.38065×10⁻²³ joules per kelvin.
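That the two forms agree is easy to verify numerically, because the Boltzmann constant is just the gas constant shared out per particle, k = R / N_A. A quick Python check using the constants quoted above:

```python
N_A = 6.02214129e23  # Avogadro constant, particles per mole
R = 8.3145           # universal gas constant, J/(mol·K)
k = 1.38065e-23      # Boltzmann constant, J/K

n = 2.0       # an amount of gas counted in moles
N = n * N_A   # the same amount counted in individual molecules

# nRT and NkT describe the same quantity, so nR and Nk should match
relative_difference = abs(n * R - N * k) / (n * R)
print(relative_difference)  # tiny: the two versions of the law agree
```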

The two versions of the equation describe identical things. Whether it is cakes, bicycle pumps, refrigerators or even the behaviour of stars (which are, in essence, just clouds of hydrogen gas), you can use these simple relations to work out what the gases are doing.

["Boiling Water In A Pot" on Shutterstock]

# What violins have in common with the sea – the wave principle

You're reading these words because light waves are bouncing off the letters on the page and into your eyes. The sounds of the rustling paper or beeps of your computer reach your ear via compression waves travelling through the air. Waves race across the surface of our seas and oceans and earthquakes send waves coursing through the fabric of the Earth.

As different as they all seem, all of these waves have something in common – they are all oscillations that carry energy from one place to another. The physical manifestation of a wave is familiar – a material (water, metal, air etc) deforms back and forth around a fixed point.

Think of the ripples on the surface of a pond when you throw in a stone. Looking from above, circular waves radiate out from the point where the stone hits the water, as the energy of the collision makes water molecules around it move up and down in unison. The resulting wave is called "transverse" because it travels out from the point the stone sank, while the molecules themselves move in the perpendicular direction. A vertical cross-section of the wave would look like a familiar sine curve.

Sound waves are known as "longitudinal" because the medium in which they travel – air, water or whatever else – vibrates in the same direction as the wave itself. Loudspeakers, for example, move air molecules back and forth in the same direction as the vibration of the speaker cone.

In both cases, the water or air molecules remain, largely, in the same place as they started, as the wave travels through the material. They are not shifted, en masse, in the direction of the wave.

The one-dimensional wave equation (pictured) describes how much any material is displaced, over time, as the wave proceeds. The curly "d" symbols scattered through the equation are mathematical functions known as partial differentials, a way to measure the rate of change of a specific property of the system with respect to another.

On the left is the expression for how fast the material is deforming (y) in space (x) at any given instant; on the right is a description for how fast the material is changing in time (t) at that same instant. Also on the right is the velocity of the wave (v). For a wave moving across the surface of a sea, the equation relates how fast a tiny piece of water is physically deforming, at any particular instant, in space (on the left) and time (on the right).
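One way to see what the equation is saying is to replace the partial differentials with finite differences and step a plucked string forward in time. This is an illustrative Python sketch with made-up grid values, not anything from the article:

```python
# 1D wave equation, ∂²y/∂t² = v² ∂²y/∂x², discretised on a string fixed at both ends
v = 1.0                        # wave speed
nx, dx, dt = 101, 0.01, 0.005  # grid size and spacing; v*dt/dx <= 1 keeps the scheme stable
c2 = (v * dt / dx) ** 2

# initial shape: a small triangular "pluck" in the middle, starting at rest
y_prev = [max(0.0, 0.05 - 0.5 * abs(i * dx - 0.5)) for i in range(nx)]
y = y_prev[:]  # zero initial velocity: y(t=0) equals y(t=-dt)

for _ in range(200):
    y_next = [0.0] * nx  # the fixed ends stay at zero displacement
    for i in range(1, nx - 1):
        # the acceleration of each tiny segment is proportional to the local curvature
        y_next[i] = 2 * y[i] - y_prev[i] + c2 * (y[i + 1] - 2 * y[i] + y[i - 1])
    y_prev, y = y, y_next
```

Run it and the bump splits into two pulses travelling in opposite directions and reflecting off the fixed ends – the behaviour the equation encodes.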

The wave equation had a long genesis, with scientists from many fields circling around its mathematics across the centuries. Among many others, Daniel Bernoulli, Jean le Rond d'Alembert, Leonhard Euler, and Joseph-Louis Lagrange realised that there was a similarity in the maths of how to describe waves in strings, across surfaces and through solids and fluids.

Bernoulli, a Swiss mathematician, began by trying to understand how a violin string made sound. In the 1720s, he worked out the maths of a string as it vibrated by imagining the string was composed of a huge number of tiny masses, all connected with springs. Applying Isaac Newton's laws of motion for the individual masses showed him that the simplest shape for a vibrating violin string, fixed at each end, would be the gentle arc of a single sine curve. A violin string (or a string on any instrument, for that matter) vibrates in transverse waves along its length, which creates longitudinal waves in the surrounding air, which our ears interpret as sound.

Some decades later, mathematician Jean Le Rond d'Alembert generalised the string problem to write down the wave equation, in which he found that the acceleration of any segment of the string was proportional to the tension acting on it. The waves created by different tensions of the string produce different notes – think of how the sound from a plucked string can be changed as it is tightened or loosened.

The wave equation started off describing movement of physical stuff but it is much more powerful than that. Mathematically, it can also describe, for example, the movement of heat or electrical potential, by changing "y" from describing the deformation of a substance to the change in the energy of a system.

Not all waves need to travel through a material. By 1864, the physicist James Clerk Maxwell had derived his four famous equations for the interactions of the electric and magnetic fields in a vacuum around charged particles. He noticed that the expressions could be combined to form wave equations featuring the strength of the electric or magnetic fields in the place of "y". And the speed of these waves (the "v" term in the equation) was equal to the speed of light.
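Maxwell's observation can be reproduced with the measured constants of the vacuum. A short Python check (the constants are standard reference values, not quoted in the article):

```python
import math

mu0 = 4 * math.pi * 1e-7  # magnetic permeability of the vacuum, H/m
eps0 = 8.854187817e-12    # electric permittivity of the vacuum, F/m

# the wave speed that falls out of Maxwell's equations: v = 1 / sqrt(μ₀ε₀)
v = 1 / math.sqrt(mu0 * eps0)
print(round(v))  # close to 299,792,458 m/s: the speed of light
```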

This simple mathematical re-arrangement was one of the most significant discoveries in the history of physics, showing that light must be an electromagnetic wave that travelled in the vacuum.

Electromagnetic waves, then, are transverse oscillations of the electric and magnetic fields. Discovering their wave-like nature led to the prediction that there must be light of different wavelengths, the distance between successive peaks and troughs of the sine curve. It was soon discovered that wavelengths longer than visible light include microwaves, infrared and radio waves; shorter wavelengths include ultraviolet light, X-rays and gamma rays.

The wave equation has also proved useful in understanding one of the strangest, but most important, physical ideas in the past century: quantum mechanics. In this description of the world at the level of atoms and smaller, particles of matter can be described as waves using Erwin Schrödinger's eponymous equation.

His adaptation of the wave equation describes electrons, for example, not as a well-defined object in space but as quantum waves for which it is only possible to describe probabilities for position, momentum or other basic properties. Using the Schrödinger wave equation, interactions between fundamental particles can be modelled as if they were waves that interfere with each other, instead of the classical description of fundamental particles, which has them hitting each other like billiard balls.

Everything that happens in our world, happens because energy moves from one place to another. The wave equation is a mathematical way to describe how that energy flows.


# Why you can't travel at the speed of light: A short history of Einstein's theory of relativity

Albert Einstein is famous for many things, not least his theories of relativity. The first, the special theory of relativity, was the one that began the physicist's reputation for tearing apart the classical worldview that had come before. Special relativity, a way of relating the motion of objects in the universe, led scientists to re-evaluate their assumptions about things as fundamental as time and space. And it led to important revelations about the relationship between energy and matter.

Special relativity was published by Einstein in 1905, in a paper titled "On the Electrodynamics of Moving Bodies". He came to it after picking up on a conflict he noticed between the equations for electricity and magnetism, which the physicist James Clerk Maxwell had developed, and Isaac Newton's more established laws of motion.

Light, according to Maxwell, was a vibration in the electromagnetic field and it travelled at a constant speed in a vacuum. More than 100 years earlier, Newton had set down his laws of motion and, together with ideas from Galileo Galilei, these showed how the speed of an object would differ depending on who was measuring it and how they were moving relative to the object. A ball you are holding will seem still to you, even when you're in a moving car. But that ball will seem to be moving to anyone standing on the pavement.

But there was a problem in applying Newton's laws of motion to light. In Maxwell's equations, the speed of electromagnetic waves is a constant defined by the properties of the material through which the waves move. There is nothing in there that allows the speed of these waves to be different for different people depending on how they were moving relative to each other. Which is bizarre, if you think about it.

Imagine someone sitting in a stationary train, throwing a ball from where he's sitting to the opposite wall, a few metres further down the train from him. You, standing on the station platform, measure the speed of the ball at the same value as the person on the train.

Now the train starts to move (in the direction of the ball), and you again measure the speed of the ball. You would rightly calculate it as higher – the initial speed (ie, when the train was at rest) plus the forward speed of the train. On the train, meanwhile, the game-player will notice nothing different. Your two values for the speed of the ball will be different; both correct for your frames of reference.

Replace the ball with light and this calculation goes awry. If the person on the train were shining a light at the opposite wall and measured the speed of the particles of light (photons), you and the passenger would both find that the photons had the same speed at all times. In all cases, the speed of the photons would stay at just under 300,000 kilometres per second, as Maxwell's equations say they should.

Einstein took this idea – the invariance of the speed of light – as one of his two postulates for the special theory of relativity. The other postulate was that the laws of physics are the same wherever you are, whether on a plane or standing on a country road. But to keep the speed of light constant at all times and for all observers, in special relativity, space and time become stretchy and variable. Time is not absolute, for example. A moving clock ticks more slowly than a stationary one. Travel at the speed of light and, theoretically, the clock would stop altogether.

How much the time dilates can be calculated by the two equations above. On the right, Δt is the time interval between two events as measured by the person they affect. (In our example above, this would be the person in the train.) On the left, Δt' is the time interval between the same two events but measured by an outside observer in a separate frame of reference (the person on the platform). These two times are related by the Lorentz factor (γ), which in this example is a term that takes into account the velocity (v) of the train relative to the station platform, which is "at rest". In this expression, c is a constant equal to the speed of light in a vacuum.
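The Lorentz factor can be evaluated directly. A small Python sketch (the speeds are chosen for illustration):

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_factor(v):
    """γ = 1 / sqrt(1 - v²/c²), defined for speeds below c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# an airliner at ~250 m/s: γ is indistinguishable from 1 in everyday life
print(lorentz_factor(250.0))

# at 90% of light speed, γ ≈ 2.29: a 1 s interval on the train is measured
# as about 2.29 s on the platform, and a moving metre rule measures ~0.44 m
gamma = lorentz_factor(0.9 * C)
print(gamma, 1 / gamma)
```

Note how γ only departs noticeably from 1 at a large fraction of c, which is why none of this shows up at everyday speeds.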

The lengths of moving objects also shrink in the direction in which they move. Get to the speed of light (not really possible, but imagine if you could for a moment) and the object's length would shrink to zero.

The contracted length of a moving object relative to a stationary one can be calculated by dividing the proper length by the Lorentz factor.

It is important to note that if you were the person moving faster and faster, you would not notice anything: time would tick normally for you and you would not be squashed in length. But anyone watching you from the celestial station platform would be able to measure the differences, as calculated from the Lorentz factor. However, for everyday objects and everyday speeds, the Lorentz factor will be close to 1 – it is only at speeds close to that of light that the relativistic effects need serious attention.

Another feature that emerges from special relativity is that, as something speeds up, its mass increases compared with its mass at rest, with the mass of the moving object determined by multiplying its rest mass by the Lorentz factor. This increase in relativistic mass makes every extra unit of energy you put into speeding up the object less effective at making it actually move faster.

As the speed of the object increases and starts to reach appreciable fractions of the speed of light (c), the portion of energy going into making the object more massive gets bigger and bigger.

This explains why nothing can travel faster than light – at or near light speed, any extra energy you put into an object does not make it move faster but just increases its mass. Mass and energy are the same thing – this is a profoundly important result. But that is another story.


# Auld Lang Syne in the Antarctic as stranded adventurers await rescue

In the afternoon on New Year's Eve, three dozen passengers from the MV Akademik Shokalskiy walked down the gangplank and stepped on to one of the ice floes that had trapped the ship since Christmas Day.

They walked, carefully, along a pathway marked by small flags until they reached a relatively flat area of the ice covered in fresh snow. They formed a chain at one side, linked arms and proceeded to stomp, in formation, across the snow, compressing the powder under their feet. This is how you make a helipad in Antarctica.

Minutes before, we had been told that the Australian icebreaker, the Aurora Australis, had failed to get through. This was the second icebreaker that hadn't made it – the Chinese ship, the Xue Long, got some of the way through the ice on Friday last week before turning back. The only option left is to evacuate by helicopter.

Ben Maddison, a historian and author on board the Shokalskiy with the Australasian Antarctic Expedition (AAE), had led the passengers on to the ice near the ship.

"I wanted people to participate in their own rescue, to create the conditions for the helicopter to land, a moment of activity so we're not so passive in the face of events," he said.

With arms linked, the 30 or so people sang songs – including a festive Auld Lang Syne – as they tramped down the soft snow for their rescue helicopter to land.

Whenever anyone's feet fell a little too deeply through the uncertain surface, the whole line stopped to help them out, and then carried on across the field.

"It was like a choir – we were a big Antarctic choir in the middle of a rainstorm," said Maddison.

The Shokalskiy was heading east around Commonwealth Bay, to the Mertz glacier, after a brief visit to the huts built a century ago by the scientist and Antarctic explorer, Douglas Mawson.

Just before Christmas Day, strong south-easterly winds began pushing the sea ice towards the Antarctic coastline, quickly forming a dense mass around the ship and pinning us in place.

When the captain issued the call for help, on Christmas morning, the ship was only two nautical miles from open water. Seven days later, after blizzards and more strong winds, the sea is 20 nautical miles away. This barrier of ice, now up to 5 metres deep in parts, has proved impossible to penetrate so far.

When we first got trapped, we were just over two weeks into our month-long expedition, from Bluff in New Zealand to East Antarctica, following in the spirit of Mawson, leader of the original AAE of 1911, repeating his wildlife, ocean and weather observations to build a scientific picture of how this part of the world had changed in 100 years.

The AAE was set up as a mix of adventure and science, of researchers alongside paying members of the public with an interest in climate and zoology, who wanted to venture where few others do.

Mary Regan, a Sydney-based consultant, holds a deep interest in the heroic age of Antarctic exploration – the time of Scott, Shackleton and Roald Amundsen. She also wanted to help measure climate change.

The moment she first saw the domed ice cap of Antarctica, during a visit to the Hodgeman Islands last week, she said it took her breath away.

"I thought, I'm really here. It was so majestic, so grand. There was that beauty of how we were there in this beautiful but tough, cruel environment. It grabbed me. I had tears in my eyes."

The ship's misfortune has not dampened her enthusiasm. Everyone is on board through some sort of love for Antarctica and not, she said, to tick off boxes. They wanted to experience the continent. Changes of plan, unexpected situations, these were all part of the adventure, she said.

Sean Borkovic, who recently completed a master's in environmental management in Sydney, said he had always wanted to sail the Southern Ocean.

He recalled the Shokalskiy's first approach to the frozen continent, almost two weeks ago, after several days of rough seas. We had finally made it to calm waters and, on a clear, sunny day, we approached the Antarctic landmass through a corridor of enormous, pure-white icebergs.

"The sunshine was a surprise," said Borkovic. "But the stark nature was everything and more than I imagined."

Maddison has visited the frozen continent 10 times but each time it has surprised him. "I feel like I've only been touching the surface – I want to come back again and touch it in a deeper way."

During an excursion two weeks ago we all got into Zodiacs – tough inflatable rubber dinghies – and passed close to a decaying iceberg. We were with Maddison on the lookout for leopard seals.

The morning had started off grey and cold but the sun came out. Maddison took his boat and passengers far from the hum of ship's engines, far from the other Zodiacs and well beyond the range of everyone's radios.

"The ice floes were separated by big pools of water that were so still that they were like mirrors," he said. "We switched off the engines and sat there, in complete silence, in between these glistening mirrors."

For 12-year-old Robbie Turney, who has been begging his father, AAE expedition leader, Chris Turney, to visit Antarctica for years, it has been the trip of a lifetime.

"I've always heard stories of Antarctica in winter, I was expecting 80mph winds, struggling to stand," he said. "It was different, it was warmer. If you're walking or building you can do it in shorts and sleeves. It's beautiful and white. It's better than I was expecting."

Last week, whenever he has been allowed to venture on to the lumpy, sludgy ice floes around us, Turney has taken charge of constructing snow slides and snowmen.

There have been trips on to the ice – blizzards permitting – for people to contemplate the stillness, watch penguins and investigate this desolate landscape that appeared out of nowhere.

"It's this week of being beset in the ice that people will remember and people will be asked about," said Maddison.

"A lot of people can't remember what we did before this and this is what they'll be asked about."

As we have watched the weather and tracked the progress of the Chinese and Australian icebreakers, everyone has kept their spirits high and fingers crossed. It would be untrue to say no one has felt deflated, or worried.

But each turn of events has been met with, as AAE co-leader Greg Mortimer put it, an adventurous spirit. And we have held on to the fervent hope that our ship would somehow be freed, to sail back to New Zealand under our own steam.

Mortimer has told us that that was unlikely to happen. Instead, we needed to pack our bags and be ready to be evacuated by helicopter to the Aurora Australis, via the Xue Long, when the weather cleared.

From there we would sail to Australia, via the Australian Antarctic base, Casey. We would be several weeks late home but what a story we would have to tell.

Perhaps it was the isolation as much as the personalities, but this was a group of people who had grown strong bonds. Every meal this past month had been spent together. Films, games and countless trips on land and ice were shared in the same small groups.

In the past week, stuck in the ice, the AAE members each took turns leading lessons for others in everything from salsa dancing to yoga.

On New Year's Eve, as the clock counted down to midnight, a hastily arranged choir belted out a song, written by AAE member and Australian Green party senator-elect Janet Rice. The song recounted the story of the Shokalskiy and the icebreakers. "Bloody great shame we're still stuck here," was the refrain.

"You often feel very close to people on an Antarctic vessel, and that's as much to do with the place you're in and things you're doing, as the bonds you actually have," said Maddison. "Because you're remote and isolated, you huddle together more."

With departure almost imminent, Maddison said he was emotional about leaving the Shokalskiy and, in particular, the spot where the ship had been stranded. The past few days, he said, had satisfied an enormous craving he had held for years.

"I've been through this body of water many times and always wanted to stop in it," he said. "And I have, accidentally. I've always wanted the time to contemplate and notice it in a way I don't have while in a ship."

This had been Regan's first trip to the continent but, like Maddison, her experience is not yet complete. "It's met my expectations but not satisfied them. I caught myself saying the other day, 'Next time I come to Antarctica …'"

Borkovic said he had been humbled by what he had seen in the past few weeks. "There's a quality to it that I expected but couldn't appreciate until I was here," he said. "It is such an extreme environment, the last untouched wilderness. I'll remember this forever."

[Image via AFP]

# Scientists retracing the footsteps of 1911 Antarctic expedition trapped in ice

Icebreaker ships go to help MV Akademik Shokalskiy after captain issues distress call

A team of scientists and members of the public who have been retracing the footsteps of the Australasian Antarctic Expedition (AAE) of 1911 have become trapped in heavy ice a few miles from the coast of Antarctica.

Passengers aboard the ship, the MV Akademik Shokalskiy, were informed on Christmas morning that the captain had issued a distress call to the Maritime Service Authority based in Falmouth in the UK earlier in the day. Three nearby icebreaker ships have been notified of the Shokalskiy's situation and are on their way to help.

The nearest ship, the Chinese Xue Long (Snow Dragon), will take just over a day to reach the Shokalskiy's position, around 1,500 nautical miles from Hobart in Tasmania. A French ship called the Astrolabe, and sent out from the nearest Antarctic base, Dumont D'Urville, could arrive around the same time. The furthest ship, also on its way, is the Australian icebreaker, Aurora Australis.

"The ship is in no danger," said Chris Turney. "We're currently in heavy ice and we need help to get out. It's frustrating – we're only two miles from open water. Everyone is well on board and morale is high. We've had a fantastic Christmas and the science programme has been continuing while we're stuck in position. The results are looking really exciting. We're very fortunate the Chinese are in the area, passing relatively close by."

The Snow Dragon is a 166-metre-long icebreaker, cruising towards the Shokalskiy at 14.5 knots.

The Russian-built Shokalskiy left the port of Bluff in New Zealand on 8 December with 48 passengers and 20 crew members to follow in the footsteps of the great Antarctic explorer and scientist Douglas Mawson. Led by the climate scientist Chris Turney of the University of New South Wales, the ship has been sailing through the Southern Ocean, repeating and extending many of Mawson's wildlife and weather observations in order to build a picture of how this part of the world has changed in the past 100 years.

The expedition had already reached the fast ice off Commonwealth Bay, carrying out measurements of the Southern Ocean along the way. A small team of scientists and conservationists also reached Mawson's Huts at Cape Denison on Thursday last week, 40 miles (65km) across the ice from where the ship was anchored. The expedition was heading east on Tuesday to spend a day at the Mertz glacier when it became trapped among thick ice floes near Stillwell Island, off Cape de la Motte.

A spokeswoman for the Australian Maritime Safety Authority, which is co-ordinating the rescue, said: "It's in quite a remote part of the world. But we have everyone safe. The vessel isn't in any immediate danger."

Passengers were kept updated throughout the day about the ship's situation. When the ship is free from the ice, expected by the weekend, this modern incarnation of the AAE will continue on to the Southern Ocean and will return to Bluff via a stop on Macquarie Island to carry out a short programme of wildlife, oceanography and climate research.

"We can't wait to get to Macquarie Island," said Turney. "It was a cornerstone of the original AAE and we're keen to finish the research programme there."

guardian.co.uk © Guardian News and Media 2013


# Scientists will re-trace 1913 journey of legendary Antarctic explorer Douglas Mawson

Geologist almost lost his life mapping unknown Antarctic regions in 'the Edwardian equivalent of space travel'

When Douglas Mawson plodded into base camp at Commonwealth Bay in Antarctica in February 1913 his fellow explorers barely recognised him. The geologist was in appalling physical shape after a harrowing journey into the Antarctic interior during which two of his fellow explorers had died. By the time his ship, the SY Aurora, arrived in December 1913 to take his team home, they had spent more than two years on the frozen continent – a whole year longer than planned.

Mawson's was one of the major expeditions during what has become known as the "Heroic Age" of Antarctic exploration of a century ago. Unlike his more well-known contemporaries Ernest Shackleton, Roald Amundsen and Robert Falcon Scott, he had no interest in racing to the South Pole, preferring to focus on scientific research. Two-thirds of his crew were scientists engaged in geological, marine and wildlife research and their measurements, carefully made in the face of tragic losses and horrendous conditions, are some of the most valuable scientific data in existence.

This Sunday, scientists will begin a month-long expedition to re-trace Mawson's journey and examine how the eastern Antarctic, one of the most pristine, remote and untouched parts of the world's surface, has responded after a hundred years of climate changes.

"They collected a wealth of scientific data on this entirely new continent," said Prof Chris Turney, a climate scientist at the University of New South Wales, Australia, and leader of the Australasian Antarctic Expedition 2013. "As a result it provides this incredibly good baseline – we're going to repeat the measurements and see how much has changed over the last century."

The Guardian is the only newspaper travelling with Turney's expedition, and it will be possible to follow the Australasian Antarctic Expedition 2013, as it happens, on the Guardian's "Antarctica Live" blog. Mawson sent regular wireless messages from Antarctica for the first time in his original 1911 expedition; with the help of satellite connections, our website will have, for the first time, daily updates from every stage of the expedition, sent direct from the field.

At the start of the 20th century, atlases printed large blank spaces in the bottom third of their southern hemispheres, stamped with the legend "unexplored regions". No one knew what was at the bottom of the world, apart from some sections of coastline that had been spotted from ships. Whether these were islands or parts of a bigger continent was unknown.

After picking up the Antarctic bug on Shackleton's Nimrod expedition in 1907, Mawson turned down an invitation from Scott to join the fateful 1910 Terra Nova expedition and decided to organise his own. By 1911, he had raised the necessary funds – tens of millions of dollars in today's money – chartered the Scottish-built SY Aurora, borrowed some dogs from Amundsen and set off from Hobart in Tasmania. "Mawson wanted to know what lay south of Australia," said Turney. "It was the Edwardian equivalent of space travel: they were off the map."

Mawson was an innovator – on board Aurora was the first aeroplane to be taken to the continent, which he hoped to use for reconnaissance, and he was the first to set up wireless relay stations on Macquarie Island in the South West Pacific, around mid-way between New Zealand and Antarctica, to send back weather reports every day. "It immediately improved the forecast across that region. So much so that the Bureau of Meteorology maintained the station after the expedition had finally returned," says Turney. "These things had immediate impact for people living back home."

A century later, Turney's scientists will set off from Bluff at the southernmost tip of New Zealand but make for the same place as Mawson: Commonwealth Bay in eastern Antarctica. Geological features make the area home to the windiest place on earth – the wind averages around 70mph in the summer and reaches more like 200mph in the dark winters.

The 21st century expedition will conduct environmental research too. "We're heading towards East Antarctica in an area that's traditionally been thought of as very stable – you can do almost anything to it, environmentally and climatically, and it will just sit there. But in the last few years we're realising that that's clearly not the case. Parts of it are very vulnerable," says Turney. "The ice thickness there is about 3km on average and you don't need to do much to it to have a big effect globally, be it through sea level or climatic changes more generally."

The scientists will measure the temperature and saltiness of the Southern Ocean in their journey to and fro, count bird populations every day and explore under the surface of the water using remote-operated vehicles equipped with high-definition cameras.

On Antarctica itself, they will use drones to fly over and map the surface, drill cores into the ice and drop temperature probes deep under the surface. But their big challenge will be to reach Mawson's huts, built when the explorer first arrived in 1911 and which sheltered the team as they waited for their ship back home through 1913. Access to the huts will be difficult, because of a 78-kilometre-long iceberg, with a surface area of roughly 2,500 square kilometres, that has wedged itself on the coastline. "The result is that that's completely disrupted the local ocean circulation," says Turney. They will look at the impact on climate, oceanography and wildlife too.

Turney's modern scientific equipment – everything from automated floats in the sea to underwater and aerial robots – will be able to collect more detailed measurements than anything Mawson could ever have managed. That the team will be able usefully to compare their high-tech data to the hand-collected measurements from a century ago is a testament to Mawson's sheer force of will in maintaining a continuous log throughout his enforced year-long sojourn on the continent, despite horrendous conditions.

In 1912, Mawson led a sledge team into the Antarctic interior as part of his mapping project. With a British officer, Belgrave Ninnis, and a Swiss ski champion, Xavier Mertz, he spent several weeks travelling into the continent in an effort to link up his mapping with the areas Scott had mapped during his expedition.

The team was travelling in single file across a field that they knew had a lot of crevasses. Mertz was at the front, then Mawson, then Ninnis. Most of their food supplies were on the final sledge, which was drawn by the best dogs. The thinking was that if a sledge fell down a crevasse at the front, their vital supplies would remain safe.

Unfortunately, the reverse happened. Ninnis's sledge disappeared into a crevasse that the others had already walked over. The survivors were left with one and a half weeks' worth of food, but were 500km from the coast. "They were in a lot of trouble," said Turney. "They decided to return, with a real sense that they might not survive."

Running out of food, Mertz and Mawson began eating the dogs, unaware that they were poisoning themselves. "They didn't realise that dogs' livers contained toxic levels of vitamin A, so their hair started falling out, they complained of enormous exhaustion," said Turney. "The soles of Mawson's feet fell off, he had to strap them on with lanolin every morning. Mertz suffered more than Mawson and, sadly, had a fit of insanity and bit off the tip of his finger and eventually died." But, says Turney, noting his predecessor's stoicism: "In spite of all these things, he was still making weather observations."

Now alone, Mawson pressed on towards the coast, where the Aurora was supposed to be waiting to take his team home. He got trapped in blizzards for days and, several times, fell down crevasses and hauled himself out. When he reached the coast, he saw smoke on the horizon – the Aurora had left that morning.

The result was that Mawson was forced to stay on the ice for another year with five others who had elected to wait for him, carrying on with his scientific research programme through the dark Antarctic winter.

The data retrieved in the most extreme of circumstances will be crucial to the 2013 re-run of Mawson's epic expedition, because it provides a record of conditions at the start of the 20th century to make comparisons against. "We have continuous observations from satellites, from the late 1970s, and yet here at Commonwealth Bay, we have two years' worth of continuous observations in a really important part of Antarctica as baseline data," says Turney. "It's so precious."

During the expedition, which runs until January 4, 2014, we will also give Guardian readers several opportunities to talk live and direct to people on board the ship as we sail through some of the roughest seas in the world, visit the windiest place on Earth and try (icebergs-permitting) to reach Mawson's iconic huts, the heroic explorer's base camp more than 100 years ago as he drew the first maps of this part of the world.

Website: theguardian.com/science/antarctica-live

guardian.co.uk © Guardian News and Media 2013

# What is the second law of thermodynamics?

Endless movement between hot and cold will eventually mean the end of the universe

Thermodynamics is the study of heat and energy. At its heart are laws that describe how energy moves around within a system, whether an atom, a hurricane or a black hole. The first law describes how energy cannot be created or destroyed, merely transformed from one kind to another. The second law, however, is probably better known and even more profound because it describes the limits of what the universe can do. This law is about inefficiency, degeneration and decay. It tells us that everything we do is inherently wasteful and that there are irreversible processes in the universe. It gives us an arrow for time and tells us that our universe has an inescapably bleak, desolate fate.

Despite these somewhat deflating conclusions, the laws of thermodynamics were formulated in a time of great technological optimism – the Industrial Revolution. In the mid-19th century, physicists and engineers were building steam engines to mechanise work and transport and were trying to work out how to make them more powerful and efficient.

Many scientists and engineers – including Rudolf Clausius, James Joule and Lord Kelvin – contributed to the development of thermodynamics, but the father of the discipline was the French physicist Sadi Carnot. In 1824 he published Reflections on the Motive Power of Fire, which laid down the basic principles, gleaned from observations of how energy moved around engines and how wasted heat and useful work were related.

The second law can be expressed in several ways, the simplest being that heat will naturally flow from a hotter to a colder body. At its heart is a property of thermodynamic systems called entropy – in the equations above it is represented by "S" – in loose terms, a measure of the amount of disorder within a system. This can be represented in many ways, for example in the arrangement of the molecules – water molecules in an ice cube are more ordered than the same molecules after they have been heated into a gas. Whereas the water molecules were in a well-defined lattice in the ice cube, they float unpredictably in the gas. The entropy of the ice cube is, therefore, lower than that of the gas. Similarly, the entropy of a plate is higher when it is in pieces on the floor compared with when it is in one piece in the sink.

A more formal definition for entropy as heat moves around a system is given in the first of the equations. The infinitesimal change in entropy of a system (dS) is calculated by measuring how much heat has entered a closed system (δQ) divided by the common temperature (T) at the point where the heat transfer took place.
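Written out with the symbols the article has just defined, that first equation reads:

```latex
% Infinitesimal entropy change dS: the heat δQ entering a closed
% system, divided by the temperature T at which the transfer occurs.
\mathrm{d}S = \frac{\delta Q}{T}
```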

The second equation is a way to express the second law of thermodynamics in terms of entropy. The formula says that the entropy of an isolated natural system will always tend to stay the same or increase – in other words, the energy in the universe is gradually moving towards disorder. Our original statement of the second law emerges from this equation: heat cannot spontaneously flow from a cold object (low entropy) to a hot object (high entropy) in a closed system because it would violate the equation. (Refrigerators seemingly break this rule since they can freeze things to much lower temperatures than the air around them. But they don't violate the second law because they are not isolated systems, requiring a continual input of electrical energy to pump heat out of their interior. The fridge heats up the room around it and, if unplugged, would naturally return to thermal equilibrium with the room.)
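In symbols, the second law as stated here says that the entropy of an isolated system never decreases:

```latex
% Second law for an isolated system: entropy stays the same
% (reversible processes) or increases (irreversible ones).
\mathrm{d}S \geq 0
```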

This formula also imposes a direction on to time; whereas every other physical law we know of would work the same whether time was going forwards or backwards, this is not true for the second law of thermodynamics. However long you leave it, a boiling pan of water is unlikely to ever become a block of ice. A smashed plate could never reassemble itself, as this would reduce the entropy of the system in defiance of the second law of thermodynamics. Some processes, Carnot observed, are irreversible.

Carnot examined steam engines, which work by burning fuel to heat up a cylinder containing steam, which expands and pushes on a piston to do something useful. The portion of the fuel's energy that is extracted and made to do something useful is called work, while the remainder is the wasted (and disordered) energy we call heat. Carnot showed that you could predict the theoretical maximum efficiency of a steam engine from the temperatures of the steam inside the cylinder and of the air around it, known in thermodynamic terms as the hot and cold reservoirs of a system respectively.
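Carnot's result can be written compactly. The maximum efficiency depends only on the absolute temperatures (in kelvin) of the hot reservoir, here written T_h, and the cold reservoir, T_c:

```latex
% Carnot limit: no heat engine operating between a hot reservoir at
% T_h and a cold reservoir at T_c can exceed this efficiency.
\eta_{\max} = 1 - \frac{T_c}{T_h}
```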

Heat engines work because heat naturally flows from hot to cold places. If there was no cold reservoir towards which it could move there would be no heat flow and the engine would not work. Because the cold reservoir is always above absolute zero, no heat engine can be 100% efficient.

The best-designed engines, therefore, heat up steam (or other gas) to the highest possible temperature and then release the exhaust at the lowest possible temperature. The most modern steam engines can reach around 60% efficiency and diesel engines in cars around 50%. Petrol-based internal combustion engines are much more wasteful of their fuel's energy.
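As a rough sanity check on those figures, Carnot's maximum-efficiency result can be computed directly as 1 − T_cold/T_hot, with temperatures in kelvin. A minimal sketch, using assumed illustrative reservoir temperatures rather than data from any particular engine:

```python
# Carnot efficiency: the theoretical maximum fraction of heat that any
# engine can convert to work, fixed only by its hot and cold reservoir
# temperatures (in kelvin): efficiency = 1 - T_cold / T_hot.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of a heat engine (dimensionless)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative (assumed) numbers: superheated steam at about 600C (873 K)
# exhausting to air at about 30C (303 K) gives a ceiling of roughly 65%,
# above the ~60% the article quotes for the best modern steam engines.
print(round(carnot_efficiency(873.0, 303.0), 3))  # → 0.653
```

The same function shows why a colder exhaust helps: lowering `t_cold_k` raises the ceiling, which is exactly the design logic described above.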

The inefficiencies are built into any system using energy and can be described thermodynamically. This wasted energy means that the overall disorder of the universe – its entropy – will increase over time but at some point reach a maximum. At this moment in some unimaginably distant future, the energy in the universe will be evenly distributed and so, for all macroscopic purposes, will be useless. Cosmologists call this the "heat death" of the universe, an inevitable consequence of the unstoppable march of entropy.

guardian.co.uk © Guardian News and Media 2013

# Venus orbits the Sun inside huge 'zodiacal cloud' of space dust

Scientists have discovered that Venus circles the sun embedded within a huge band of dust that is 10-15 million kilometres high and stretches all the way around its orbit. The finding will help astronomers better understand the dust clouds within planetary systems so that they can be taken into account when examining planets outside our solar system.

The inner solar system is filled with dust between the planets, called the zodiacal cloud, which starts out at the asteroid belt and slowly drifts towards the sun. "This cloud is a prominent feature when you look from one of these space cameras, you can see this cloud very clearly, but it looks like a very smooth cloud, we don't see very much structure in it," said Mark Jones, of the Open University. "What we've found is this ring near Venus which results from an interaction of that dust with the planets."

The dust in the cloud normally takes around 100,000 years to travel from the asteroid belt to the sun, said Jones, but if any of it gets near Venus the particles can get trapped, by the gravity of the planet, inside its dust ring for a very long time. "It makes this sort of structure in the dust cloud, this ring that goes all the way around the sun," Jones said. His findings are published on Thursday in the journal Science.

The dust in the ring is only fractionally – around 10% – more dense than the rest of the zodiacal cloud. The orbit of Venus around the sun is 220 million kilometres in diameter and the dust cloud resembles a huge wedding band around the star. "It's too faint to see it from the surface of the Earth but if you could see it, it would stretch 45 degrees either side of the sun, it would fill half of the daytime sky," Jones said.

Studying the dust ring will allow scientists to improve their ability to detect planets outside our solar system. To get the best-resolution optical images of planets around other stars, astronomers need to take account of how the dust in those systems behaves. Any dust rings in other systems could give a signal that scientists might erroneously interpret as a planet. There are theoretical models of how to take account of the dust, but these need to be tested with observations.

"You need to understand what these rings are doing in order to understand what these future exoplanet observations are like," said Jones. "What this Venus ring will allow you to do is test some of those detailed models."

guardian.co.uk © Guardian News and Media 2013

# Big cats' oldest ancestor Panthera blytheae discovered in Tibetan Himalayas

The fossil skull of Panthera blytheae, the precursor to all modern lions, tigers and leopards, was found in the Himalayas

Scientists have found the fossil skull of the oldest known big cat, the precursor to all modern lions, tigers and leopards, in the Tibetan Himalayas. The discovery pushes back the fossil record of these animals by at least 2m years and lends weight to the idea that they evolved in Asia, rather than Africa, where the previous oldest fossil was found.

The species, named Panthera blytheae, would have lived between 4-6m years ago in cold regions of the Himalayas.

"In terms of the overall size it would be a little bit smaller than a snow leopard – the size of a clouded leopard and those living cats grow up to around 20kg [44lb]," said Jack Tseng, a postdoctoral fellow at the American Museum of Natural History in New York, who led the team that discovered the fossil. "You would most likely recognise it as a big cat."

Based on the wear of the teeth in the skull, the animal probably hunted like modern snow leopards do, said Tseng. "They used their front teeth to pick at a hide or hunt in very gritty areas where they get heavy wear on the front teeth. They used their back teeth, which remain very sharp, to cut through soft tissue. You can imagine [them] hunting among the cliffs of the Himalayas, ambushing the sheep or antelopes or smaller mammals."

Tseng's team found the fossilised skull under a mound of bones, which included antelope and horse limbs, in the summer of 2010 when driving in a remote area near the China-Pakistan border.

He said the cat would have had a broad forehead, associated with an expanded sinus cavity in the head, an adaptation suiting cold environments since it helped the animal warm up the air it breathed in.

The fossil is described in detail in Wednesday's edition of the Proceedings of the Royal Society B: Biological Sciences.

Studies of the genomes of modern big cats suggest they diverged from a common ancestor about 6.37m years ago, but until now the oldest known fossil from this group of animals amounted to some teeth found in Tanzania, dated to 3.6m years ago. Tseng dated Panthera blytheae to 4.10m-5.95m years.

Anjali Goswami, a palaeobiologist at University College London, who was not involved in the research, said: "This age has the result of pushing back the origin and evolution of Pantherinae by several million years, which is more consistent with molecular estimates.

"Divergence estimates for pantherines have been based in large part on very fragmentary material, so having a beautifully preserved specimen to accurately place in the big cat family tree means that we can have a lot more confidence in the result.

"This is also potentially the biggest weakness with the study, as the locality is not very well constrained in terms of age, meaning that there is a nearly 2 million year window on when this organism lived. Any part of this window still places it as the oldest pantherine, but constraining that to a more specific date between 4-6m years ago will be essential to furthering our understanding of pantherine evolution and indeed the evolution of this new species."

The new fossil suggests central Asia, rather than Africa, was where the Pantherinae subfamily, including lions, jaguars, tigers, leopards, snow leopards and clouded leopards, diverged from the rest of the cat family tree, the Felinae, which includes cougars, lynxes and domestic cats.

Goswami said that the geographic origin of pantherines was perplexing, because the distribution of living species suggested an Asian origin for the group, but the oldest fossils were from Africa, suggesting an African origin.

"This beautiful fossil supports the Asian origin for the group, bringing together molecular, living and fossil data into a unified view of pantherine evolution.

"It also supports the idea that the Tibetan plateau was, and remains, an important biogeographic region for large mammals and is the centre of origin for many important groups. Nailing down the place of origin for pantherines also means that we can better understand the environmental and ecological context in which this group evolved."

© Guardian News and Media 2013

# Stephen Hawking: Physics would be 'more interesting' if Higgs boson hadn't been found

World-famous cosmologist admits to losing bet as a result of particle's discovery

Physics would have been "far more interesting" if scientists had been unable to find the Higgs boson at the Large Hadron Collider (LHC) in Cern, according to Stephen Hawking, who has admitted to losing a bet as a result of the discovery in July last year.

The world-famous cosmologist was speaking at an event to mark the launch of a new exhibit on the LHC at London's Science Museum and, in a speech, discussed the unanswered questions at the edges of modern physics as part of a history of his own work in the field.

Though the Higgs boson was predicted by theory in the early 1960s, not everyone believed it would be found. If it had not been found, physicists would have had to go back to the drawing board and rethink many of their fundamental ideas about the nature of particles and forces – an exciting prospect for some scientists.

"Physics would be far more interesting if it had not been found," said Hawking. "A few weeks ago, Peter Higgs and François Englert shared the Nobel Prize for their work on the boson and they richly deserved it. Congratulations to them both. But the discovery of the new particle came at a personal cost. I had a bet with Gordon Kane of Michigan University that the Higgs particle wouldn't be found. The Nobel Prize cost me \$100."

Hawking hoped the LHC would now move on from the Higgs boson to looking for evidence of more fundamental theories that explain the nature of the universe and, in particular, he hoped it would find the first evidence for M theory, the best candidate physicists have to unify all four fundamental forces of nature. It unites gravity (which rules at the largest scales of the universe) with quantum mechanics (which controls the behaviour of atoms and below). As yet there has been no incontrovertible experimental evidence to show that M theory is correct.

"There is still hope that we see the first evidence for M theory at the LHC particle accelerator in Geneva," said Hawking. "From an M theory perspective, the collider only probes low energies, but we might be lucky and see a weaker signal of fundamental theory, such as supersymmetry. I think the discovery of supersymmetric partners for the known particles would revolutionise our understanding of the universe."

Supersymmetry is the concept that each known particle – such as electrons, quarks and photons – has a heavier and as-yet-undetected "superpartner". The superpartners of quarks and electrons, for example, are called squarks and selectrons; the superpartners of the Higgs, and of force carriers such as the photon, are the higgsino and photino. Experimental evidence for the idea has, however, been elusive.

In recalling the bet he made with physicist Gordon Kane about the Higgs boson, Hawking admitted to enjoying gambling. "Throughout my life, I have had a gambling problem. When I was 12, one of my friends bet another friend a bag of sweets that I would never come to anything. I don't know if this bet was ever settled, and if so, which way it was decided. I had six or seven close friends, and we used to have long discussions and arguments about everything, from radio-controlled models to religion. One of the things we talked about was the origin of the universe, and whether it required a God to create it and set it going."

Hawking is no stranger to losing bets about the nature of the cosmos. Along with Kip Thorne, he bet John Preskill that information should be destroyed when something fell into a black hole. The so-called "information paradox" was troubling because Hawking's calculations suggested that anything that fell into a black hole would be obliterated, including the information about what that stuff was. But destroying information is not allowed under the rules of quantum mechanics.

After 30 years of arguing, Hawking said he eventually found a resolution. "Information is not lost in black holes, but it is not returned in a useful way," he said. "It is like burning an encyclopaedia. Information is not lost, but it is very hard to read."

He gave Preskill a baseball encyclopaedia to concede his side of the bet. "Maybe I should have just given him the ashes. The fact that I used to think that information was destroyed in black holes was my biggest blunder. Well, at least it was my biggest blunder in science."

Many of Hawking's insights have come from studying the cosmos, and the scientist said people needed to get more interested in the space around us for more prosaic reasons. "We must also continue to go into space for the future of humanity. I don't think we will survive another thousand years without escaping beyond our fragile planet. I therefore want to encourage public interest in space, and I've been getting my training in early," he said. Hawking recently took part in a zero-gravity flight, which is part of the training for astronauts to experience the weightlessness of space.

Hawking said that the recent Nobel prize for Englert and Higgs had been a reminder to him that it was "a glorious time to be alive, and doing research in theoretical physics. Our picture of the universe has changed a great deal in the last 50 years, and I'm happy if I have made a small contribution."

He added: "So remember to look up at the stars and not down at your feet. Try to make sense of what you see and hold on to that childlike wonder about what makes the universe exist."


# A deadly virus could travel at jet speed around the world. How do we stop it in time?

Silent and deadly, a virus will leap from an animal to a human and literally fly round the world. Millions of lives will depend on the skills of scientists.

Walk past the endless rows of vegetables, past the dozens of stalls selling every possible part of a pig and, at the centre of Cao Lanh city's market, a woman is doing a brisk trade in selling rats for food. Two cages swarm with them on a table next to her. Live frogs are available too, and, on the floor near her stall is a box of sluggish snakes. Chickens and ducks cluck and quack nearby. A faint smell of urine thickens air that is already heavy from the previous night's rains.

Rats are a staple source of meat in Vietnam, farmed and sold much like any other livestock. The stallholder butchers the animals to order. Reaching into the cage she will grab an animal by its tail, hit its head across a large stone, chop off its feet and head with a large pair of scissors, skin it, cut it into pieces and place everything into a small yellow plastic bag. Inevitably, the animal's blood ends up on her hands.

Scores of people are selling and butchering live animals, breathing the same air and in constant contact with the animals' blood, urine and faeces. This woman, and many others like her who work in the farms and abattoirs deep in southern Vietnam's Mekong delta, are doing what they have done for generations. And now they are in the front line in a new scientific race to predict the next pandemic.

Of the roughly 400 emerging infectious diseases that have been identified since 1940, more than 60% are zoonotic, ie they came from animals. Throughout history this has been common: HIV originated in monkeys, ebola in bats, influenza in pigs and birds. The rate at which new pathogens are emerging is on the rise, even taking into account the increase in awareness and surveillance. Which pathogens will cross the species barrier next, and which of them poses the greatest potential public health concern, are subjects of intense interest. A modern outbreak, caused by a previously unknown virus, could travel at jet speed around the world, spreading across the continents in just a few days, causing illness, panic and death.

Pathogens have transferred from animals to people for as long as we have had contact. The ancient domestication of livestock led to the emergence of measles, and further intensification of farming in recent decades has caused problems such as the brain-wasting Creutzfeldt-Jakob disease, the human form of BSE. Expanding trade routes in the 14th century spread the rat-borne Black Death across Europe and smallpox to the Americas in the 16th century. Today's tightly connected world has seen the spread of swine flu, Sars, West Nile virus and H5N1 bird flu.

The biggest pandemic on record was the 1918 Spanish influenza, which killed 50 million people at a time when the fastest way to travel the globe was by ship. The most recent pandemic to concern public health officials was swine flu in 2009; first detected in April of that year in Mexico, it turned up in London within a week.

One of the most worrying recent outbreaks for scientists was the re-emergence of the H5N1 bird flu virus in 2005. Jeremy Farrar, a professor of tropical medicine and global health at Oxford University and, until recently, head of the university's clinical research unit in Vietnam, says he remembers the night a young girl came into the children's hospital in Ho Chi Minh City with a serious lung infection. Initially, he thought that it might have been Sars – a coronavirus that had first been identified in China in late 2002 and had spread rapidly to Canada among other places – making its comeback. That was until he heard the girl's story from a colleague.

"This is years ago and I remember the story as if it was yesterday," he says. "She had been playing with her duck, arguing with her brother. They had buried it when it died and she had dug it up later to re-bury it somewhere she wanted to bury it."

The duck was the crucial part of the evidence in determining that this was a new outbreak and Farrar says that for the next few hours, no one knew how bad it would get. Would the girl's family come in during the night with infections? Would the nurses and doctors be affected?

H5N1 did not become the next Sars and was contained, although 98 people were infected and 43 died in 2005. It has not gone away, says Farrar, and is still circulating in poultry and ducks in almost the whole of Asia, remaining a major concern for human cases, given how virulent it is when people get infected.

A successful zoonotic pathogen manages to jump from an animal to a person, invades their cells, replicates and then finds a way to transmit to other people. Working out which pathogens will make the leap – a process called "spillover" – is not easy. A pathogen from a primate, for example, is more likely to spill over to humans than a pathogen from a rat, which is more likely to do so than something from a bird. Frequency of contact is also important; someone working on a live bird farm is more likely to be exposed to a multitude of animal viruses than someone living in a city who only sees a monkey in a zoo.

"The truth is, we really don't know how much of this happens," says Derek Smith, a professor of infectious disease informatics at the University of Cambridge. "Much more is noticed today than was noticed 50 years ago and was noticed 50 years before that. There are reasons to think this might be because we disrupt habitats and come into contact with animals we haven't been in contact with before. We have different things that we do socially, perhaps, than we did in the past. But we also look harder."

Viruses and other pathogens continually flow between species, often with no effects, sometimes mutating, once in a while causing illness. This mixing is known as "viral chatter" and the more different species come into regular close contact, the higher the chances of a spillover event occurring.

"This is how viruses have always worked, the big change is us," says Mark Woolhouse, a professor of infectious disease epidemiology at the University of Edinburgh. "The big change happened probably several thousands of years ago when we became a crowd species and that gave these viruses new opportunities which they hadn't had before in humans. Ever since then, from time to time a new virus has come along to take advantage of this new, very densely populated, crowded species – humans – that it can now spread between much more easily. That process is still happening; the viruses are still discovering us. We like to think we discover viruses, but it's also the viruses discovering us."

Tracking what is moving between which species is the task of Stephen Baker's team, based at the Oxford University clinical research unit in Ho Chi Minh City. Baker is an infectious disease biologist who co-ordinates the Vizions project and I met him at his lab while I was making a Radio 4 documentary about the scientific hunt for the next big pandemic.

His sampling teams visit farms, markets and abattoirs across Vietnam to take regular blood samples from people at high risk of being subject to a spillover event. This high-risk cohort, which will eventually number 1,000 people, will be monitored every six months and, if they ever turn up sick at a hospital, Baker's team will get an alert. The sampling teams also take blood and faecal swabs from pigs, chickens, dogs, cats and rats and anything else living nearby.

During a trip to a smallholding near the Cao Lanh food market, Baker explains that it is at places like this, where people are in regular and close contact with animals, that scientists will be able to get their first hints of any spillovers that might become a bigger threat. The farm, which is typical of Vietnam and other parts of south-east Asia, has a range of animals – pigs, ducks and free-range chickens. They are in close contact with each other and with any farmworkers, too. The farms next door are separated only by lines of trees or small fences. As well as the farm animals, Baker's team also do their best to sample wild animals in the vicinity, including civets, rats and bats, that can easily transport pathogens across wide distances.

The other part of the Vizions project is to enrol around 10,000 people over the next three years from those who turn up to hospitals with infections of the central nervous system, respiratory system, lower gut or jaundice. By cataloguing the viruses in their blood and other bodily fluids, Baker wants to build up a database of the kinds of things circulating in different parts of the country.

If there is a new influenza or other zoonotic virus outbreak, Baker's samples will allow scientists to go back in time and investigate where it had been circulating before: "That will allow us to document, retrospectively, what animals that was circulating in and how many people were potentially exposed. We're on the front line of trying to understand how frequently these things may occur."

Another animal of interest to Baker, and many other groups around the world, is the bat. It has become clear in the past few decades that they are the source of some of the most feared human infections, including ebola, Marburg and all the rabies viruses. Bats are also the natural reservoirs for the coronaviruses (including Sars and the recent Mers virus) and newer viruses such as nipah and hendra. Sometimes these have transferred directly to people, and other times they have first crossed into domestic animals.

How do bats survive as reservoirs for all these viruses that are so deadly in other species? James Wood, head of the department of veterinary medicine at the University of Cambridge, says there is likely to be a variety of reasons, not least that bats have different or better-developed innate immune systems that allow them to cope with pathogens that kill other species. With colleagues in Ghana, he has been following populations of fruit bats, sometimes numbering in excess of 10 million individuals, that pass through Accra or Kasanka National Park in a remote part of Zambia.

"The particular viruses we're looking at in this species include a rabies-like virus and a henipavirus, a family of viruses in Australia and south-east Asia that have passed from bats to humans," says Wood. "The populations we study, we're repeatedly sampling from on a quarterly or two-monthly basis depending on the season the bats are there. We take blood samples and swabs and urine and faecal samples then release them."

Henipaviruses cause brain infections in people and can be deadly – around half of those infected die. These viruses have spread from bats to humans either directly, as in the 2004 outbreak of nipah in Bangladesh, or via domestic animals, as in 2010, when hendra spread via horses in Australia.

Wood and his colleagues have also been looking at what other environmental factors there might be in working out why, in some situations, people get infected and in others they do not. "It may be that the local ecosystem services play a key role in determining risk," he says. "It may well be that, in some situations where there's really rich biodiversity, that can act as a sink for these different viruses, which makes them less likely to spread over into human populations. In other ecosystems that are perhaps more degraded, it may well be that there is more chance, because you have just single species living on their own, there's more chance of spillover happening from bats to humans or from bats to other animals."

Efforts around the world to collect and analyse blood from people and animals will give scientists and public health officials plenty of data to help track new infections. In the best case, having sequences of viruses on file, located to particular countries or even to particular regions within countries, will give vital information after a novel virus is spotted in a hospital. As well as medical and travel histories for a patient, clinicians will be able to match the virus to known viruses and will therefore be able to concentrate their efforts in containing it. They cannot, however, use this data to predict spillover events or, more crucially, when a virus might be dangerous enough to cause a pandemic.

"Not every virus that crosses over will make it [to an outbreak]," says Woolhouse. "Understanding the differences between those that do and those that don't is a major research question. That comes back to reading the [virus] genome – the information that you're going to have quickly that you didn't have a few years ago is the genome sequence.

"If you could read that and interpret it and say, 'this one does look like it has the potential to infect and spread between humans', then we're much further ahead of the game than we were before."

Ron Fouchier's microbiology labs are on the 17th floor of a building on the sprawling construction site that is currently the Erasmus University medical centre in Rotterdam. His work encompasses a wide variety of viruses, everything from influenza to HIV, carried out by PhD students and postdocs. They work on some deadly pathogens, but the safety protocols are well drilled into everyone who walks the halls and the atmosphere is convivial and unworried.

One lab, however, is not among this network of rooms. Fouchier will not say where it is and, in fact, will not even hint at its general direction from his office. Last year, in that biosafety level 3 facility, he carried out his experiments to mutate the virulent H5N1 flu virus from its wild form, which is dangerous when it infects people but cannot transmit between people, into a modified form that can potentially transmit from one person to another.

The air inside the level 3 lab is at a lower pressure than the air outside, to stop anything escaping through the doors. The air itself goes through virus filters and all experiments are carried out in small, sealed boxes where the airflow is carefully controlled. The scientists operating inside always work in pairs, have to wear masks and thick rubber gloves, and are all vaccinated against H5N1. Only six people have access to the steel lockers where the mutated flu virus is stored.

The work was not without controversy. The US authorities prevented Fouchier, and a separate team of scientists led by Yoshihiro Kawaoka of the University of Wisconsin-Madison, from publishing their work for many months, fearful that the information might be used by those who want to make biological weapons.

Fouchier said his work addressed a crucial question of basic science: "Scientists didn't really know what makes any virus airborne in mammals so we were really in the dark. We knew of some mutations from previous pandemics, but whether that would apply to other flu viruses nobody knows." The only way to figure that out was to take those mutations and see if they could make the H5N1 virus airborne as well – in other words, make it transmissible between people via a cough or sneeze.

Influenza pandemics of the past century and a half have all been viruses of the H1, H2 and H3 subtypes. So far, no H5 viruses have caused pandemics because of their inability to transmit between people in the wild. "Until we did the experiments, many expert virologists were of the opinion that H5N1 would never become airborne," says Fouchier. "With [our] information we show a very strong message to the field that we should not underestimate the chances of an unknown virus subtype causing the next virus pandemic."

His experiments found that the natural version of the H5N1 virus, which is currently circulating in flocks of birds around the world, needs only five mutations in its genetic sequence to become potentially transmissible between people.

In a separate paper, Derek Smith looked at how common these mutations were in the wild. "We found that, of the five mutations that were identified, two existed in the wild in quite large numbers and that there really are only three further mutations the virus would need, of the ones Fouchier and Kawaoka identified, in order to be potentially transmissible between humans," he says. "Of those three, two had never been seen in the wild but one had been seen very, very occasionally. When it had been seen, it didn't seem to occur in a particular region and then persist for a little while and go away, it was just seen in two or three viruses sporadically."

Fouchier's work was controversial at the time of publication but he is bullish about its benefits. Better surveillance capability, for a start, since scientists can routinely look for the specific five mutations in any new influenza viruses that emerge in the wild. If a flock of chickens was found to have a virus with three or four of the dangerous mutations, for example, the decision to cull them would be more clear-cut. The genetically modified viruses will also be useful in testing new vaccines and antivirals more accurately.

The detailed knowledge being gathered about influenza is already impressive, but any prediction of transmission events will require even more granular data. For instance: how many virus particles are transmitted in a cough? How likely is it that the viruses that are transmitted are the versions that are the most dangerous in terms of being able to cause a pandemic?

And this sort of work needs to happen with other viruses if scientists want a hope of predicting big pandemics. Scientists might be worried about bats, for example, but have precious little knowledge about the physiology of their viruses. "A huge amount of basic biology needs to be done with these viruses to understand their mode of transmission between different species," says Wood. "Understanding at that whole animal level about transmission but also understanding the sub-cellular mechanisms of replication of these viruses could be really valuable in terms of trying to pinpoint what particular virus features are associated with transmission to humans and which ones aren't."

As the scientific effort to build a front-line defence against pandemics gathers pace, authorities need protocols to handle and make decisions on the information coming in. The detection of a potential pandemic virus needs scientific boots on the ground for surveillance, but what happens if they spot something they think is dangerous? A decade ago, when Sars was breaking out in China, the country restricted information and some people think this led to the outbreak lasting longer than it should have done.

Things are different now, says Farrar, who took up his new post as the director of the Wellcome Trust in October. "It really has changed out of all recognition in that 10 years and large areas of the response mode is now reasonable, we've made progress. Sars was in Asia and Canada; coming through to H5N1 we had learned a little bit and improved but there were still gaps; coming through to H7N9, which is another new virus emerging which humans do not have any immunity to in China this year, the Chinese response has been exemplary. As soon as it emerged, it was picked up, the information was communicated both privately and publicly to everybody who needed to know about it. They should be applauded, they did do a great job."

This does not mean public health cannot be improved to deal with potential new threats. The World Health Organisation is nominally in charge when a pandemic is looming and Farrar says its greatest strength is that it represents so many states. But that could also be its greatest weakness: "Because it always has to reach a compromise everybody can sign up to. We now have the international health regulations where it's mandatory that countries report new events. My view is that those regulations were, in the end, a compromise that didn't go as far as anybody, including the WHO, would want in terms of what must be reported."

We are in a better position to detect a potential problem than we have ever been, but all the surveillance does not mean scientists will not be caught out by something that is sitting in an animal to which nobody happens to be paying attention. Woolhouse says there is always the potential for something to come out of left-field, something that surprises us.

And how should anyone making policy weigh preparing for the next pandemic against more urgent concerns? Many public health officials might point out that emerging infectious diseases are a potential future threat but we also need to deal with real, major threats now, such as malaria, TB or HIV. Woolhouse says the counter-argument is that, although the toll of current diseases is huge and dealing with them is important, public health services have learned to accommodate them. Emerging infections such as influenza or Sars or the next pandemic would create a shock with the potential not only to overburden health systems but to shut down travel networks and close down workplaces.

"The concern is that these things present such a huge shock that the global system is not really able to cope," he says. "That's why, despite the somewhat forward-looking aspect of this, we think they are, and should remain, a priority. The costs of an H1N1 or Sars pandemic is in the billions to hundreds of billions – substantial costs we could do well without."

Persuading members of the public or governments to keep the surveillance networks strong is an ongoing and crucial task, Woolhouse says: "This is one of those investments that, if it's working, no one notices."

With thanks to Andrew Luck-Baker. Listen to The Next Global Killer, the Radio 4 programme on the hunt for the next pandemic, at 8pm on 26 November.



# What is Heisenberg's Uncertainty Principle?

How the sun shines and why the vacuum of space is not actually empty

The uncertainty principle is one of the most famous (and probably misunderstood) ideas in physics. It tells us that there is a fuzziness in nature, a fundamental limit to what we can know about the behaviour of quantum particles and, therefore, the smallest scales of nature. At these scales, the most we can hope for is to calculate probabilities for where things are and how they will behave. Unlike Isaac Newton's clockwork universe, where everything follows clear-cut laws on how to move and prediction is easy if you know the starting conditions, the uncertainty principle enshrines a level of fuzziness into quantum theory.

Werner Heisenberg's simple idea tells us why atoms don't implode, how the sun manages to shine and, strangely, that the vacuum of space is not actually empty.

An early incarnation of the uncertainty principle appeared in a 1927 paper, "On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics", by Heisenberg, a German physicist who was working at Niels Bohr's institute in Copenhagen at the time. The more familiar form of the equation came a few years later, when he had further refined his thoughts in subsequent lectures and papers.

Heisenberg was working through the implications of quantum theory, a strange new way of explaining how atoms behaved that had been developed by physicists, including Niels Bohr, Paul Dirac and Erwin Schrödinger, over the previous decade. Among its many counter-intuitive ideas, quantum theory proposed that energy was not continuous but instead came in discrete packets (quanta) and that light could be described as both a wave and a stream of these quanta. In fleshing out this radical worldview, Heisenberg discovered a problem in the way that the basic physical properties of a particle in a quantum system could be measured. In one of his regular letters to a colleague, Wolfgang Pauli, he presented the inklings of an idea that has since become a fundamental part of the quantum description of the world.

The uncertainty principle says that we cannot measure the position (x) and the momentum (p) of a particle with absolute precision. The more accurately we know one of these values, the less accurately we know the other. Multiplying together the errors in the measurements of these values (the errors are represented by the triangle symbol in front of each property, the Greek letter "delta") has to give a number greater than or equal to half of a constant called "h-bar". This is equal to Planck's constant (usually written as h) divided by 2π. Planck's constant is an important number in quantum theory, a way to measure the granularity of the world at its smallest scales, and it has the value 6.626 × 10⁻³⁴ joule seconds.
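Written out in symbols, the relation described above is:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \text{where} \quad \hbar = \frac{h}{2\pi}
```

Here Δx and Δp are the uncertainties in position and momentum, and ħ ("h-bar") is Planck's constant divided by 2π.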

One way to think about the uncertainty principle is as an extension of how we see and measure things in the everyday world. You can read these words because particles of light, photons, have bounced off the screen or paper and reached your eyes. Each photon on that path carries with it some information about the surface it has bounced from, at the speed of light. Seeing a subatomic particle, such as an electron, is not so simple. You might similarly bounce a photon off it and then hope to detect that photon with an instrument. But chances are that the photon will impart some momentum to the electron as it hits it and change the path of the particle you are trying to measure. Or else, given that quantum particles often move so fast, the electron may no longer be in the place it was when the photon originally bounced off it. Either way, your observation of either position or momentum will be inaccurate and, more important, the act of observation affects the particle being observed.

The uncertainty principle is at the heart of many things that we observe but cannot explain using classical (non-quantum) physics. Take atoms, for example, where negatively-charged electrons orbit a positively-charged nucleus. By classical logic, we might expect the two opposite charges to attract each other, leading everything to collapse into a ball of particles. The uncertainty principle explains why this doesn't happen: if an electron got too close to the nucleus, then its position in space would be precisely known and, therefore, the error in measuring its position would be minuscule. This means that the error in measuring its momentum (and, by inference, its velocity) would be enormous. In that case, the electron could be moving fast enough to fly out of the atom altogether.
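The argument above can be checked with a rough back-of-the-envelope calculation. This is only a sketch: the confinement sizes (about 10⁻¹⁰ m for an atom, 10⁻¹⁵ m for a nucleus) are illustrative round numbers, not measured values.

```python
# Minimum velocity uncertainty for an electron confined to a region of size dx,
# using Heisenberg's relation: dp >= hbar / (2 * dx), and dv = dp / m_e.
hbar = 1.054571817e-34   # reduced Planck constant, in joule seconds
m_e = 9.1093837015e-31   # electron mass, in kilograms
c = 2.99792458e8         # speed of light, in metres per second

def min_velocity_uncertainty(dx):
    """Smallest velocity spread allowed for an electron confined within dx metres."""
    dp = hbar / (2 * dx)  # minimum momentum uncertainty
    return dp / m_e

# Electron confined to a whole atom (~1e-10 m): spread of roughly 6e5 m/s.
dv_atom = min_velocity_uncertainty(1e-10)

# Electron squeezed down to nuclear size (~1e-15 m): the implied "velocity"
# exceeds the speed of light, a sign the electron cannot sit in the nucleus.
dv_nucleus = min_velocity_uncertainty(1e-15)

print(dv_atom, dv_nucleus > c)
```

The second result is exactly the article's point: pin the electron down too precisely and its momentum uncertainty becomes absurdly large, so atoms cannot collapse.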

Heisenberg's idea can also explain a type of nuclear radiation called alpha decay. Alpha particles are two protons and two neutrons emitted by some heavy nuclei, such as uranium-238. Usually these are bound inside the heavy nucleus and would need lots of energy to break the bonds keeping them in place. But, because an alpha particle inside a nucleus has a very well-defined velocity, its position is not so well-defined. That means there is a small, but non-zero, chance that the particle could, at some point, find itself outside the nucleus, even though it technically does not have enough energy to escape. When this happens – a process metaphorically known as "quantum tunnelling" because the escaping particle has to somehow dig its way through an energy barrier that it cannot leap over – the alpha particle escapes and we see radioactivity.
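The chance of such an escape falls off very steeply with the width and height of the energy barrier. In the standard textbook approximation (not spelled out in the article), the probability that a particle of mass m and energy E slips through a barrier of height V and width L goes roughly as:

```latex
T \;\sim\; e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V - E)}}{\hbar}
```

This exponential sensitivity is why alpha decay can be fantastically slow – uranium-238 has a half-life of billions of years – yet still happen at all.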

A similar quantum tunnelling process happens, in reverse, at the centre of our sun, where protons fuse together and release the energy that allows our star to shine. The temperatures at the core of the sun are not high enough for the protons to have enough energy to overcome their mutual electric repulsion. But, thanks to the uncertainty principle, they can tunnel their way through the energy barrier.

Perhaps the strangest result of the uncertainty principle is what it says about vacuums. Vacuums are often defined as the absence of everything. But not so in quantum theory. There is an inherent uncertainty in the amount of energy involved in quantum processes and in the time it takes for those processes to happen. Instead of position and momentum, Heisenberg's equation can also be expressed in terms of energy and time. Again, the more constrained one variable is, the less constrained the other is. It is therefore possible that, for very, very short periods of time, a quantum system's energy can be highly uncertain, so much so that particles can appear out of the vacuum. These "virtual particles" appear in pairs – an electron and its antimatter partner, the positron, say – for a short while and then annihilate each other. This is well within the laws of quantum physics, as long as the particles only exist fleetingly and disappear when their time is up. Uncertainty, then, is nothing to worry about in quantum physics and, in fact, we wouldn't be here if this principle didn't exist.
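As an illustrative estimate (mine, not the article's), the energy–time form of the relation puts a ceiling on how long such a virtual pair can last. Using the common heuristic that the "borrowed" energy ΔE must be repaid within a time of roughly ħ/(2ΔE), and taking ΔE to be the rest energy of an electron–positron pair:

```python
# Heuristic lifetime of a virtual electron-positron pair from the
# energy-time uncertainty relation: delta_t ~ hbar / (2 * delta_E),
# with the "borrowed" energy taken as the pair's rest energy 2 m_e c^2.
HBAR = 1.0546e-34  # reduced Planck constant, J s
M_E = 9.109e-31    # electron mass, kg
C = 2.998e8        # speed of light, m/s

delta_E = 2 * M_E * C**2        # energy borrowed from the vacuum, in joules
delta_t = HBAR / (2 * delta_E)  # rough maximum lifetime of the pair, seconds

print(f"delta_E ~ {delta_E:.2e} J")
print(f"delta_t ~ {delta_t:.2e} s")  # of order 10^-22 seconds
```

A lifetime of order 10⁻²² seconds is "fleeting" by any standard, which is why these pairs never show up as lasting matter.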

© Guardian News and Media 2013

# NASA's Curiosity rover finds water on Mars

Dirt sample reveals two pints of water per cubic foot, not freely accessible but bound to other minerals in the soil

Water has been discovered in the fine-grained soil on the surface of Mars, which could be a useful resource for future human missions to the red planet, according to measurements made by NASA's Curiosity rover.

Each cubic foot of Martian soil contains around two pints of water, though the molecules are not freely accessible: they are bound to other minerals in the soil.

The Curiosity rover has been on Mars since August 2012, landing in an area near the equator of the planet known as Gale Crater. Its target is to circle and climb Mount Sharp, which lies at the centre of the crater, a five-kilometre-high mountain of layered rock that will help scientists unravel the history of the planet.

Last night NASA scientists published a series of five papers in the journal Science, detailing the experiments carried out by the various scientific instruments aboard Curiosity in its first four months on the Martian surface. Though highlights from the year-long mission have been released at conferences and in NASA press briefings, these are the first formal, peer-reviewed results from the Curiosity mission.

"We tend to think of Mars as this dry place – to find water fairly easy to get out of the soil at the surface was exciting to me," said Laurie Leshin, dean of science at Rensselaer Polytechnic Institute and lead author on the Science paper which confirmed the existence of water in the soil. "If you took about a cubic foot of the dirt and heated it up, you'd get a couple of pints of water out of that – a couple of water bottles' worth that you would take to the gym."

Around 2% of the soil, by weight, was water. Curiosity made the measurement by scooping up a sample of the Martian dirt under its wheels, sieving it and dropping tiny samples into an oven in its belly, an instrument called Sample Analysis at Mars. "We heat [the soil] up to 835C and drive off all the volatiles and measure them," said Leshin. "We have a very sensitive way to sniff those and we can detect the water and other things that are released."
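Leshin's "couple of pints per cubic foot" is consistent with the 2%-by-weight figure, as a quick sanity check shows. The bulk density assumed below (about 1.5 g/cm³, typical of loose soil) is my assumption for the sake of the arithmetic, not a value given in the article.

```python
# Sanity check: does ~2% water by weight give "a couple of pints"
# of water per cubic foot of Martian soil?
SOIL_DENSITY = 1500.0    # kg per cubic metre -- assumed, typical loose regolith
WATER_FRACTION = 0.02    # 2% water by weight, from the Science paper
CUBIC_FOOT = 0.0283168   # cubic metres in one cubic foot
PINT = 0.568e-3          # imperial pint, in cubic metres

soil_mass = SOIL_DENSITY * CUBIC_FOOT    # kg of soil in one cubic foot
water_mass = soil_mass * WATER_FRACTION  # kg of water driven off by heating
water_volume = water_mass / 1000.0       # m^3, taking 1000 kg/m^3 for water
pints = water_volume / PINT

print(f"{pints:.1f} pints per cubic foot")  # roughly one and a half pints
```

The answer comes out at roughly one and a half pints – the same ballpark as Leshin's "couple of water bottles' worth".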

Aside from water, the heated soil released sulphur dioxide, carbon dioxide and oxygen as the various minerals within it were decomposed as they warmed up.

One of Curiosity's main missions is to look for signs of habitability on Mars, places where life might once have existed. "The rocks and minerals are a record of the processes that have occurred and [Curiosity is] trying to figure out those environments that were around and to see if they were habitable," said Peter Grindrod, a planetary scientist at University College London who was not involved in the analyses of Curiosity data.

Flowing water is thought to have once been abundant on the surface of Mars, but it has now all but disappeared. The only direct sources of water found so far have been ice at the planet's poles.

The other papers included x-ray diffraction images of the soil in order to work out the crystalline structure of the minerals on the Martian surface and analysis of a volcanic rock called "Jake_M", which is named after a NASA engineer. The analysis showed that the rock was similar to a type on Earth known as a mugearite, which is typically found on ocean islands and in rift zones.

Grindrod said that the latest results published by the NASA team were just the start of the scientific insights that would come from Mars in the next few years. "It's the first flexing of Curiosity's analytical muscles," he said. "Curiosity spent a long time checking out the engineering, instruments and procedures it was going to use – these papers cover just that engineering period. The targets here weren't chosen because of their science goals as such but as good targets to test out the instruments."

Leshin said that, as well as the excitement of exploring a new world for the first time, the increasingly detailed analysis of the Martian surface would be critical information for planning human missions. As well as the water discovery, analysis of the soil has also shown, for example, the presence of a type of chemical called a perchlorate, which can be toxic to people. "It's only there at a 0.5% level in the soil but it impedes thyroid function," she said. "If humans are there and are coming into contact with fine-grained dust, we have to think about how we live with that hazard. To me it's a good connection between the science we do and the future human exploration of Mars."

She added: "I do think it's inevitable that we'll send people there and so let's do it as smartly as we can. Let's get as smart as we can before we go."

© Guardian News and Media 2013

[Image via Agence France-Presse]