Tuesday, October 30, 2007

5 Simple Ways to Increase Your Intelligence

Your brain needs exercise just like a muscle. If you use it often and in the right ways, you will become a more skilled thinker and increase your ability to focus. But if you never use your brain, or abuse it with harmful chemicals, your ability to think and learn will deteriorate.

Here are 5 simple ways anyone can squeeze a bit more productivity out of the old gray matter.

1. Minimize Television Watching - This is a hard sell. People love vegetating in front of the television, myself included more often than I’d like. The problem is that watching television neither uses your mental capacity nor allows it to recharge. It’s like having the energy sapped out of a muscle without the health benefits of exercise.

Don’t you feel drained after a couple hours of TV? Your eyes are sore and tired from being focused on the light box for so long. You don’t even have the energy to read a book.

When you feel like relaxing, try reading a book instead. If you’re too tired, listen to some music. When you’re with your friends or family, leave the tube off and have a conversation. All of these things use your mind more than television and allow you to relax.

2. Exercise - I used to think that I’d learn more by not exercising and using the time to read a book instead. But I realized that time spent exercising always leads to greater learning because it improves productivity during the time afterwards. Using your body clears your head and creates a wave of energy. Afterwards, you feel invigorated and can concentrate more easily.

3. Read Challenging Books - Many people like to read popular suspense fiction, but generally these books aren’t mentally stimulating. If you want to improve your thinking and writing ability you should read books that make you focus. Reading a classic novel can change your view of the world and will make you think in more precise, elegant English. Don’t be afraid to look up a word if you don’t know it, and don’t be afraid of dense passages. Take your time, re-read when necessary, and you’ll soon grow accustomed to the author’s style.

Once you get used to reading challenging books, I think you’ll find that you aren’t tempted to go back to page-turners. The challenge of learning new ideas is far more exciting than any tacky suspense-thriller.

4. Early to Bed, Early to Rise - Nothing makes it harder to concentrate than sleep deprivation. You’ll be most rejuvenated if you go to bed early and don’t sleep more than 8 hours. If you stay up late and compensate by sleeping late, you’ll wake up lethargic and have trouble focusing. In my experience the early morning hours are the most tranquil and productive. Waking up early gives you more productive hours and maximizes your mental acuity all day.

If you have the opportunity, take 10-20 minute naps when you are hit with a wave of drowsiness. Anything longer will make you lethargic, but a short nap will refresh you.

5. Take Time to Reflect - Often our lives get so hectic that we become overwhelmed without even realizing it. It becomes difficult to concentrate because nagging thoughts keep interrupting. Spending some time alone in reflection gives you a chance to organize your thoughts and prioritize your responsibilities. Afterwards, you’ll have a better understanding of what’s important and what isn’t. The unimportant stuff won’t bother you anymore and your mind will feel less encumbered.

I’m not saying you need to sit on the floor cross-legged and chant ‘ommm’. Anything that allows a bit of prolonged solitude will do. One of my personal favorites is taking a solitary walk. Someone famous said, “All the best ideas occur while walking.” I think he was on to something. Experiment to find the activity that works best for you.

Conclusion - I hope you aren’t disappointed that none of the techniques I’ve proposed are revolutionary. But simple, unexciting answers are often the most valid. The challenge is having the will to adhere to them. If you succeed in following these 5 tips, you’ll be rewarded with increased mental acuity and retention of knowledge.

Sunday, October 28, 2007

Global Warming - The Science

  • The greenhouse effect occurs naturally, providing a habitable climate.
  • Atmospheric concentrations of some of the gases that produce the greenhouse effect are increasing due to human activity, and most of the world's climate scientists believe this causes global warming.
  • Over one-third of human-induced greenhouse gases come from the burning of fossil fuel to generate electricity. Nuclear power plants do not emit these gases.

The "greenhouse effect" is the term used to describe the retention of heat in the Earth's lower atmosphere (troposphere). In colloquial usage it often refers to the enhanced global warming which is considered likely to occur because of the increasing concentrations of certain trace gases in the atmosphere. These gases are generally known as greenhouse gases*. Concentrations of them have increased significantly during the 20th century, and a large part of this increase is attributed to human sources, i.e. it is anthropogenic.

* or more specifically as radiative gases.

Furthermore, although most sources of anthropogenic emissions can be identified in particular countries, their effect is in no way confined to those countries; it is global.

The Greenhouse Effect

The greenhouse effect itself occurs when short-wave solar radiation (which is not impeded by the greenhouse gases) heats the surface of the Earth, and the energy is radiated back through the Earth's atmosphere as heat, with a longer wavelength. In the wavelengths 5-30µm a lot of this thermal radiation is absorbed by water vapour and carbon dioxide, which in turn radiate it, thus heating the atmosphere. This is what keeps the Earth habitable; without the greenhouse effect overnight temperatures would plunge and the average surface temperature would be about minus 18°C, about the same as on the moon.
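The figure of about minus 18 degrees can be recovered from a simple radiative energy balance. Below is a back-of-the-envelope sketch in Python; the solar constant and albedo are standard assumed values, not taken from the text:

```python
# Effective radiating temperature of an Earth with no greenhouse effect,
# from a Stefan-Boltzmann energy balance: absorbed sunlight = emitted heat.
# Assumed values: solar constant ~1361 W/m^2, planetary albedo ~0.3.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's orbit
ALBEDO = 0.3             # fraction of sunlight reflected back to space

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over the sphere
T_kelvin = (absorbed / SIGMA) ** 0.25
print(f"Effective temperature: {T_kelvin - 273.15:.0f} deg C")  # about -19
```

The roughly 33-degree gap between this value and the actual average surface temperature (about +15°C) is the natural greenhouse effect described above.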

The particular problem arises in the 8-18µm band where water vapour is a weak absorber of radiation and where the Earth's thermal radiation is greatest. Part of this "window" (12.5-18µm) is largely blocked by carbon dioxide absorption, even at the low levels originally existing in the atmosphere. The remainder of the "window" coincides with the absorption proclivities of the other radiative gases: methane, (tropospheric) ozone, CFCs and nitrous oxide. It also appears that increased levels of carbon dioxide will increase the capture of heat in its absorption band to some, perhaps significant, extent.

The result of all this is that less heat is lost to space from the Earth's lower atmosphere, and temperatures at the Earth's surface are therefore likely to increase.

A number of indicators suggest that warming due to increased levels of greenhouse gases is indeed observable since 1980, despite some masking by aerosols (see below). One problem is that while global air temperatures do appear to have risen about 0.6°C over the last century, this has been irregular rather than steady, and does not correlate well with the steady increase in greenhouse gas concentrations. While the amount is consistent with natural climate variability, the seven warmest years on record have been in the last decade. However, the climate is a complex system and other factors influence global temperatures.

Balancing Factors

The major role of water vapour in absorbing thermal radiation is in some respects balanced by the fact that when condensed it causes an albedo effect which reflects about one third of the incoming sunlight back into space. This effect is enhanced by atmospheric sulfate aerosols and dust, which provide condensation nuclei. Nearly half the sulfates in the atmosphere originate from sulfur dioxide emissions from power stations and industry, particularly in the northern hemisphere. However, in many countries there are now programmes to reduce sulfur dioxide emissions from power stations, as these emissions cause acid rain, so the impact of this balancing factor will diminish.

In recent decades volcanoes have contributed substantially to dust and acid aerosol levels high in the atmosphere. While at lower levels in the atmosphere sulfate aerosols and dust are short-lived, such material in the stratosphere remains for years, increasing the amount of sunlight which is reflected away. Hence there is, for the time being, a balancing cooling effect on the earth's surface. In the northern hemisphere the sulfate aerosols counter nearly half the heating effect due to anthropogenic greenhouse gases. As emissions of sulfates reduce and this balancing factor diminishes the rate of temperature increase due to greenhouse gases may increase.

Global Warming

There is clear evidence of changes in the composition of the greenhouse gases in the lower atmosphere. Ice core samples show that both carbon dioxide and methane levels are higher than at any time in the past 160,000 years.

Estimates of the individual contribution of particular gases to the greenhouse effect (their Global Warming Potential, or GWP) are broadly agreed, relative to carbon dioxide = 1. Such estimates depend on the physical behaviour of each kind of molecule and its lifetime in the atmosphere, as well as the gas's concentration. Both direct and indirect effects due to interaction with other gases and radicals must be taken into account and some of the latter remain uncertain:

greenhouse gas   concentration, 1800s-2000   anthropogenic sources                GWP     share of total effect (approx.)
carbon dioxide   280 - 370 ppm               fossil fuel burning, deforestation   1       60%
methane          0.75 - 1.75 ppm             agriculture, fuel leakage            21      20%
halocarbons      0 - 0.7 ppb                 refrigerants                         3400+   14%
nitrous oxide    275 - 310 ppb               agriculture, combustion              310     6%
ozone            15? - 20-30 ppb             urban pollution                      -       -
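The GWP column puts different gases on a common footing: a tonne of each gas is expressed as the tonnage of carbon dioxide that would have the same warming effect. A small illustrative sketch (the tonnages passed in are hypothetical; the GWP values are those in the table above):

```python
# CO2-equivalent of an emission: tonnes of gas * GWP (100-year horizon).
GWP = {"carbon dioxide": 1, "methane": 21, "nitrous oxide": 310}

def co2_equivalent(tonnes: float, gas: str) -> float:
    """Convert tonnes of a greenhouse gas to tonnes of CO2-equivalent."""
    return tonnes * GWP[gas]

# One tonne of methane warms like 21 tonnes of CO2 over a century:
print(co2_equivalent(1.0, "methane"))         # 21.0
print(co2_equivalent(100.0, "nitrous oxide")) # 31000.0
```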

Sources, Residence and Sinks

Relating these atmospheric concentrations to emissions, sources and sinks is a steadily evolving sphere of scientific inquiry. Certain inputs to the atmosphere can be discerned and readily quantified: carbon dioxide from fossil fuel burning (about 26 billion tonnes per year, 7.2 GtC) and CFCs from refrigerants, for instance. Others, such as methane sources, are less certain, though about one fifth of methane emissions appear to be from fossil sources (coal seams, oil and natural gas, about 100 million tonnes per year).
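The two units quoted above, billions of tonnes of CO2 and gigatonnes of carbon (GtC), are related by the mass ratio 44/12, since each carbon atom (atomic mass 12) ends up in one CO2 molecule (molecular mass 44). A quick consistency check:

```python
# Converting between tonnes of carbon (GtC) and tonnes of CO2:
# one carbon atom (mass 12) becomes one CO2 molecule (mass 44).
C_TO_CO2 = 44.0 / 12.0

gtc = 7.2  # fossil-fuel emissions, billion tonnes of carbon
gt_co2 = gtc * C_TO_CO2
print(f"{gtc} GtC = {gt_co2:.1f} Gt CO2")  # 26.4, matching the figure above
```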

Electricity generation is one of the major sources of carbon dioxide emissions, providing about one third of the total. Coal-fired generation* gives rise to twice as much carbon dioxide as natural gas per unit of power at the point of use, but hydro, nuclear power and most renewables do not directly contribute any. If all the world's nuclear power were replaced by coal-fired power, electricity's carbon dioxide emissions would rise by a third - about 2.5 billion tonnes per year. Conversely, there is scope for reducing coal's carbon dioxide contribution by substituting natural gas or nuclear, and by improving the efficiency of coal-fired generation itself, a process which is well under way.
* in developed countries, with average 33% thermal efficiency. The difference is greater considering developing countries' average 25% efficiency.

Estimates of carbon dioxide concentrations in the atmosphere in the next century all show substantial increases. Global emissions are expected to be about 50% higher in 2010 than in 1990.

Then there is the question of residence time in the atmosphere. For example methane has about an eleven year residence time before it is oxidised to carbon dioxide. Hydroxyl (OH) radicals are the main means of this oxidation. Carbon dioxide has a much longer residence time in the atmosphere, until it is either used up in photosynthesis or absorbed in rain or oceans.
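The effect of a residence time is easiest to see as a decay curve. The sketch below treats the eleven-year figure as a simple first-order (exponential) lifetime; that decay model is an assumption for illustration, since the text does not specify one:

```python
import math

LIFETIME_YEARS = 11.0  # methane residence time quoted above

def fraction_remaining(years: float) -> float:
    """Fraction of a methane pulse still airborne after `years`,
    assuming first-order removal (oxidation by OH radicals)."""
    return math.exp(-years / LIFETIME_YEARS)

print(f"after 11 y: {fraction_remaining(11):.0%}")  # ~37% (1/e)
print(f"after 50 y: {fraction_remaining(50):.1%}")  # ~1%
```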

Finally, in relating emissions to atmospheric concentrations, there is the question of sinks, or natural processes for breaking down or removing individual gases, particularly carbon dioxide. While the increase in carbon dioxide concentrations is remarkable, and the rate of anthropogenic emissions considerable (some 30 billion tonnes per year), even this is only about three percent of the natural flux between the atmosphere and the land and oceans. This perspective is important as a reminder that only a very small change to natural processes is required to compensate for (or exacerbate) anthropogenic emissions.

In fact, study of the atmospheric carbon cycle shows that only about half of the anthropogenic emissions show up as increased carbon dioxide levels. This puzzle is not fully explained, but it seems that some terrestrial sinks are functioning as a negative feedback, that is to say they have increased their uptake as the atmospheric concentration has increased. The oceans are a major sink.

Climate Change

The outcome of any significant global warming will be various changes in climate rather than simply an overall increase in average or nocturnal temperatures. Climate researchers have designed models to predict the consequences both in air and ocean circulation patterns. These give a range and probability of climatic impacts on different regions of the world.


Source: IAEA Bulletin 42,2; 2000

Defining climate change prospects, effects and mitigation

The science behind the politics of global warming took a step forward and also ratcheted up concerns with the release of the Third Assessment Report from the UN's Intergovernmental Panel on Climate Change (IPCC), in September 2001.

In 2007 the IPCC is publishing the results of its Fourth Assessment Report. This is being published in three parts. The first, published in February, details the physical scientific basis for climate change. The second, published in April, covered the impacts of climate change and the options for adaptation, and identified where people and the environment are most vulnerable. The third part of the report, published in May, identifies options for mitigation of climate change. A synthesis of all three reports, including a Summary for Policy Makers, will be published in November.

The first part of the Fourth Assessment report on the science relating to climate change concluded that the evidence that human-derived greenhouse gas emissions had already had an impact on the climate had strengthened. Furthermore, there was greater confidence in predictions of the impacts of future greenhouse gas emissions.

Among the findings were:

  • Eleven of the last twelve years (1995-2006) rank among the 12 warmest years in the instrumental record of global surface temperature (since 1850).
  • Most of the observed increase in globally averaged temperatures since the mid-20th century is very likely (90%+ probability) due to the observed increase in anthropogenic greenhouse gas concentrations.
  • The average temperature of the global ocean has increased to depths of at least 3000 m, and the ocean has been absorbing more than 80% of the heat added to the climate system. Such warming causes seawater to expand, contributing to sea level rise.
  • Mountain glaciers and snow cover have declined on average in both hemispheres. Widespread decreases in glaciers and ice caps have contributed to sea level rise.
  • Global average sea level rose at an average rate of 1.8 mm per year over 1961 to 2003. The rate was faster over 1993 to 2003, about 3.1 mm per year.
  • Average Arctic temperatures increased at almost twice the global average rate in the past 100 years.
  • More intense and longer droughts have been observed over wider areas since the 1970s, particularly in the tropics and subtropics.
  • Widespread changes in extreme temperatures have been observed over the last 50 years. Cold days, cold nights and frost have become less frequent, while hot days, hot nights, and heat waves have become more frequent.
  • The global atmospheric concentration of carbon dioxide has increased from a pre-industrial value of about 280 ppm to 379 ppm in 2005. The atmospheric concentration of carbon dioxide in 2005 exceeds by far the natural range over the last 650,000 years (180 to 300 ppm) as determined from ice cores.
  • The increased atmospheric concentration of carbon dioxide since the pre-industrial period results primarily from fossil fuel use, with land use change providing another significant but smaller contribution. Annual fossil carbon dioxide emissions increased from an average of 23.5 Gt CO2 per year in the 1990s to 26.4 Gt CO2 per year in 2000-2005.
  • The global atmospheric concentration of methane has increased from a pre-industrial value of about 715 ppb to 1732 ppb in the early 1990s, and is 1774 ppb in 2005.
  • The combined radiative forcing due to increases in carbon dioxide, methane, and nitrous oxide is +2.30 W/m2, and its rate of increase during the industrial era is very likely to have been unprecedented in more than 10,000 years.

The IPCC predicts that, based on a range of scenarios, by the end of the 21st century climate change will result in:

  • A probable temperature rise between 1.8°C and 4°C, with a possible temperature rise between 1.1°C and 6.4°C.
  • A sea level rise most likely to be 28-43 cm
  • Arctic summer sea ice disappearing in the second half of the century
  • An increase in heatwaves being very likely
  • A likely increase in tropical storm intensity.

The second part of the 2007 report deals with impacts, adaptation and vulnerabilities. It concludes that climate change will have significant impacts including increased stress on water supplies and a widening threat of species extinction.

The third part of the report, published in May 2007, deals with the mitigation of climate change, outlining the prospects and options for change, particularly in the energy sector, which accounts for 60% of emissions. It was signed off by over 100 countries, which agree that major changes are required to adopt low-carbon energy technologies. It says that a key to achieving this is putting a price on carbon emissions, particularly from power generation. The report acknowledges that nuclear power is now and will remain a 'key mitigation technology'.

It says that the most cost-effective option for restricting the temperature rise to under 3°C will require an increase in non-carbon electricity generation from 34% (nuclear plus hydro) now to 48 - 53% by 2030, along with other measures. With a doubling of overall electricity demand by then, and a carbon emission cost of US$ 50 per tonne of CO2, nuclear's share of electricity generation is projected by IPCC to grow from 16% now to 18% of the increased demand. This would represent more than a doubling of the current nuclear output by 2030. The report projects other non-carbon sources apart from hydro contributing some 12-17% of global electricity generation by 2030.
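The "more than a doubling" claim follows from simple arithmetic: an 18% share of a demand twice today's size, compared with a 16% share of today's demand. A quick check using only the figures in the paragraph above:

```python
# Nuclear output growth implied by the IPCC projection quoted above:
# share moves from 16% of today's electricity to 18% of a doubled demand.
share_now, share_2030 = 0.16, 0.18
demand_growth = 2.0  # overall electricity demand doubles by 2030

output_ratio = (share_2030 * demand_growth) / share_now
print(f"Nuclear output in 2030 would be {output_ratio:.2f}x today's")  # 2.25x
```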

These projected figures are estimates, and it is evident that if renewables fail to grow as much as hoped, other non-carbon sources will need to play a larger role. Thus nuclear power's contribution could triple or perhaps quadruple to more than 30% of the global generation mix in 2030. The report also states that the costs of achieving any overall target for atmospheric greenhouse gas concentrations would increase if any generation options were excluded. Clearly, any country excluding or phasing out nuclear energy is raising the overall cost of meeting emission reduction targets. This runs counter to the economic objectives of sustainable development.


CARBON DIOXIDE EMISSIONS AVOIDED BY NUCLEAR ENERGY

(2007 data is similar)


Country           Nuclear generation 2005    Operable at May 2006    Approx CO2 avoided per year
                  billion kWh      % e       No.       MWe           million tonnes

Argentina         6.4              6.9       2         935           6
Armenia           2.5              43        1         376           6
Belgium           45.3             56        7         5728          45
Brazil            9.9              2.5       2         1901          9
Bulgaria          17.3             44        4         2722          17
Canada            86.8             15        18        12595         85
China (mainland)  50.3             2         10        7587          50
Taiwan            38.4             20        6         4884          36
Czech Republic    23.3             31        6         3472          23
Finland           22.3             33        4         2676          22
France            430.9            79        59        63473         430
Germany           154.6            31        17        20303         154
Hungary           13.0             37        4         1755          13
India             15.7             2.8       15        2993          15
Japan             280.7            29        55        47700         280
Korea RO (South)  139.3            45        20        16840         139
Lithuania         10.3             70        1         1185          10
Mexico            10.8             5         2         1310          10
Netherlands       3.8              3.9       1         452           3
Pakistan          2.4              2.8       2         425           2
Romania           5.1              8.6       1         655           5
Russia            137.3            16        31        21743         130
Slovakia          16.3             56        6         2472          16
Slovenia          5.6              42        1         676           5
South Africa      12.2             5.5       2         1842          12
Spain             54.7             20        8         7442          50
Sweden            69.5             45        10        8938          69
Switzerland       22.1             32        5         3220          22
Ukraine           83.3             49        15        13168         83
UK                75.2             20        23        11852         75
USA               780.5            19        103       98054         780
WORLD             2,626            16        441       369,374       2,600

Sources: WNA to 31/5/06; IAEA for electricity production.
Operable = connected to the grid.
MWe = megawatt (electrical, as distinct from thermal); kWh = kilowatt-hour.

Basis: 1 billion kWh would require 409,000 tonnes black coal with 67% carbon.
World carbon dioxide emissions from electricity generation are about 9500 million tonnes per year (most from coal). Electricity contributes about 40% of total world CO2 emissions.
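The "Basis" line implies roughly one million tonnes of CO2 per billion kWh of coal-fired generation, which is how the table's last column follows from its first. Working the arithmetic through (carbon converts to CO2 by the mass ratio 44/12):

```python
# CO2 from generating 1 billion kWh with black coal, per the Basis line:
# 409,000 t of coal at 67% carbon, carbon -> CO2 via the 44/12 mass ratio.
coal_tonnes = 409_000
carbon_fraction = 0.67
co2_tonnes = coal_tonnes * carbon_fraction * 44.0 / 12.0
print(f"~{co2_tonnes / 1e6:.2f} Mt CO2 per billion kWh")  # ~1.00

# So world nuclear output of about 2,626 billion kWh avoids roughly:
print(f"~{2626 * co2_tonnes / 1e6:.0f} Mt CO2/yr")  # consistent with ~2,600
```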

Thursday, October 25, 2007

Phil Collins Music


Light II

In 1873, seventy years after Thomas Young presented his experimental results on the nature of light (see Light I: Particle or Wave?), a Scottish physicist named James Clerk Maxwell published a theory that accounted for the physical origins of light. Throughout the nineteenth century, many of science’s greatest minds dedicated themselves to the study of two exciting new ideas: electricity and magnetism. Maxwell’s work synthesized these two ideas, which had previously been considered separate phenomena. His new theory was aptly named a theory of “electromagnetism.”

The earliest experimental connection between electricity and magnetism came in the 1820s from the work of the Danish physicist Hans Christian Oersted. Oersted discovered that a wire carrying electric current could deflect the needle of a magnetic compass. This planted the seed for Andre-Marie Ampere, a French physicist, to demonstrate that two current-carrying wires would interact with each other due to the magnetic field that they generated. Ampere found that two long, straight wires carrying current in the same direction would attract each other, and two wires carrying current in opposite directions would repel each other. Ultimately, Ampere formulated a general expression – called Ampere’s Law – for determining the magnetic field created by any distribution of electric currents.

Demonstration of wires carrying current in the same direction.

Demonstration of wires carrying current in opposite directions.

Ampere’s important contributions to magnetism and electricity led other scientists to conduct experiments that probed the relationship between these two cutting-edge areas of nineteenth century physics. For example, in 1831, Michael Faraday discovered that a change in the magnetic field passing through a loop of wire creates a current in the wire. Faraday, an English physicist with almost no formal mathematical training, had observed that passing a bar magnet through a coil of wire created an electric current. Similarly, moving a coil of wire in the vicinity of a stationary magnet also produced electric current. Faraday hypothesized that somehow the magnet “induced” the current in the wire, and named the phenomenon “induction.” Faraday’s name is still associated with this idea, in the form of “Faraday’s Law,” which, put simply, says that a changing magnetic field produces an electric field.

Demonstration of Faraday's Inductor

Today, the principle behind Faraday’s Law is at work in electrical generators. Using some mechanical source of energy (such as a hand crank, a windmill, the force of falling water, or steam from boiling water) to spin a turbine, magnets inside the generator spin next to a large coil of wire. As the magnets spin, the magnetic field that passes through the wire loop changes. This changing “magnetic flux” establishes an “induced” current in the wire and mechanical energy becomes electrical energy.

A Simple Electric Current Generator

An example of a simple electrical generator in which a magnet spins within a coil of wire generating an electric current.

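The generator described above can be sketched numerically. For an idealized coil rotating in a uniform field, the flux through the coil varies sinusoidally, and Faraday's Law gives an EMF of N·B·A·ω·sin(ωt). All numeric values below are illustrative assumptions, not figures from the text:

```python
import math

# EMF of an idealized generator: an N-turn coil of area A rotating at
# angular speed w in a uniform field B. Flux = N*B*A*cos(w*t), so
# Faraday's Law gives emf = -dFlux/dt = N*B*A*w*sin(w*t).
N, B, A = 100, 0.5, 0.01   # turns, field in tesla, coil area in m^2
w = 2 * math.pi * 50       # 50 rotations per second, in rad/s

def emf(t: float) -> float:
    """Instantaneous induced EMF (volts) at time t (seconds)."""
    return N * B * A * w * math.sin(w * t)

peak = N * B * A * w
print(f"peak EMF: {peak:.0f} V")  # ~157 V
```

The sinusoidal output is exactly why mains electricity is alternating current: the induced EMF reverses sign every half rotation of the coil.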

Over 40 years after Faraday, James Clerk Maxwell, based on little more than an intuitive feeling for the symmetry of physical laws, speculated that the converse of Faraday’s Law must also be true: a changing electric field produces a magnetic field. When Maxwell took the work of Ampere and Faraday and incorporated his new idea, he was able to derive a set of equations (originally there were twenty equations, but now they have been simplified to just four) that completely unified the concepts of electric and magnetic fields into one mathematical model. To learn more about the math behind the equations, be sure to follow the links under “Further Exploration” below.

After developing his now-famous equations, Maxwell and other physicists began exploring their implications and testing their predictions. One prediction that came from Maxwell’s equations was that a charge moving back and forth in a periodic fashion would create an oscillating electric field. This electric field would then set up a periodically changing magnetic field, which in turn would cause the original electric field to continue its oscillation, and so on. This mutual vibration allowed the electric and magnetic fields to travel through space in the form of an “electromagnetic wave,” as shown below:

An electromagnetic wave.

Because this new mathematical model of electromagnetism described a wave, physicists were able to imagine that electromagnetic radiation could take on the properties of waves. Thus, just like all waves, Maxwell’s electromagnetic waves could have a range of wavelengths and corresponding frequencies (see the Wave Motion module for more information on waves). This range of wavelengths is now known as the “electromagnetic spectrum.” Maxwell's theory also predicted that all of the waves in the spectrum travel at a characteristic speed of approximately 300,000,000 meters per second. Maxwell was able to calculate this speed from his equations in the following way:

c = 1 / √(ε₀μ₀)

where ε₀ is the permittivity of free space and μ₀ is the permeability of free space.

Maxwell’s calculation of the speed of an electromagnetic wave included two important constants: the permittivity and permeability of free space. The permittivity of free space is also known as the “electric constant” and describes the strength of the electrical force between two charged particles in a vacuum. The permeability of free space is the magnetic analogue of the electric constant. It describes the strength of the magnetic force on an object in a magnetic field. Thus, the speed of an electromagnetic wave comes directly from a fundamental consideration of electricity and magnetism.
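In modern notation the calculation is short enough to reproduce. The numerical values of the two constants below are the standard ones, not figures given in the text:

```python
import math

# Maxwell's result: the wave speed follows from the two constants alone,
# c = 1 / sqrt(mu0 * eps0).
EPS0 = 8.854e-12       # permittivity of free space, F/m
MU0 = 4e-7 * math.pi   # permeability of free space, H/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(f"c = {c:.3e} m/s")  # ~2.998e8, the measured speed of light
```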

When Maxwell calculated this speed, he realized that it was extremely close to the measured value for the speed of light, which had been established by earlier experiments and astronomical observations. This was too much for Maxwell to accept as coincidence, and led him to the realization that light was an electromagnetic wave and thus part of the electromagnetic spectrum. After Maxwell’s equations became widely known, the American physicist Albert Michelson made a very precise measurement of the speed of light that was in extremely close agreement with Maxwell’s predicted value.

The Electromagnetic Spectrum

As scientists and engineers began to explore the implications of Maxwell’s theory, they performed experiments that verified the existence of the different regions, or groups of wavelengths, of the electromagnetic spectrum. As practical uses for these regions of the spectrum developed, they acquired now-familiar names, like “radio waves,” and “X-rays.” The longest wavelength waves predicted by Maxwell’s theory are longer than 1 meter, and this band of the electromagnetic spectrum is known as radio waves. The shortest wavelength electromagnetic waves are called gamma rays, and have wavelengths shorter than 10 picometers (1 trillion times shorter than radio waves).

Between these two extremes lies a tiny band of wavelengths ranging from 400 to 700 nanometers. Electromagnetic radiation in this range is what we call “light,” but it is no different in form from radio waves, gamma rays, or any of the other electromagnetic waves we now know exist. The only thing unique about this portion of the electromagnetic spectrum is that the majority of the radiation produced by the Sun and hitting the surface of the planet Earth falls into this range. Because humans evolved on Earth in the presence of the Sun, it is no accident that our own biological instruments for receiving electromagnetic radiation – our eyes – evolved to detect this range of wavelengths. Other organisms have evolved sensory organs that are attuned to different parts of the spectrum. For example, the eyes of bees and other insects are sensitive to the ultraviolet portion of the spectrum (not coincidentally, many flowers reflect ultraviolet light), and these insects use UV radiation to see. However, since the sun emits primarily electromagnetic waves in the “visible” light region, most organisms have evolved to use this radiation instead of radio or gamma or other waves. For example, plants use this region of the electromagnetic spectrum in photosynthesis. For more information about the different regions of the electromagnetic spectrum, visit the interactive Electromagnetic Spectrum page linked below.

Interactive Electromagnetic Spectrum

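Any point on the spectrum can be described equivalently by wavelength or frequency via f = c/λ. Applying that relation to the edges of the visible band quoted above:

```python
# Frequency corresponding to a wavelength, f = c / lambda, applied to
# the 400-700 nm visible band discussed in the text.
C = 2.998e8  # speed of light, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency (Hz) of an electromagnetic wave of the given wavelength."""
    return C / wavelength_m

print(f"700 nm (red):    {frequency_hz(700e-9):.2e} Hz")  # ~4.3e14
print(f"400 nm (violet): {frequency_hz(400e-9):.2e} Hz")  # ~7.5e14
```

Visible light thus spans less than a single octave of frequency, a tiny slice of the spectrum compared with radio waves (wavelengths over a meter) and gamma rays (under 10 picometers).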

Maxwell’s elegant equations not only unified the concepts of electricity and magnetism, they also put the familiar and much-studied phenomenon of light into a context that allowed scientists to understand its origin and behaviors. Maxwell appeared to have established conclusively that light behaves like a wave, but interestingly enough he also planted the seed of an idea that would lead to an entirely different view of light. It would be another thirty years before a young German-born physicist named Albert Einstein would cultivate that seed, and in doing so spark the growth of a revolution in our understanding of how the universe is put together.

Light

Early Theories

For as long as the human imagination has sought to make meaning of the world, we have recognized light as essential to our existence. Whether to a prehistoric child warming herself by the light of a fire in a cave, or to a modern child afraid to go to sleep without the lights on, light has always given comfort and reassurance.

The earliest documented theories of light came from the ancient Greeks. Aristotle believed that light was some kind of disturbance in the air, one of his four "elements" that composed matter. Centuries later, Lucretius, who, like Democritus before him, believed that matter consisted of indivisible "atoms," thought that light must be a particle given off by the sun. In the tenth century AD, the mathematician Alhazen developed a theory that light comes to our eyes from the objects we see. Alhazen’s theory was contrary to earlier theories proposing that we could see because our eyes emitted light to illuminate the objects around us.

In the seventeenth century, two distinct models emerged from France to explain the phenomenon of light. The French philosopher and mathematician Rene Descartes believed that an invisible substance, which he called the plenum, permeated the universe. Much like Aristotle, he believed that light was a disturbance that traveled through the plenum, like a wave that travels through water. Pierre Gassendi, a contemporary of Descartes, challenged this theory, asserting that light was made up of discrete particles.

Particles versus Waves

While this controversy developed between rival French philosophers, two of the leading English scientists of the seventeenth century took up the particles-versus-waves battle. Isaac Newton, after seriously considering both models, ultimately decided that light was made up of particles (though he called them corpuscles). Robert Hooke, already a rival of Newton’s and the scientist who would identify and name the cell in 1665, was a proponent of the wave theory. Unlike many before them, these two scientists based their theories on observations of light’s behaviors: reflection and refraction. Reflection, as from a mirror, was a well-known occurrence, but refraction, the now familiar phenomenon by which an object partially submerged in water appears to be “broken,” was not well understood at the time.

Figure 1: A seemingly “broken” pencil in a glass of water is the result of the refraction of light.

Proponents of the particle theory of light pointed to reflection as evidence that light consists of individual particles that bounce off of objects, much like billiard balls. Newton believed that refraction could be explained by his laws of motion, with particles of light as the objects in motion: as light particles approached the boundary between two materials of different densities, such as air and water, the increased gravitational pull of the denser material would cause the particles to change direction.

Newton’s particle theory was also based partly on his observations of how the wave phenomenon of diffraction related to sound. He understood that sound traveled through the air in waves, which is why sound can travel around corners and obstacles; a person in another room can be heard through a doorway. Since light appeared unable to bend around corners or obstacles in the same way, Newton concluded that light did not diffract and therefore could not be a wave.

Hooke and others – most notably the Dutch scientist Christiaan Huygens – believed that refraction occurred because light waves slowed down as they entered a denser medium such as water and changed their direction as a result. These wave theorists believed, like Descartes, that light must travel through some material that permeates space. Huygens dubbed this medium the aether.

Because of Newton’s fame and reputation, many scientists of the seventeenth and eighteenth centuries subscribed to the view that light was a particle. The wave theory of light, however, would receive a major boost at the beginning of the nineteenth century from an English scientist named Thomas Young.

The Waves Have It

On November 24, 1803, Thomas Young stood before the Royal Society of London to present the results of a groundbreaking experiment. Young had devised a simple scheme to see if light demonstrated a behavior particular to waves: interference. To understand this concept, imagine two waves traveling toward each other on a string, as shown in Figure 2:

Figure 2: Traveling wave pulses interfering constructively.

When the waves reach the same part of the string at the same time, as shown in the middle diagram, they will add together and create one wave with double the amplitude (height) of the original waves. This adding together of waves is known as “constructive interference” because the waves combine to construct a new, bigger wave.

Another possible scenario is shown in Figure 3:

Figure 3: Traveling wave pulses interfering destructively.

Here, the two waves approaching each other have equal and opposite amplitudes. When they pass each other (middle diagram), they completely cancel each other out. This canceling effect is known as “destructive interference” because the waves temporarily disappear as they pass.
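Both behaviors follow from the superposition principle: when waves overlap, their displacements simply add point by point. A minimal Python sketch (the `superpose` helper is illustrative, not from any library) shows both cases:

```python
# Superposition principle: overlapping waves add displacement by displacement.

def superpose(wave_a, wave_b):
    """Add two sampled waves point by point."""
    return [a + b for a, b in zip(wave_a, wave_b)]

# Two identical pulses meeting at the same spot: constructive interference.
pulse = [0.0, 0.5, 1.0, 0.5, 0.0]
print(superpose(pulse, pulse))       # [0.0, 1.0, 2.0, 1.0, 0.0] (double the amplitude)

# A pulse meeting its inverted twin: destructive interference.
inverted = [-x for x in pulse]
print(superpose(pulse, inverted))    # [0.0, 0.0, 0.0, 0.0, 0.0] (complete cancellation)
```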

Thomas Young recognized that if light behaved like a wave it would be possible to create patterns of constructive and destructive interference using light. In 1801 he devised an experiment that would force two beams of light to travel different distances before interfering with each other when they reached a screen. To accomplish this, Young set up a mirror to direct a thin beam of sunlight into a darkened room (and an assistant to make sure the mirror aimed the sun’s light properly!). Young split the beam in two by placing a very thin card edgewise in the beam, as shown in the figure below.

Figure 4: Illustration and schematic diagram of Young’s experiment. The edge of the card splits the light into two beams. When the beams meet at the screen, they will have traveled different distances as they bend around the edge of the card. This leads to constructive and destructive interference, depending on whether the beams are in phase or out of phase at particular spots. Where constructive interference occurs, the path difference is an integer multiple of a wavelength (or is zero, as shown earlier), and the intensity of the light hitting the screen is at a maximum. Dark spots appear on the screen where destructive interference occurs, the result of a path difference equal to an odd multiple of one half-wavelength of the light.

When the two beams of light shone on a screen, Young observed a very interesting pattern of light and dark "fringes" where the two beams interfered with each other constructively and destructively. Bright fringes appeared where the intensity of the light hitting the screen was highest, and dark fringes appeared where the intensity was zero. Where the two beams of light were exactly "in phase" (see Figure 5), they interfered constructively and created light that was brighter than either beam by itself. Where the beams of light were exactly "out of phase," they interfered destructively to produce a dark spot where the total light intensity was zero.

Figure 5: In-phase and out-of-phase waves. Top: The red and orange waves are “in phase,” and the combination of these two waves (shown in blue) is a wave with double the amplitude of each original wave. Bottom: The red and orange waves are “out of phase,” and the result (shown in blue) is a wave of zero amplitude.

To understand the pattern of fringes in Young’s experiment, let’s examine the movement of two waves in more detail. Imagine starting with two waves that are perfectly in phase, as shown in Figure 6:

Figure 6: Two waves that are in phase upon reaching the screen at the right side of the figure.

If one wave travels a greater distance than the other, the peaks and troughs of the waves will become offset from one another and they may be out of phase when they reach their destination, as shown in Figure 7.

Figure 7: Two waves that have traveled different distances and are out of phase upon reaching the screen at the right side of the figure.

If the difference in distance traveled by the two waves is even greater, they will reach a point where the peak of one wave aligns with the trough of the other. Finally, if the wave that travels farther follows a path that is exactly one wavelength longer than the path the other wave follows (or two or three or any integer multiple longer), then their peaks will again align and they will arrive at their destination in phase, as shown in Figure 8.

Figure 8: Two waves that have traveled different distances yet are in phase when they reach the screen at the right side of the figure. The additional distance traveled by the red wave (indicated by the vertical green lines) is exactly equal to one wavelength, so the waves arrive at their destination in phase with each other even though they have traveled different distances.

Young realized that the bright spots on his screen occurred where the difference in the length of the path traveled by the beams of light was an integer multiple of the wavelength of the light. The waves that met at this spot were perfectly in phase and had formed a bright spot because the peaks and troughs aligned with each other.

At the spots where there was no light at all, the difference in path lengths was an odd multiple of one half-wavelength, so the two waves were completely out of phase and interfered destructively, as seen in Figure 9.

Figure 9: Two waves that have traveled different distances and are perfectly out of phase when they reach the screen at right. The additional distance traveled by the red wave (indicated by the vertical green lines) is exactly equal to one half-wavelength, so the waves arrive at their destination out of phase and interfere destructively.
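The two conditions Young identified, a path difference of a whole number of wavelengths for bright fringes and an extra half-wavelength for dark ones, can be captured as a small classifier. The following is a hypothetical Python helper (the name and tolerance are illustrative):

```python
# Classify a fringe from the path difference between the two beams.
# Hypothetical helper; names are illustrative, not from the text.

def fringe_type(path_difference, wavelength, tol=1e-9):
    """Return 'bright' for an integer multiple of the wavelength,
    'dark' for an odd multiple of a half-wavelength, else 'intermediate'."""
    cycles = path_difference / wavelength   # path difference measured in wavelengths
    frac = cycles % 1.0                     # fractional part of a cycle
    if min(frac, 1.0 - frac) < tol:
        return "bright"        # waves arrive in phase: constructive interference
    if abs(frac - 0.5) < tol:
        return "dark"          # waves arrive out of phase: destructive interference
    return "intermediate"

wavelength = 500e-9  # green light, 500 nm
print(fringe_type(0.0, wavelength))               # bright (zero path difference)
print(fringe_type(2 * wavelength, wavelength))    # bright (two full wavelengths)
print(fringe_type(2.5 * wavelength, wavelength))  # dark (extra half-wavelength)
```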

Through this experiment (often called Young’s “double-slit” experiment, and ranked by The New York Times in 2002 as science’s fifth most beautiful experiment), Young demonstrated convincingly the wave-like nature of light. His experiment also answered Newton’s objection that light could not bend around corners or obstacles: in passing around the edge of the card, it had done exactly that. Physicists now know that waves bend noticeably around an obstacle only when the size of the obstacle is comparable to the size, or wavelength, of the wave. The card Young used in his apparatus was thin enough that the light did, indeed, bend detectably around it.

In the face of this compelling evidence, nineteenth-century scientists had to concede that light was a wave. This happened slowly, though, hampered by Newton’s reputation and the legacy of his corpuscular theory. Yet once it did take root, the wave picture of light paved the way for the nineteenth-century Scottish physicist James Clerk Maxwell to devise an elegant description of light as an electromagnetic wave, unifying the rapidly developing theories of electricity and magnetism into one complete theory. It was this description that set the stage for a discovery roughly 100 years later, when a young patent clerk in Switzerland by the name of Albert Einstein would show that the conception of light as a wave was not entirely correct, and thereby revolutionize the scientific thinking of the twentieth century.

Waves and Wave Motion

On July 17, 1998, three huge waves – “tsunamis” – up to 15 meters high struck the north coast of Papua New Guinea, killing at least 2,200 people. A major earthquake, itself consisting of waves traveling through the earth, triggered an underwater landslide that created the tsunamis. Radio stations reported the disaster by transmitting electromagnetic radio waves to listeners around the world. Listeners were able to hear the news transported by sound waves created by their radios.

Waves of one form or another can be found in an amazingly diverse range of physical applications, from the oceans to the science of sound. Put simply, a wave is a traveling disturbance. Ocean waves travel for thousands of kilometers through the water. Earthquake waves travel through the Earth, sometimes bouncing off the core of the Earth and making it all the way back to the surface. Sound waves travel through the air to our ears, where we process the disturbances and interpret them.

Ancient Wave Theories

Much of the current understanding of wave motion has come from the study of acoustics. Ancient Greek philosophers, many of whom were interested in music, hypothesized that there was a connection between waves and sound, and that vibrations, or disturbances, must be responsible for sounds. Around 550 BC, Pythagoras observed that vibrating strings produced sound, and worked to determine the mathematical relationships between the lengths of strings that made harmonious tones.

Scientific theories of wave propagation became more prominent in the seventeenth century, when Galileo Galilei (1564-1642) published a clear statement of the connection between vibrating bodies and the sounds they produce. Robert Boyle, in a classic experiment of 1660, showed that sound cannot travel through a vacuum. Isaac Newton published a mathematical description of how sound travels in his Principia (1687). In the eighteenth century, the French mathematician and scientist Jean le Rond d’Alembert derived the wave equation, a thorough and general mathematical description of waves, which laid the foundations for generations of scientists to study and describe wave phenomena.

Wave Basics

Waves can take many forms, but there are two fundamental types: “longitudinal” and “transverse” (see Figures 1 and 2). Both are traveling disturbances, but they differ in how they travel. As a wave travels through a medium, the particles that make up the medium are disturbed from their resting, or “equilibrium,” positions. In a longitudinal wave, the particles are disturbed in a direction parallel to the direction that the wave propagates. A longitudinal wave consists of “compressions” and “rarefactions,” where particles are bunched together and spread out, respectively (see Figure 1). For another view of this type of wave, take a look at the longitudinal wave video clip below.

In a transverse wave, the particles are disturbed in a direction perpendicular to the direction that the wave propagates. The transverse wave video clip below provides a dynamic visualization of this type of wave. After either type of wave passes through a medium, the particles return to their equilibrium positions; waves thus travel through a medium with no net displacement of its particles.
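The parallel-versus-perpendicular distinction can be expressed compactly as a dot product between the wave's propagation direction and the direction of particle displacement. A toy Python sketch (the `wave_type` helper and its inputs are illustrative, not from the text):

```python
# Classify a wave by comparing particle displacement to propagation direction.
# Toy sketch with illustrative names; directions are given as 2-D unit vectors.

def wave_type(propagation, displacement):
    """Classify a wave from its propagation and displacement unit vectors."""
    dot = sum(p * d for p, d in zip(propagation, displacement))
    if abs(abs(dot) - 1.0) < 1e-9:
        return "longitudinal"  # displacement parallel to propagation (e.g. sound)
    if abs(dot) < 1e-9:
        return "transverse"    # displacement perpendicular (e.g. the stadium "Wave")
    return "mixed"             # e.g. ocean waves, Rayleigh surface waves

print(wave_type((1, 0), (1, 0)))  # longitudinal
print(wave_type((1, 0), (0, 1)))  # transverse
```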

Figure 1: A longitudinal wave, made up of compressions (areas where particles are close together) and rarefactions (areas where particles are spread out). The particles move in a direction that is parallel to the direction of wave propagation.


[Video: illustration of a longitudinal wave (QuickTime required)]




Figure 2: A transverse wave. The particles move in a direction that is perpendicular to the direction of wave propagation.


[Video: illustration of a transverse wave (QuickTime required)]



Sound waves are examples of longitudinal waves: the individual particles (air molecules) vibrate back and forth in the direction that the sound is traveling. An example of a transverse wave is the classic sports arena phenomenon known as “The Wave.” As the wave travels around the stadium, each spectator stands up and sits down; thus, the displacement of the “particles” is perpendicular to the direction the wave travels. Many other waves, such as ocean waves or Rayleigh surface waves, are combinations of longitudinal and transverse wave motion.

Describing Waves

The waves we described above are all examples of “periodic waves,” in that they involve a cyclical pattern of motion. Waves travel through space and time, and can be described in terms of their characteristics in both of these dimensions. Imagine a Slinky, a toy that consists solely of a long, loosely coiled piece of metal or plastic. By shaking one end of the slinky up and down in a periodic fashion, it is possible to produce a transverse wave, as shown in the figures below.

Figure 3 represents a snapshot of a slinky, such as the one in the transverse wave video clip, as it is vibrating. The vertical axis represents the vertical position of the slinky, and the horizontal axis represents the horizontal position of the slinky. As indicated in the figure, the amplitude (A) of the wave is the maximum displacement of a particle from its equilibrium position – or the height of the wave. The length of the wave is the wavelength (λ), which is simply the length of one cycle of the wave. In the figure, the wavelength is shown as the distance between two successive wave crests. The wavelength can also be measured between successive troughs, or between any two equivalent points on the wave. Both the amplitude and the wavelength of a wave are commonly measured in meters.

Figure 3: A view of a slinky at a particular moment in time.

Figure 4 is a graph of the displacement of one point on the slinky as a function of time. The amplitude of the wave is still the same measurement as before – the maximum displacement of the point from its equilibrium position. The period of the wave (T) is the time (measured in seconds) required for the point to complete one full cycle of its motion, from its highest point to its lowest and back again.

Figure 4: The motion of one point on a slinky as a function of time.

The frequency of a wave (f) (not indicated in the figure) is a measure of how frequently the point completes one cycle of its motion. In other words, the frequency is the number of wave cycles completed by one point along the wave in a given time period. The frequency of a wave is related to the period of a wave by the following equation:

f = 1/T

where f is the frequency and T is the period. The frequency is measured in cycles per second, or hertz (Hz). If the period of a wave is 10 seconds (i.e., it takes 10 seconds for the wave to complete one cycle), then the frequency is 0.1 Hz. In other words, the wave completes 0.1 cycles every second.
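This reciprocal relationship between frequency and period is easy to check in code. A minimal Python sketch (the function name is illustrative):

```python
# Frequency-period relationship: f = 1/T.

def frequency(period_s):
    """Frequency in hertz for a wave with the given period in seconds."""
    return 1.0 / period_s

print(frequency(10.0))  # 0.1 Hz, as in the example above
print(frequency(0.5))   # 2.0 Hz: two cycles per second
```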

Wave Speed

Remember that a wave is a traveling disturbance. Wave speed is a description of how fast a wave travels. The speed of a wave (v) is related to the frequency, period, and wavelength by the following simple equations:

v = λ/T

v = λf

where v is the wave speed, λ is the wavelength, T is the period, and f is the frequency. Wave speed is commonly measured in units of meters per second (m/s). For example, the musical note “A” is a sound wave with a frequency of 440 Hz. The wavelength of the wave is 78.4 cm. What is the speed of the sound wave?

To determine the speed of the wave, we can use the equation v = λf and substitute the given values for wavelength and frequency, making sure we are using the standard units.

v = λf = (0.784 m)(440 Hz) ≈ 345 m/s

This value (345 m/s) is the approximate speed of sound in air. Interestingly, the speed of sound in air depends on the air's temperature. A musician who plays a wind instrument, such as a trumpet, could tune her trumpet at the warm base of a mountain, hike up to the colder air near the summit, and find that her trumpet is no longer in tune.
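The worked example above can be reproduced in a couple of lines of Python (the function name is illustrative):

```python
# Wave speed from wavelength and frequency: v = wavelength * frequency,
# using standard units (meters and hertz).

def wave_speed(wavelength_m, frequency_hz):
    """Speed of a periodic wave in m/s."""
    return wavelength_m * frequency_hz

# The note "A" at 440 Hz with a 78.4 cm (0.784 m) wavelength:
v = wave_speed(0.784, 440.0)
print(round(v))  # 345 m/s, roughly the speed of sound in air
```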

As the example above illustrates, waves are all around us in everyday life. The ancient Greeks began their study of waves by thinking about music, but now almost every branch of physics involves waves in one way or another.