The Scientific Method: What It Actually Is

What is a scientific model?

Here’s the central point: science is a model of reality, not reality itself. It tries to approximate reality as closely as possible. For example, the kinetic theory of gases treats molecules as rigid spheres. It would not be correct to say that the molecules really are rigid spheres; they are merely modeled that way, and the theory works wonderfully well. Of course, it doesn’t exactly match experimental results, and we should not expect it to.

Another important point: Experiments are supreme.

If it disagrees with experiment, it’s wrong.

– Richard Feynman, late theoretical physicist at Caltech

How true. Nothing else matters. If theory predicts a rise in temperature and the experiment shows a decrease, the theory is wrong. Feynman also makes the brilliant point that we can never know that we are right; we can only know that we are wrong. Say we have a theory. We test it in a case in which it applies. If the result agrees with the prediction, can we say that the theory is correct? NO! Only that it hasn’t been proved wrong. An experiment conducted in the future may yet produce a result that contradicts the theory. Until then, as long as experiments keep verifying the theory, it merely remains unfalsified. But it can never be proved absolutely correct.

Albert Einstein Cartoon
Think! It helps.

From hypothesis to theory:

  • Make a guess

A theory starts off as a hypothesis: a good guess. The guess needs to be checked for simple cases first and then for more intricate ones. The hypothesis needs to produce numbers that can be checked against experiments that can actually be conducted, i.e. it needs to be falsifiable.

  • Falsifiability: What can I do to prove it wrong?

Falsifiability is a vital criterion for a hypothesis to be taken seriously at all. (The first thing to ask: how can I prove this wrong? If there is no answer, forget about the hypothesis! ‘Not even wrong’ is the worst insult!)

  • Can it explain the known? Finding faults is not enough.

Next, we need to see how general the hypothesis is. Can it explain all of the results that are explained by the existing theory? We have to find where the two clash, and find out which one prevails at that point. Does the hypothesis say something more than the old theory does? Does it make up for the limitations of the old theory? The important point here is that it is not enough to show that the old theory fails to explain a certain phenomenon; the new hypothesis must also succeed at explaining everything the old theory can explain. (Thus, creationism is NOT a theory, not even a valid hypothesis!)

  • Peer review, peer review, peer review.

THE most essential step for a hypothesis to become a theory is peer review. It needs to be published in a science journal. A hypothesis is rarely completely correct, but the good parts are generally noticed by readers.

It should go without saying that not every accepted theory is a revolution. You never know whether a theory will turn out to be revolutionary. Don’t do science aiming for that! You’ll be disappointed.

Richard Feynman: Key to Science on YouTube


Telescopes: Observing the Cosmos

The cosmos is a mysterious place, but we know a surprising amount about it. This knowledge, known as ‘astronomy’, has been acquired primarily by observing the skies night after night, year after year. For a long time, people used their eyes as the instrument of observation, but the problem is that the opening of the eye, called the pupil, is just too small: about one-eighth of an inch across. Very little light gets through, and thus dim cosmic objects are invisible to the naked eye. For progress, we needed bigger apertures with which to collect light, and something that could automatically record images of the night sky. It turns out that bigger apertures emerged long before the recording instruments.

One facet: ‘We need a bigger aperture’

A telescope is just a bigger and simpler version of the eye. Light falls on an opening, called the aperture, and is focused onto a smaller opening, called the eyepiece, for a person to look through. Telescopes are basically of two types: the reflecting ones and the refracting ones.

types of telescopes
Two types of optical telescopes

The main aim is to maximize the aperture for greater collection of light. This is easier for reflecting telescopes than for refracting ones. The problem is that a bigger aperture needs a bigger lens. Lenses get heavy as they grow, and since a lens can only be supported around its edge, it tends to sag under its own weight. A reflecting telescope’s bigger aperture does require a bigger mirror, but a mirror can be supported across its entire back, so that is much less of a problem. This is why reflecting telescopes have taken over: all large optical telescopes today are reflectors.

Examples of really large modern optical telescopes are the Keck telescopes and SALT. The most successful of all telescopes is, of course, the Hubble Space Telescope.
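Why the obsession with aperture? The light-gathering power of a telescope scales as the square of its diameter. Here is a minimal Python sketch of that arithmetic, assuming a 10 m Keck primary mirror and the one-eighth-inch (~3.2 mm) pupil mentioned above:

```python
# Light-gathering power scales with aperture area (~ diameter squared).
# Assumed diameters: Keck primary ~10 m, pupil ~3.2 mm (one-eighth inch).

def light_gathering_ratio(d1_m: float, d2_m: float) -> float:
    """Ratio of light collected by two circular apertures."""
    return (d1_m / d2_m) ** 2

keck_diameter = 10.0     # metres
pupil_diameter = 0.0032  # metres (~1/8 inch)

print(f"Keck collects ~{light_gathering_ratio(keck_diameter, pupil_diameter):.1e} "
      "times more light than the naked eye")
# ~9.8e+06: roughly ten million times more light
```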

Edwin Hubble gazing through his telescope at Mount Wilson
Edwin Hubble gazes through his giant reflecting telescope at Mt. Wilson
Keck
The pair of Keck telescopes, both reflecting types


Another facet: ‘We need a large telescope in space’

Much of the electromagnetic spectrum never reaches the earth’s surface, which is a major stumbling block for observations outside the optical band. The image below illustrates this best. Note which wavelength regions reach the ground and which do not.

Transmission of EM waves by the atmosphere
Showing the transmission of Electromagnetic waves by the atmosphere


Thus, it is impossible to build a ground-based gamma-ray or x-ray telescope as long as we have an atmosphere. The only option is to send the telescope off into space. The Chandra X-ray Observatory is the best of the x-ray telescopes.

Chandra X-Ray Telescope
The best we have for X-Rays: Chandra X-ray telescope

The atmosphere creates problems even in the optical band. Air movements and thermal effects create density gradients in the atmosphere, both locally and globally, which distort the passage of optical light, an effect astronomers call ‘seeing’. Hence the Hubble Space Telescope was a necessity rather than a luxury, and how it has proved itself!

The Hubble Space telescope
Into space for a better view: The Hubble Space Telescope
Carina Nebula captured by HST
Spectacular Hubble: Carina Nebula (NGC 3372), one of the most active stellar nurseries

Radio is a band in which a great deal of observation has been done. Radio telescopes are easy to build, and radio waves are not attenuated by the atmosphere. By combining the signals from many separate dishes, a technique called interferometry, very precise information can be obtained. Cosmic structures often contain regions where the gas and dust density is so high that optical light and even x-rays are absorbed; these optically opaque regions are conveniently observed in the radio band.

Very Large Array
A network of radio telescopes: The Very Large Array (VLA), New Mexico

Advanced imaging techniques have greatly improved the amount of information we can obtain from an observation. Advances in solid-state physics have given us excellent image-recording devices like CCDs and CMOS sensors. One thing is for sure: once man had looked up and seen the cosmic wonders, he was sure to get addicted.

Laser Cooling

One of the coolest things in physics is routinely used as a tool to make things really cold. Lasers are used to cool a cloud of atoms to extremely low temperatures, down in the micro-kelvin range. Extremely successful, laser cooling is surely one of the hottest topics in physics.

First, let’s note a few important points:

  1. The ‘temperature’ of a substance is the average kinetic energy of its constituent atoms/molecules. This is the definition. And, yes, this is exactly what we measure with a thermometer. Thus, in short, the faster the atoms in a substance move, the higher its temperature.
  2. There is a well-known effect called the Doppler effect. If you move towards a light source (or equivalently, if the source moves towards you), the frequency of the light you see goes up (“blueshift”). This is, obviously, a velocity-dependent phenomenon: the faster you move, the bigger the shift in observed frequency. Similarly, if you move away from the source, the frequency goes down (“redshift”). So if a stationary observer sees green light, an observer moving towards the source might see blue light, while an observer moving away might see red light.
  3. A substance absorbs energy only at discrete, specific frequencies of radiation. For example, if a substance absorbs at the frequency of red light, shining light of a slightly lower frequency on it will not cause any transition; that light will simply not be absorbed. This is due to the quantum nature of the energy levels of the atoms.
  4. Light is made up of tiny packets of energy called photons. Each photon carries a single value of energy, set by the frequency of the radiation: the higher the frequency, the higher the energy. Photons also carry momentum, equal to their energy divided by the speed of light (p = E/c).
Doppler Effect
Frequency tends to increase as one moves towards source: Doppler effect
Doppler Shift in Spectra
Doppler Effect: Notice the bands and how they are shifted either to blue or to red

Armed with these points, we can finish the explanation of laser cooling in a few lines. Take an atom and irradiate it with light of a frequency slightly lower than the one it would absorb. The light just bounces off without being absorbed (point 3). Now say the atom is moving towards the laser. The frequency it sees is greater than the static laser frequency (point 2). This higher frequency is now good enough for absorption.
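As a quick numerical check of how large this Doppler shift actually is, here is a short Python sketch. The 780 nm wavelength (the rubidium cooling line) and the ~300 m/s thermal speed are illustrative assumptions, not figures from the text above:

```python
# Non-relativistic Doppler shift: delta_f / f = v / c.
# Assumptions: 780 nm light (rubidium cooling transition), atom moving
# towards the laser at a room-temperature thermal speed of ~300 m/s.

c = 3.0e8            # speed of light, m/s
wavelength = 780e-9  # m
v_atom = 300.0       # m/s

f0 = c / wavelength        # laser frequency as emitted, Hz
delta_f = f0 * v_atom / c  # extra frequency seen by the moving atom, Hz

print(f"laser frequency : {f0:.3e} Hz")
print(f"Doppler shift   : {delta_f/1e6:.0f} MHz")
# ~385 MHz: tiny compared to f0, but large compared to the few-MHz
# width of a typical atomic absorption line, so the detuning matters.
```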

Imagine two lasers irradiating the atom from opposite directions. Whichever way the atom moves, it is moving towards one of the beams; it sees that beam shifted up into resonance and absorbs photons that push against its motion.

Schematics of laser cooling
a) A group of atoms is irradiated; assume it moves to the right. b) It then encounters the laser beam, which appears slightly bluer than it would to a static atom, and absorbs radiation. c) It re-emits the radiation, losing momentum and energy, and is thus 'cooled'. d) The process repeats.

Now this excited atom releases its energy as a photon, emitted in a random direction. Over many absorption-emission cycles the random emission kicks average out to zero, while the absorbed photons consistently oppose the atom’s motion. The net result is that the atom’s kinetic energy decreases. But this is precisely a lowering of temperature (point 1). Lo and behold, we have cooling.
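To see why many such cycles are needed, here is a rough Python estimate of the velocity change per photon. The rubidium-87 mass and the 780 nm wavelength are, again, assumptions for illustration:

```python
# Each absorbed photon changes the atom's velocity by p_photon / m_atom.
# Assumptions: 780 nm photons, rubidium-87 atom (m ~ 1.44e-25 kg),
# initial thermal speed ~300 m/s.

h = 6.626e-34        # Planck constant, J*s
wavelength = 780e-9  # m
m_atom = 1.44e-25    # kg (rubidium-87)
v_initial = 300.0    # m/s

p_photon = h / wavelength  # photon momentum, kg*m/s
dv = p_photon / m_atom     # velocity kick per absorbed photon

print(f"velocity kick per photon : {dv*1000:.1f} mm/s")
print(f"photons to stop the atom : {v_initial/dv:.0f}")
# ~6 mm/s per photon, so on the order of 50,000 absorption-emission
# cycles are needed -- which a laser supplies with ease.
```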


Radiation Risks: How much is too much?

Many people are scared stiff by the possibility of an explosion at the Japanese nuclear reactor. They believe that nuclear fallout will affect life as we know it. Since radiation from radioactive substances takes a long time to die down, and these substances can be carried across large distances by strong wind currents, the consequences, they fear, might be catastrophic. Well, this is only partially correct.

First point to note: radiation from a radioactive source NEVER goes to zero. There is always something there, no matter how feeble. This is not a bad thing. It means that we already live in a bath of weak radiation. Radiation comes from various sources, like radon gas (the main source!), cosmic rays (which bombard atoms in the atmosphere), elements in the soil and even food. This is completely harmless. Even we are radioactive: just stand in front of a Geiger-Muller counter and hear the clicks from the disintegration of atoms in your own body! All of this constitutes the background. The crucial point here is that once the radiation from a source is close to or below the background, it is harmless.

Sources of background radiation
Sources of background radiation and relative contribution

Next point: radiation flux decreases as the inverse square of the distance, and once absorption and ionization in the air are taken into account, the decrease is even faster. This essentially means that radioactive material high up in the atmosphere has a negligible effect on life on the ground.
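Here is a minimal Python sketch of the bare inverse-square falloff (atmospheric absorption ignored, so real fluxes drop even faster than this prints):

```python
# Inverse-square law: flux = source_strength / (4 * pi * r^2).
# Absorption and ionization in air are ignored in this sketch.
from math import pi

def flux(source_strength: float, r_m: float) -> float:
    """Radiation flux at distance r from a point source."""
    return source_strength / (4 * pi * r_m ** 2)

s = 1.0  # arbitrary source strength
for r in (1, 10, 100, 1000):
    print(f"r = {r:>4} m -> flux = {flux(s, r):.2e}")
# Every factor of 10 in distance cuts the flux by a factor of 100.
```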


Getting Quantitative:

For measurements, we need units. The standard units are the following:

  1. Gray (Gy): Unit of absorbed dose, the amount of radiation energy absorbed per unit mass of matter. 1 Gy = 1 joule/kilogram.
  2. Sievert (Sv): Unit of equivalent dose, which weights the absorbed dose by its biological effect on the human body. Dose in Sv = W × dose in Gy, where W is a weighting factor.

The weighting factor (W) in the definition of the sievert measures how damaging a given type of radiation is to the human body. For example, W = 1 for electrons, muons, x-rays etc. These are less harmful than neutrons at 50 keV, which have W = 10. Alpha particles from a close enough source have W = 20. (Higher weight = more damaging.) A separate set of tissue weighting factors accounts for which portion of the body receives the radiation: bone has a weight of 0.01, whereas the gonads have 0.2.

1 Sv is a very high dose, so units like the ‘rem’ (1 rem = 0.01 Sv) or the milli-sievert (1 mSv = 0.001 Sv) are used instead.
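A small Python helper showing how these units fit together, using the weighting factors quoted above (the conversion constants are standard):

```python
# Dose unit relations: dose_Sv = W * dose_Gy; 1 rem = 0.01 Sv; 1 mSv = 0.001 Sv.

def equivalent_dose_sv(dose_gy: float, weight: float) -> float:
    """Equivalent dose in sieverts from absorbed dose (Gy) and weighting factor W."""
    return weight * dose_gy

# Example: 1 mGy absorbed from x-rays (W = 1) vs alpha particles (W = 20).
for radiation, w in (("x-rays", 1), ("alpha", 20)):
    sv = equivalent_dose_sv(1e-3, w)
    print(f"{radiation:>6}: {sv*1000:.0f} mSv = {sv/0.01:.1f} rem")
# Same absorbed energy, twenty times the biological damage for alpha.
```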


How much is safe?

There are Threshold Limit Value (TLV) lists available. These recommend that a person receive no more than 50 mSv (i.e. 5 rem) in any single year, and no more than 20 mSv (2 rem) per year averaged over a five-year period. These limits vary with a person’s location and occupation.

These values are slightly misleading, because they miss the important point that a large dose (say 50 mSv) delivered all at once is much more damaging than the same amount of radiation spread over a long period, like a year.

A normal person, not working in or near a nuclear power plant, receives less than 10% of the allowed dose. So don’t worry about the radiation. It’s just not that dangerous.


Earthquakes: The Science of Shaking Earth

There is simple science behind the quake that shook Japan recently, and behind those that have occurred throughout Earth’s history. We consider here only those earthquakes caused by forces arising from beneath the earth’s surface, commonly called tectonic forces.

Our earth’s crust is broken into many fragments, all floating on the semi-solid rock of the earth’s mantle. These fragments, called tectonic plates, are thus capable of moving, albeit very slowly, just a bit each year. Stress builds up along the edges between plates when one cannot move past the other. An earthquake occurs when the plates suddenly slip past each other along the edges where they meet, and the earth shakes as huge amounts of stored energy are released into the surrounding medium.

tectonic plates

There are two types of waves by which the energy is transferred: the primary or P wave, and the secondary or S wave. They are fundamentally different, and the P wave travels much faster than the S wave. The P wave travels by compressing the ground at some points and stretching it at others, just like sound waves in air; this is called the longitudinal mode. The S wave travels by undulating the ground perpendicular to the direction of propagation, like light waves; this is called the transverse mode.

types of waves
Longitudinal and Transverse Waves


Locating the epicenter: How do scientists know where the center of the earthquake (the epicenter) is? Since the P wave travels faster than the S wave, it arrives at any given place before the S wave. Places near the epicenter experience the two in quick succession, while far-off places experience a greater delay between the two waves. Using delay data from several earthquake monitoring stations, one can triangulate the position of the epicenter and even estimate how far below the earth’s surface the rupture (the focus) lies.
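A toy version of the distance estimate: if the P and S wave speeds are known, the S-minus-P delay alone fixes the distance to each station. The wave speeds below are typical crustal values, assumed for illustration:

```python
# Distance from S-P delay: d = dt / (1/v_s - 1/v_p).
# Assumed typical crustal speeds: P ~ 8 km/s, S ~ 4.5 km/s.

V_P = 8.0  # km/s
V_S = 4.5  # km/s

def distance_km(sp_delay_s: float) -> float:
    """Distance to the epicenter from the S-minus-P arrival delay."""
    return sp_delay_s / (1.0 / V_S - 1.0 / V_P)

print(f"{distance_km(30.0):.0f} km for a 30 s delay")  # ~309 km
# Distances from three such stations triangulate the epicenter.
```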

Measuring an earthquake: What does a “magnitude 7 earthquake on the Richter Scale” mean? The Richter scale magnitude measures how much the earth shakes. It does so by considering how much the needle of the seismograph (figure below) oscillates.

seismograph
Seismograph

The Richter scale is logarithmic. This means that a magnitude 5 earthquake produces 10 times as much shaking on the seismograph as a magnitude 4 quake. A magnitude 6 produces 10 times the shaking of a magnitude 5, and 100 times that of a magnitude 4. (In terms of energy released, each whole-number step corresponds to a factor of about 32.)
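In code, the logarithmic scale looks like this (the factor-of-32 rule comes from the standard relation that seismic energy grows as ten to the power of 1.5 times the magnitude):

```python
# Richter magnitudes are logarithmic: each whole step means 10x the
# shaking amplitude and roughly 10**1.5 ~ 32x the energy released.

def amplitude_ratio(m1: float, m2: float) -> float:
    """How much more the seismograph needle swings for magnitude m2 vs m1."""
    return 10 ** (m2 - m1)

def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of seismic energy released."""
    return 10 ** (1.5 * (m2 - m1))

print(f"mag 7 vs mag 5: {amplitude_ratio(5, 7):.0f}x shaking, "
      f"{energy_ratio(5, 7):.0f}x energy")
# 100x shaking, ~1000x energy
```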

Earthquakes are often followed by smaller ones called aftershocks (and are sometimes preceded by ‘foreshocks’). This happens because the crust takes time to settle into its new configuration, releasing the remaining stress in smaller slips.

Prediction of earthquakes is impossible with current technology and will likely remain so for a very long time. As far as the data suggest, earthquakes are completely random; no occurrence pattern has ever been observed. Whether animals can really ‘sense’ earthquakes before they happen remains a matter of speculation.