VISUAL PHYSICS ONLINE

 

  GETTING STARTED WITH PHYSICS

  THE WAYS OF PHYSICS

 

 

 

Ian Cooper

email:  matlabvisualphysics@gmail.com

 

       

 

 


 

 

 

Most people are curious. They want to know how and why things work the way they do. Why is the sky blue? Why is the sunset red? Why does the Moon look different tonight? Physicists ask the same questions, but there is one important difference between curious people and curious physicists: physicists validate their scientific beliefs very differently from most others. This process forms the core of physics; all the other skills and methods used by physicists flow from it.

The same process underlies all the physical sciences, so it is important to understand it. The “scientific method” is the result of a long process of evolution in human thought.

 

I will use the term "pre-scientific" to mean explanations of the physical universe that are deduced by imperfect reasoning from limited observation, or that are justified by appeal to some infallible authority. Often the pre-scientific explanation accords with "common sense". The argument that the Earth is motionless is a good example: if the Earth moved, everything that was not tied down would slide off, and even if things didn't fall off, we would feel the movement. We do not perceive any motion; therefore the Earth must be motionless. This is a good example of a common-sense explanation that is wrong. It rests on the assumption that we can sense or perceive speed or velocity. In fact, what we actually perceive is change in motion (acceleration).

 

The ancient Greek philosophers were the first to attempt to understand the world around us in terms of logic and human reasoning. We still read the works of Socrates, Plato and the other great philosophers for their insights into morality, society and politics. They applied the same principles of logic to the study of nature, and Aristotle was in many ways the first "physicist".

 

Aristotle (384 – 322 BC) distinguished between "substance" and "accidents". The substance of a chair, for example, is what makes it a chair; the colour, shape, material, etc., of a chair are all accidents, or non-essential properties, of the chair. Aristotle thought that the basic substance of all matter was made up of the four elements: fire, earth, air and water. These elements were abstract or idealised substances, not to be confused with real fire, earth, etc. The element earth, for example, causes objects to sink towards the centre of the Universe (i.e., the centre of the Earth), whereas fire moves towards the stars. Aristotle regarded motion as a property which could only be imparted to a body by direct transfer from another body. This raises the question: where does motion originate? To avoid an infinite chain of movers, Aristotle postulated a Prime or Unmoved Mover. Aquinas (c. 1225 – 1274) later used this argument as one of his "natural" proofs of the existence of God.

 

Another example: dense materials contain more "earth" than light ones, and it follows that dense objects like lumps of lead should fall faster than less dense bodies like wooden balls. Aristotelian science has often been criticised for ignoring experimental evidence. Aristotle in fact based his theory on careful observations of nature, but given the limits of technology 2300 years ago it is not surprising that much of his theory rests on inaccurate information.

 

The Romans were less interested in philosophy than in practical matters of administration and engineering. When the Roman Empire collapsed, there were no institutions to preserve the knowledge of the ancient philosophers and this knowledge vanished from Europe. As well, because most of the engineering skills were passed on from master to apprentice and not written down, these too were lost.  There were, of course, other centres of learning in the Mediterranean region, particularly in the east.  These became part of the Islamic world, and it was Islamic scholars who preserved and extended the work of the Greeks. Greek philosophy, and the advances of Islamic mathematics and astronomy in particular, were introduced into Europe through Spain and other areas of contact between the Christian and Islamic worlds.

 

When the writings of the ancient philosophers were rediscovered by the West they were at first regarded as dangerous and even heretical. However, Aquinas was able to reconcile much of Aristotle's work with Christian philosophy and indeed incorporated much of it into his synthesis of Christian theology. To question Aristotelian philosophy was subsequently perceived as an indirect attack on Christianity itself.

 

What we now call science emerged during one of the most turbulent times in European history. The Renaissance began in Italy and revolutionised the European world view. Perspective drawing in the visual arts, new (or rediscovered) technologies like concrete in the building industry, double-entry bookkeeping, and banks were all developed at this time. The times were truly revolutionary, with Martin Luther and others challenging the established Church. While some of the greatest works of Western art were being created, sectarian wars raged and the Roman Inquisition and its Protestant equivalents were torturing and murdering thousands of innocent people. This same period saw the birth of the modern scientific method.

 

The development of the scientific method is frequently credited to the English philosopher Francis Bacon (1561 – 1626), but it was the Italian scientist Galileo (1564 – 1642) who first used it in practice.

 

The key features of the scientific method are:

·       Identify a problem and make an educated guess to explain the phenomenon. This is the hypothesis.

·       Use the hypothesis to make predictions.

·       Devise experiments to test whether or not the predictions are true.

 

If the hypothesis appears to be correct, devise the simplest possible theory that encompasses the hypothesis, the predictions and the experimental results (“Occam’s razor” – first proposed by the medieval philosopher William of Occam).

 

There are two other elements which are often considered part of the scientific method:

·       When performing experiments, keep detailed records of both the procedures used and the experimental results.

·       Having formulated a theory, publish it along with sufficient information about the experimental methods and results so that others can repeat the experiments and verify your theory.

 

One early example of the scientific method at work was the discovery of the laws of planetary motion by Kepler. Johannes Kepler was a contemporary of Galileo and a German Protestant who had access to the finest astronomical observations in Europe (those of Tycho Brahe). Unusually, he was a Platonist rather than an Aristotelian, regarding the visible world as a reflection of a higher world of Ideals. He was aware of Copernicus's work, and he formed a grand picture of the Universe with the Sun at the centre and the planets revolving around it, their orbits determined by divinely placed Platonic solids.

 

Unfortunately, when Kepler examined the observational data, he discovered that his beautiful theory was wrong! Kepler was in fact the only one who had access to the data, but rather than ignoring (or suppressing) the observations, he confronted the facts. In the end, he was forced to conclude that his theory was wrong and that the planets apparently obeyed some totally unexpected laws. In particular, he found that the planets travelled in elliptical paths around the Sun, a result that offended Aristotelians and Platonists alike. He published his preliminary results in 1609; a fuller analysis was published 10 years later. Kepler's intellectual honesty and courage were later vindicated when Newton showed that Kepler's laws were a direct consequence of Newton's own law of universal gravitation.

 

Kepler was the last of the great naked-eye astronomers. In 1610 modern astronomy was born when Galileo published the results obtained with his newly constructed telescope. His book, Sidereus Nuncius, contains direct quotes and sketches from his own notebooks. In it he describes the satellites of Jupiter for the first time. He immediately realised that the only explanation was that they orbited Jupiter "just like Mercury and Venus" orbit the Sun.

 

Isaac Newton (1642 – 1727) was born in the year of Galileo's death. With Einstein (1879 – 1955), he was undoubtedly one of the two greatest physicists of the modern era.  As well as his brilliant investigations into motion, gravitation, and optics, Newton added something new to the scientific method: the language of mathematics. Some would argue, however, that this is both a blessing and a curse.

 

It is a blessing in the sense that mathematical precision allowed Newton (and ourselves) to avoid problems like the paradox of the tortoise and the hare. The paradox, which goes back to antiquity (it is a version of Zeno's paradox of Achilles and the tortoise), is an early example of a Gedanken experiment or "thought experiment" (a device used brilliantly by Einstein). A tortoise – the slowest of animals – challenges a hare – the fastest creature in all Greece – to a race. The hare agrees, but to make the race fair he gives the tortoise a 100 m advantage. They start the race, but when the hare has covered the first 100 m the tortoise has "run" only 10 cm. The hare runs another 10 cm, but the tortoise has moved ahead by a further 0.1 mm. You can see the problem: the tortoise will always be in the lead and the hare can never catch up. However, in a real race the hare will obviously always win. The paradox is: why does the explanation fail to agree with the facts?

 

To solve this particular problem Newton invented what we now call the differential calculus (it was independently discovered by the German mathematician Leibniz at about the same time). The problem with the classical explanation of the tortoise and hare paradox is that it cannot handle the concepts of infinite series and limits. This was a shortcoming of conventional language as much as anything else, and mathematicians have had to invent new "languages" to cope with such concepts.
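
A minimal worked example, using the distances quoted above (so the tortoise moves at 1/1000 of the hare's speed): the hare's successive "catch-up" distances form a geometric series, and the limit concept shows that this infinite series has a finite sum:

\[
d = 100 + 0.1 + 0.0001 + \cdots = 100 \sum_{n=0}^{\infty} r^{n} = \frac{100}{1 - r} = \frac{100}{1 - 0.001} \approx 100.1 \ \text{m}, \qquad r = \frac{1}{1000}
\]

The infinitely many catch-up stages therefore occupy only a finite distance (and a finite time): the hare draws level just past the 100.1 m mark and then passes the tortoise.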

 

The curse of mathematical physics is that it is frequently unintelligible! Truly great physicists, like Einstein, can communicate not only with their peers using highly mathematical formalisms, but can also cogently present their theories to the wider world of intelligent non-specialist readers. There are also many popular accounts of modern physics, which can be found in the science (or, unfortunately, the "New Age" or "alternative lifestyle") section of most bookshops. Popularisations range from the truly excellent to the mind-bogglingly boring, but the real problem lies in books that peddle pseudo-science as the real thing.

 

This brings us back to the scientific method and the basic question underlying this discussion.

Is physics correct?

Newton's second law of motion is often written:

          \(\sum \vec{F} = m\vec{a}\)

 

that is, the resultant force on a body is equal to its mass times its acceleration. In writing this, we assert something about the actual, real world. This equation is not a mathematical theorem; it is based on direct observation of nature. Unlike a purely algebraic identity like

          \(a^2 - b^2 = (a + b)(a - b)\)

 

it is a much richer statement because it asserts something about reality itself.
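
For instance (an illustration with made-up numbers): if a measured net force of 10 N acts on a 2.0 kg laboratory cart, the second law predicts

\[
a = \frac{\sum F}{m} = \frac{10 \ \text{N}}{2.0 \ \text{kg}} = 5.0 \ \text{m s}^{-2}
\]

and that prediction can be checked against the cart's measured acceleration. It is precisely this openness to experimental test that makes the equation an empirical claim rather than a theorem.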

 


 

From the philosophical point of view, however, there is a serious problem: we cannot prove that \(\sum \vec{F} = m\vec{a}\) is true, because it cannot be derived from a set of fundamental, "self-evident" postulates.

 

I cannot prove that \(\sum \vec{F} = m\vec{a}\) any more than I can prove that the Sun will rise in the morning. Both of these are statements about reality, but they cannot be proved in any mathematical, logical or legal sense. This is the Achilles' heel of scientific reasoning.

 

Scientific theories are based on "inductive reasoning". Aristotelian logic, by contrast, is based on deductive reasoning: if a general statement is true, then one can deduce particular statements that must also be true. Classical logic and all of mathematics are based on deductive logic. When we say that something is "true" in the mathematical sense, we mean that it can be deduced by logical reasoning from "first principles".

 

Suppose that you have only ever seen swans in Australia. You notice that all the swans you see are black, so you conclude that all swans are black. This is an example of inductive reasoning. If you go to Europe you will discover white swans, and your inductively reached conclusion is proved false.

 

Pseudo-scientists' favourite gambit is: "It's only a theory". Because science uses inductive reasoning, they argue, scientific theories can never be "proven" and consequently remain merely hypotheses. The fact that mere hypotheses keep planes flying, keep television sets working, and are the basis for all of modern technology is of course irrelevant!

Attempts have been made to strengthen the logical foundations of science. One of the most highly regarded of these is due to the philosopher Karl Popper: we can never prove a physical theory in the mathematical or logical sense, but we can disprove it. It only takes one white swan, for example, to disprove the theory that all swans are black. Scientists are constantly making predictions based on their theories. If a prediction is false, then the theory is disproved. The way we make progress is by devising a new theory that better explains the data. To be scientific, this theory must make testable predictions. This is a key point. A theory which does not make predictions and which cannot be tested is not science but pseudo-science. The statement that "God created the world in six days and rested on the seventh" cannot form the basis of a scientific theory because it is not testable. So-called creation science is not science because its basic assumption (that the account of creation given in Genesis is literally true) cannot be tested.

 

This brief summary of the notion of falsification is misleading in one crucial way: it makes science sound sterile and boring, when in fact it is exciting and fun. If you ask most scientists why they do science, they would probably reply that they enjoy making new discoveries and expanding human knowledge. These are truly exciting and stimulating things to do. As a by-product, they extend theories and push them to the limit, and it is at these intellectual frontiers that the cracks begin to appear in a theory.

 

A modern example of how a single experiment can "falsify" a theory is the discovery of the cosmic background radiation. The rapid advances in astrophysics in the 20th century made it possible to speculate about the physical origin of the Universe itself. In particular, Edwin Hubble in the 1920s observed that distant galaxies appeared to be moving away from us at a rate proportional to their distance. It was soon realised that this could only be explained by assuming that the Universe itself was expanding, in accordance with Einstein's theory of General Relativity.
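
In modern notation this observation is Hubble's law,

\[
v = H_{0}\, d
\]

where \(v\) is a galaxy's recession speed, \(d\) its distance, and \(H_{0}\) the Hubble constant.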

 


Two theories were proposed to explain this expansion:

·       The Universe began in a primordial explosion, and what we now see is the expansion of the debris from that initial explosion (before the explosion there was nothing – neither time nor space – and the Universe is expanding into this nothingness of non-time and non-space). One consequence of this "Big Bang" theory is that there should be some residual radiation from the initial explosion. Using some basic physics, the Russian-American physicist George Gamow and his colleagues were able to predict the properties of this radiation.

 

·       The Universe has always existed. If the Universe is eternal but expanding, its density will gradually decrease. According to the British astrophysicist Fred Hoyle and his co-workers Tommy Gold and Hermann Bondi, matter (in the form of hydrogen) is constantly being created. The creation rate exactly balances the decrease in density due to expansion, and consequently the density of the Universe remains constant. This was known as the steady-state theory.

 

Gamow's theory postulated that the Universe was created in a unique instant at, literally, the beginning of time, and to many this seemed more a theological than a scientific argument. The steady-state theory did not have any worrying philosophical implications and was widely accepted as the correct view of the Universe.

 

In 1964 Penzias and Wilson at the Bell Telephone Laboratories in Holmdel, NJ, investigated the "noise" in a large microwave receiver. Noise is present in all electronic circuits, and it is very difficult to measure a "signal" when it is smaller than the noise level of the receiver. Penzias and Wilson were trying to reduce the noise in a particular radio antenna. They found that there was excess noise that they could not account for. To eliminate all possibilities they even scrubbed their receiving antenna to remove accumulated bird poo! They concluded that this extra noise had to be due to something outside the antenna, and it was soon established that it had exactly the characteristics of the cosmic background radiation predicted by the Big Bang theory.

 


 

Their result showed that the steady-state theory could not be correct, and the Big Bang theory is now the standard model for the origin of the Universe. This example shows how one experiment can indeed falsify an entire theory. Penzias and Wilson were subsequently awarded the Nobel Prize (Physics, 1978) for their work. Their work stands out because it is extremely unusual for a single experiment to disprove or falsify an entire theory.

 

When a new discovery is made, the work is published and other scientists can then verify it in their own laboratories. This ensures that a "discovery" is not the result of instrumental error, misinterpretation of the data, noise, etc. In physics, "cold fusion" is an example. In 1989 researchers claimed that thermonuclear fusion (the nuclear process that powers the Sun) could be achieved at room temperature. The standard theory predicts that fusion can only take place at extremely high temperature and pressure, so this result appeared to falsify the standard theory. It was important to verify this result: not only would it have changed our understanding of nuclear physics, it might have provided a virtually limitless source of inexpensive power. Needless to say, the result could not be reproduced in other laboratories, and cold fusion has been discredited by most physicists.

Scientists are very conservative. This leads to long stable periods in the development of scientific ideas, but also results in occasional revolutions. This picture of the progress of science was developed by the philosopher Thomas Kuhn. "Classical" physics is the term used to describe physics from the time of Newton until about 1900. Classical physics was amazingly successful at describing the nature of the world around us and provided the scientific basis for the industrial revolution and modern communications. Although classical physics explained practically everything, there were some curious exceptions. One difficulty was the photoelectric effect: the relationship between the brightness and colour of the light used and the current that is produced could not be explained by the standard theory of the time. Another was the spectrum of blackbody radiation. Lord Kelvin famously referred to such anomalies as "clouds" on the horizon of physics. Most physicists put these problems in the proverbial "too hard" basket.

 

In 1900 Max Planck deduced an equation that correctly reproduced the observed intensity of blackbody radiation. Planck had made a radical assumption that had no basis in classical physics, and he was frankly uncomfortable with his result. In 1905 Einstein published a series of remarkable papers that triggered what Kuhn has called a paradigm shift. In one of these papers Einstein showed that the photoelectric effect could only be explained by assuming that light consisted of discrete quanta, or particles. His theory could also explain other anomalous results and formed the basis of a new branch of physics: quantum mechanics. In the same year (1905) he invented special relativity, and a decade later general relativity. By 1930 a new consensus had emerged, based firmly on these revolutionary ideas.
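
In modern notation, each quantum (photon) of light of frequency \(f\) carries energy

\[
E = hf
\]

where \(h\) is Planck's constant, and Einstein's photoelectric equation gives the maximum kinetic energy of the ejected electrons as \(K_{max} = hf - \phi\), where \(\phi\) is the work function of the metal. The brightness of the light sets only how many electrons are ejected; the frequency sets their energy – exactly the behaviour classical physics could not explain.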

 

The only comparable episode in physics was the Copernican revolution that began in the late 16th century and it is perhaps not coincidental that both occurred during times of great social and political change.  Both revolutions found their clearest expression in the work of the two greatest scientific minds of Western civilisation: Newton and Einstein.

 

When a new area of science is opened up after a paradigm shift, the "too hard" basket is initially empty. It is only after a discipline matures and scientists explore all the ramifications that problems start appearing. Gradually the "too hard" basket starts filling up, and eventually a genius like Darwin or Einstein comes along and revolutionises the field.

 


 

We conclude that science is, in general, correct. Modern technology, from the mobile telephone to the Space Shuttle, is proof of this. As well, from a purely logical or philosophical point of view, science is correct in the sense that it is testable. It is this characteristic that distinguishes science from pseudo-science. Even when pseudo-scientists make predictions, their attitude is very different from that of scientists. In science, if a result contradicts the theory it is taken very seriously and may eventually lead to a refinement of the theory or even a complete paradigm shift. In the case of pseudo-science the theory is always right: the conflicting data is either ignored or the proponents argue that the conditions weren't right, the experimenters cheated, etc. In other words, they make excuses in order to preserve their theory, rather than confronting the facts.

 

Children are rightly taught to obey authority, but on the other hand they are sceptical and constantly question their parents and teachers.  As adults we tend to conform, and scepticism is usually not regarded as a civic virtue. However, the basis of science is scepticism. In the words of Paul Hewitt (Conceptual Physics, 8th ed., Addison-Wesley, 1998, p. 682),

 


 

Sceptical thinking, in addition to sharpening common sense, is an essential ingredient in formulating a hypothesis that requires a test for wrongness. If I am wrong, how would I know? Shouldn't this be a key question to accompany any important idea – scientific or otherwise? Applied to social, political, and religious positions, you reduce your chances of being deluded. Socially you see others' points of view more clearly. Politically you see all social movements as experiments. Religiously you see science can be an awesome source of spirituality.

 

Physicists come in two flavours: theoretical and experimental. All physicists recognise that physics is ultimately validated by reference to observations, but one can attack problems from two quite different directions. If you ask a physicist to design a better golf club you will get two quite different reactions. A theoretical physicist will start from Newton's equations of motion, add in the elastic properties of the golf ball and club, consider the effect of air resistance, and so on. She will construct a mathematical model that she can then test on a computer, and then try to identify which physical parameters will lead to better performance. An experimental physicist might glue force transducers onto an actual golf club and measure the forces when a golfer hits a ball. He might ask the golfer to use a different grip, or change the mechanical properties of the club by adding weights to it.

 

Which method is better? Most physicists would agree that both are valid ways of tackling the problem. The theoretician starts from the known theory and constructs a "model" of the golf club; this process will often reveal the critical physical parameters that determine the behaviour of the system. The experimentalist, however, approaches the situation with an open mind, and records the actual performance under a variety of conditions. The best physicists attack problems from both ends!

 

The above article is from The Foundations of Physics, Bill Tango, School of Physics, University of Sydney.

 

 

An interesting short article on learning about science in the New Yorker – "Why We Don't Believe In Science", posted by Jonah Lehrer, June 7, 2012.

 

[cited June 2012]

 

http://www.newyorker.com/online/blogs/frontal-cortex/2012/06/brain-experiments-why-we-dont-believe-science.html

 

 

Recently, Gallup announced the results of their latest survey on Americans and evolution. The numbers were a stark blow to high-school science teachers everywhere: forty-six per cent of adults said they believed that “God created humans in their present form within the last 10,000 years.” Only fifteen per cent agreed with the statement that humans had evolved without the guidance of a divine power.

 

What’s most remarkable about these numbers is their stability: these percentages have remained virtually unchanged since Gallup began asking the question, thirty years ago. In 1982, forty-four per cent of Americans held strictly creationist views, a statistically insignificant difference from 2012. Furthermore, the percentage of Americans that believe in biological evolution has only increased by four percentage points over the last twenty years.

 

Such poll data raises questions: Why are some scientific ideas hard to believe in? What makes the human mind so resistant to certain kinds of facts, even when these facts are buttressed by vast amounts of evidence?

 

A new study in Cognition, led by Andrew Shtulman at Occidental College, helps explain the stubbornness of our ignorance. As Shtulman notes, people are not blank slates, eager to assimilate the latest experiments into their world view. Rather, we come equipped with all sorts of naïve intuitions about the world, many of which are untrue. For instance, people naturally believe that heat is a kind of substance, and that the Sun revolves around the Earth. And then there’s the irony of evolution: our views about our own development don’t seem to be evolving.

 

This means that science education is not simply a matter of learning new theories. Rather, it also requires that students unlearn their instincts, shedding false beliefs the way a snake sheds its old skin.

 

To document the tension between new scientific concepts and our pre-scientific hunches, Shtulman invented a simple test. He asked a hundred and fifty college undergraduates who had taken multiple college-level science and math classes to read several hundred scientific statements. The students were asked to assess the truth of these statements as quickly as possible.

 

To make things interesting, Shtulman gave the students statements that were both intuitively and factually true (“The moon revolves around the Earth”) and statements whose scientific truth contradicts our intuitions (“The Earth revolves around the Sun”).

 

As expected, it took students much longer to assess the veracity of true scientific statements that cut against our instincts. In every scientific category, from evolution to astronomy to thermodynamics, students paused before agreeing that the Earth revolves around the Sun, or that pressure produces heat, or that air is composed of matter. Although we know these things are true, we have to push back against our instincts, which leads to a measurable delay.

 

What’s surprising about these results is that even after we internalize a scientific concept—the vast majority of adults now acknowledge the Copernican truth that the Earth is not the centre of the universe—that primal belief lingers in the mind. We never fully unlearn our mistaken intuitions about the world. We just learn to ignore them.

 

Shtulman and colleagues summarize their findings:

 When students learn scientific theories that conflict with earlier, naïve theories, what happens to the earlier theories? Our findings suggest that naïve theories are suppressed by scientific theories but not supplanted by them.  

While this new paper provides a compelling explanation for why Americans are so resistant to particular scientific concepts—the theory of evolution, for instance, contradicts both our naïve intuitions and our religious beliefs—it also builds upon previous research documenting the learning process inside the head. Until we understand why some people believe in science we will never understand why most people don’t.

 

In a 2003 study, Kevin Dunbar, a psychologist at the University of Maryland, showed undergraduates a few short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time—a refutation of Aristotle, who claimed that heavier objects fell faster.

 

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. They found the two balls falling at the same rate to be deeply unrealistic. (Intuitively, we’re all Aristotelians.) Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the centre of the brain. The A.C.C. is typically associated with the perception of errors and contradictions—neuroscientists often refer to it as part of the “Oh shit!” circuit—so it makes sense that it would be turned on when we watch a video of something that seems wrong, even if it’s right.

 

This data isn’t shocking; we already know that most undergrads lack a basic understanding of science. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to identify the error; they knew Galileo’s version was correct.

 


 

But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful. If you don’t want to think about the ice cream in the freezer, or need to focus on some tedious task, your D.L.P.F.C. is probably hard at work.

 

According to Dunbar, the reason the physics majors had to recruit the D.L.P.F.C. is because they were busy suppressing their intuitions, resisting the allure of Aristotle’s error. It would be so much more convenient if the laws of physics lined up with our naïve beliefs—or if evolution was wrong and living things didn’t evolve through random mutation. But reality is not a mirror; science is full of awkward facts. And this is why believing in the right version of things takes work.


 

Of course, that extra mental labour isn’t always pleasant. (There’s a reason they call it “cognitive dissonance.”) It took a few hundred years for the Copernican revolution to go mainstream. At the present rate, the Darwinian revolution, at least in America, will take just as long.

 

Read more

 

http://www.newyorker.com/online/blogs/frontal-cortex/2012/06/brain-experiments-why-we-dont-believe-science.html#ixzz1xYpnX3xO