Math isn’t science

ScienceDaily has posted an article on some new discoveries about Plato’s thought, based on codes recently found hidden in his writings. But it seems that ScienceDaily is confused about what was important about the Scientific Revolution.

The hidden codes show that Plato anticipated the Scientific Revolution 2,000 years before Isaac Newton, discovering its most important idea — the book of nature is written in the language of mathematics. The decoded messages also open up a surprising way to unite science and religion. The awe and beauty we feel in nature, Plato says, shows that it is divine; discovering the scientific order of nature is getting closer to God. This could transform today’s culture wars between science and religion.

The book of nature follows laws and patterns, and mathematics is the name we have given to the collection of tools we use for exploring patterns. But, is mathematics the most important idea to come out of the Scientific Revolution? Certainly not. The wonders of mathematics were celebrated long before the Scientific Revolution.

That math is useful for understanding astronomy was discovered by all of the great pre-scientific civilizations. It is an ancient idea. It is great that Plato noticed it too, but this hardly anticipates the Scientific Revolution.

The Scientific Revolution was a revolution in the ideas of astronomy, but also a revolution in the question of how we know things. Previously, being able to describe something with math meant that you knew that thing. In astronomy, it was already well understood that the motion of the planets could be described by math, thanks to the ancient astronomers. In fact, they had beautiful mathematics which could predict the positions of the planets to whatever precision they desired. If math is what science is about, they were done!

The problem with the Ptolemaic system for describing the planets is that there are no orbits you could observe that can’t be described by it.
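That unfalsifiability has a simple mathematical reason: a system of deferents and epicycles is just a Fourier series in the complex plane, so with enough circles it can trace out any closed orbit to any precision. Here is a minimal numerical sketch of that idea; the sample orbit and the function name are invented for illustration:

```python
import numpy as np

# Positions of a planet along some closed orbit, sampled at equal
# time steps and encoded as complex numbers (x + iy).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
path = np.cos(t) + 0.3j * np.sin(t) + 0.1 * np.cos(3 * t)  # a lopsided, non-circular loop

# The DFT coefficients are exactly the epicycle amplitudes: each
# frequency k is one circle of radius |c_k| turning k times per period.
coeffs = np.fft.fft(path) / len(path)

def epicycle_approx(coeffs, t, n_epicycles):
    """Rebuild the path from the n largest epicycles (circles)."""
    n = len(coeffs)
    ks = np.argsort(-np.abs(coeffs))[:n_epicycles]
    freqs = np.where(ks <= n // 2, ks, ks - n)  # signed rotation rates
    approx = np.zeros_like(t, dtype=complex)
    for k, f in zip(ks, freqs):
        approx += coeffs[k] * np.exp(1j * f * t)
    return approx

# This toy orbit happens to need only four circles; a stranger curve
# would simply need more of them.
error = np.max(np.abs(epicycle_approx(coeffs, t, 4) - path))
```

Adding circles always drives the fitting error down, so no observed orbit shape alone can ever rule the model out; only a different kind of evidence could.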

The only prediction the Ptolemaic system really makes is that planetary orbits will happen in a plane. Since this is true, the model is ‘good enough’ for any astronomer who only needs to predict positions. But the Ptolemaic system was eventually overthrown, thanks in part to Kepler’s prediction of the transit of Venus, which he made by throwing away the accepted ideas about epicycles and circular orbits. With Newton’s unification of gravity with Kepler’s laws, the Scientific Revolution fundamentally changed how people thought about planetary motion. The key to this change was that the leaders of the Scientific Revolution did not argue about what was true from the beauty of mathematics (circles were considered much more beautiful than Kepler’s ellipses); they argued from physical evidence. Science rejects the beautiful in favour of what is actually true. I’ll let you decide for yourself how this idea factors into ‘today’s culture wars between science and religion’.

The real big idea of the Scientific Revolution was that you can get better beliefs about how astronomy works by critically evaluating your ideas against physical evidence. Science is about the discovery that even the most beautiful mathematical models can be overthrown by new physical evidence. Science is not about math, it is about evidence.


Scratching an itch makes you feel good

After a summer barbecue that stretched on past sunset, I awoke the next morning to find myself covered in itchy mosquito bites.

Some friends told me not to scratch the bites. They said scratching only makes the itching worse. This is a popular piece of folk wisdom that dermatologists call the “itch-scratch cycle”.

Sometimes scratching relieves isolated itches, hence the existence of devices such as the back scratcher. Often, however, scratching can intensify itching and even cause further damage to the skin, dubbed the “itch-scratch-itch cycle”.

[emphasis mine]

It is not hard to find this belief reflected on the internet in both reputable and casual sources from a simple Google search: (1) (2) (3). Apparently, many doctors and dermatologists believe it.

However, I’m skeptical. Recent studies have confirmed the popular perception that scratching relieves itching. It can’t make the itch both better and worse, can it? What’s going on here?

A recent article from the scientific literature sheds some light:

As referred to by the definition of itch sensation, itch is accompanied by the desire to scratch, which in turn reduces the itch. In addition, a hedonic experience (algedonic pleasure [188]) can accompany the scratching of an itch: “At least it may be noted that scratching an itch with a violence that would cause pain elsewhere may be experienced as one of the most exquisite pleasures” [189]. The hedonic aspect of scratching can be problematic in chronic itch: patients with atopic dermatitis may report that they scratch until it no longer provokes pleasant sensations rather than until the itch has subsided [190]. However, albeit being a key factor in the vicious itch/scratch cycle, this clinically important aspect has gained only little scientific attention [187, 191].

According to the evidence, scratching makes an itch feel better. The itch/scratch cycle is caused by scratching feeling good, not by scratching causing more itching.

I imagine the “scratching makes itching worse” nonsense was spread by parents hoping to frighten children (and doctors hoping to frighten patients) away from excessive scratching. This strategy makes as much sense as telling people with an eating disorder that potato chips will only make them hungrier in the long run. It may cause people to stop overindulging in pleasurable activities, but it’s dishonest because it’s not the real reason not to overindulge. It’s just bullshit.

If you have a personal policy of never succumbing to hedonistic tendencies and only engaging in activities that produce long-term benefits, then scratching should be avoided. However, for everyone else, scratching can be done in moderation without damaging the skin and is quite pleasurable.

Faith in the science classroom

PZ Myers has recently written an excellent post about faith and skepticism. An interesting point raised there is what should be done about students who give faith-based answers to exam problems in a science course. In particular, how should one handle a faith-based answer on a cosmology exam? On the one hand, Pamela Gay argues that dismissing a student’s beliefs may place a wall between the student and the instructor that prevents learning. On the other hand, one worries about the message being sent to the student about how science works.

While I was an undergraduate student of physics, I was asked to go to a local high school with one of my professors to promote physics degrees to the senior students and answer questions about what it’s like to study science. One of the questions we got was whether it is okay to be a scientist and have faith.

I was about to answer “No! But, it’s great. When you study science, you learn that you don’t need to believe books and authority figures. You don’t need to believe things because you learned them when you were young. Things are true or false for other reasons. Better reasons! Evidence, not faith! That’s the coolest part about studying science!”

Luckily, the professor I was with was quicker on the draw than me and gave the politically correct answer: There are lots of scientists who balance their religious lives with their work lives and produce great science in the process.

For a lot of students, this is a big concern with learning science. A skeptic may be used to repeatedly adopting and discarding their beliefs as they become aware of different evidence, but to a religious person beliefs can form a major part of their identity. Having to give up beliefs–give up identity–to study science is a frightening prospect.

To me, the problem is similar to the age-old problem of students who get the wrong answer on an exam because the teacher said something untrue or misleading in a lecture: the students reasoned from authority, got the wrong answer, and claimed it wasn’t their fault because they were just repeating what the authority figure told them. As a student, I’ve made this very argument to successfully get extra marks on exams. After all, if the poor reasoning is an authority figure’s fault and not the student’s, it would be unfair to penalize the student, right?

Of course, appealing to authority is not good reasoning, so it is fair to penalize it in a science classroom. We are already used to giving students partial marks for math errors, so why not reasoning errors too? But, one has to be aware that reasoning from authority is the only tool in the toolbox for a majority of students. By college, a student has been rewarded for writing down knowledge handed down from textbooks and authority figures for twelve years. If the rules suddenly change, and the tools that they’ve developed to secure scholastic success are suddenly being punished, the student is likely not to understand the reason behind the punishment. These otherwise talented students are routinely chased away from science classes into arts classes where things are more familiar to them. This then contributes to an overall societal problem: some people love science, but everyone else hates, misunderstands, and fears science because of bad experiences in the science classroom.

It’s a huge mistake to ask students to reason like scientists all at once. What happens is that the science classroom becomes a filter where students who were already thinking like scientists get passed through and a large fraction of the students are filtered out, possibly scared away from science forever. Science is a different kind of reasoning than what is normal for humans, and it needs to be introduced early and reinforced frequently throughout a person’s education. There is a poorly studied process by which a normal person begins to think more like a scientist, and I doubt that training a person for many years to reproduce answers from a textbook before suddenly flicking the scientist switch is a useful way to lead students through that transition. As a society, however, we’ve collectively decided that it’s okay to put off teaching scientific reasoning until a student enters graduate school, where they suddenly need to do professional-quality scientific reasoning. One imagines that if we were better at education as a society, we could find a way to make this transition much more gradual and welcoming. Moreover, we could find ways to get more than just a few elite students through the transition.

A faith-based answer has no place on a science test and penalizing it is fair. In fact, one should not hesitate to send a message that this kind of reasoning is considered wrong in science. Not sending that message could be damaging to the student’s beliefs and understanding about how science works. However, the situation is delicate because bright religious students can easily go and study something else if they become offended. Good riddance, some of you may say. But, whenever this happens, science loses: a bright person who may have become pro-science is chased away to the opposing team.

Soccer as a science experiment

You know you are doing science when you are wearing a lab coat, using big words like ‘hypothesis’ and ‘experimental controls’, and when there are lots of beakers of bubbling fluids around. But, the basic ideas behind scientific thinking are actually quite simple and they have been a part of regular people’s lives since well before people started talking about science formally.

To illustrate that point, consider soccer. Here is a thoroughly fictional account of the history of soccer using the science jargon we all loved so much from school.

Two friends from neighbouring villages met up for their weekly tea and began to argue about whose village had the most athletic people.

“Harry from our village can pull trees out of the ground with his bare hands,” claimed Alice from village A.

“Susan from our village runs so fast she can run across the lake before she sinks!” claimed Bob from village B.

As the discussion went back and forth, the claims made by Alice and Bob got further and further from reality. They were both just saying things that would help them win the argument without really caring if they were true or not.

Although they may not have understood these words, both eventually realized that they couldn’t settle this argument with unverified anecdotes. They should find a meaningful way to compare the athleticism of their respective villages. What they need is a science experiment!

They sit down together and design a game that requires the kind of athleticism that each of them thinks is most important. The idea is that winning the game will be correlated with having the most athletic village. If they design their game well, the correlation will be strong and winning the soccer match will be a good measure of athleticism.

They sit and think hard about what a good goal for their game is. Bob soon realizes that if the only rule is to get the ball in the net, there are ways to do that without using athleticism. So, he invents the concept of experimental controls. They write in the rules that you aren’t allowed to attack the other players and you can’t use your hands to get the ball in the net. These controls are designed to keep the experimenters from being fooled about which team is the most athletic, and they will be enforced by officials watching the game on the field. Suppose the full set of rules they come up with corresponds to what we know as soccer.

Alice and Bob head to the lab to perform their experiment.

After the game, Bob is upset after losing. “That’s not fair. Tony got injured and we had the sun in our eyes. We should do best of 3!”

Bob has stumbled on the idea of repeatability (perhaps he could also stumble onto experimental controls for injuries and sun in eyes). Maybe the experiment was influenced by something besides athleticism. So, that brings up doubt that the experiment can be repeated to obtain the same results. Since Alice is confident of her hypothesis (that her team will win because they are the better athletes), she is certain the experiment is repeatable. She agrees to play best of 3.

Village A is victorious once more. Bob is embarrassed, but realizes that you can’t measure something as complicated as athleticism with just one kind of measurement. You should find lots of ways to measure athleticism and then figure out some way to add up all those measurements. In this sense, the Olympics are a better measurement of a nation’s athleticism than is the World Cup.

As the argument continues, the experiments get better and better. Alice and Bob learn a great deal about exactly what is meant by ‘athleticism’ in the process. Eventually, there is enough evidence supporting one of the villages that no one in the world would doubt which village has the superior athletes. So, knowledge has increased about which village is stronger and faster. But, knowledge has also increased about how to measure ‘athleticism’ properly. When we do science, we learn something about nature, but we also learn something about how to do science better in the future.

Sports are just one way we exercise scientific thinking. Tests in school are another kind of science experiment. They are measurements of student learning. An election is a science experiment designed to measure popular will or political legitimacy. All of these kinds of experiments share many of the core ideas behind more familiar kinds of science: experimental controls, hypothesis testing, measurements, careful design. So, when you cut through the silly jargon that scientists use, the concepts are already familiar to all of us.

All of these example experiments can be done with varying amounts of precision, accuracy, and carefulness. Sometimes they are done quite poorly, because science does not come naturally to us. But, the same can be true even of formal science experiments. At the end of the day, they are all just attempts to make a measurement in order to evaluate a claim. ‘Science’ refers to a set of values and practices that help us to do that well when other people would prefer to do it poorly.

Science is not normal

Scientific thinking is difficult to teach and difficult to communicate to people who are inexperienced in thinking like a scientist. The reason is that scientific thinking is different from ‘normal’ thinking. It is not the kind of thinking that a person wants to do or ever feels inclined to do. As you can imagine, this makes teaching scientific thought extraordinarily difficult.

One way of looking at the problem is in terms of confirmation bias. It turns out that people tend to pay close attention to evidence that matches their beliefs, and ignore evidence that conflicts with their beliefs.

This phenomenon can be seen all over the place: in the TV shows a person watches, beliefs about other people’s personalities, gender and cultural stereotypes, and even the books a person chooses to read.

People who already supported Obama were the same people buying books which painted him in a positive light. People who already disliked Obama were the ones buying books painting him in a negative light.

We can see from a simple experiment in confirmation bias that folks do not come pre-wired from birth to be scientists:

A popular method for teaching confirmation bias, first introduced by P.C. Wason in 1960, is to show the following numbers to a classroom: 2, 4, 6

The teacher then asks the classroom to guess the teacher’s secret rule by offering up three numbers of their own. The teacher says “yes” or “no” according to whether the numbers fit the rule. When a student thinks they have it figured out, they have to write it down and turn it in.

Students typically offer sets like 10, 12, 14 or 22, 24, 26. The teacher says “yes” over and over again, and the majority of people turn in the wrong answer.

To figure out the rule, students would have to offer sets like 2, 2, 2 or 9, 8, 7 – these, the teacher would say, do not fit the rule. With enough guesses playing against what the students think the rule may be, students finally figure out what the original rule was (three numbers in ascending order).

The exercise is intended to show how you tend to come up with a hypothesis and then work to prove it right instead of working to prove it wrong. Once satisfied, you stop searching.

The scientist knows that the right way to detect a pattern is to form a belief from the data, then try to falsify that belief, and form a new one as needed. Only when the scientist can no longer falsify a belief should he or she consider the pattern satisfactorily detected.
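The 2, 4, 6 task above can be sketched in a few lines of code. The rule is the one Wason used; the probe triples are the standard illustration, not from any particular classroom:

```python
def secret_rule(triple):
    """The teacher's hidden rule: any three numbers in strictly ascending order."""
    a, b, c = triple
    return a < b < c

# A student whose hypothesis is "numbers increasing by 2" and who only
# tests confirming cases hears "yes" forever, so the wrong hypothesis
# is never threatened.
confirming_probes = [(10, 12, 14), (22, 24, 26), (100, 102, 104)]
assert all(secret_rule(p) for p in confirming_probes)  # "yes", "yes", "yes"

# Probes chosen to *break* the hypothesis are the informative ones:
# under "increasing by 2", (1, 2, 100) should get a "no", but the
# teacher says "yes" -- the hypothesis is falsified in one question.
assert secret_rule((1, 2, 100))    # "yes": falsifies "increasing by 2"
assert not secret_rule((9, 8, 7))  # "no": descending
assert not secret_rule((2, 2, 2))  # "no": not strictly ascending
```

The confirming probes return “yes” no matter how wrong the student’s hypothesis is; a single deliberately hypothesis-breaking probe is worth more than any number of confirmations.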

However, even highly trained scientists only think like scientists some of the time. If you have attended a college lecture, you have probably seen scientists who would actively challenge their own beliefs about chemistry, but who have superstitions about teaching which remain unchallenged. Notably, many practicing scientists proudly carry beliefs rooted in faith, in defiance of the value they place on evidence and falsification in their working lives.

So, it seems that even with scientific training, all that has been accomplished is segmenting a person’s mind into sections that are ruled by evidence and critical thought while other sections are still ruled by confirmation bias and unchallenged beliefs.

We see this phenomenon reflected in our society at large, as well. Questions, fields of research, and fields of study are divided into scientific and other. Our thoughts about how nature works are to be challenged and falsified, but our thoughts about poetry and literature are to be felt from our guts. Our medicines are to be tested and recommended only when there is sufficient evidence, but our educational methods are judged by the amount of tradition supporting them instead of the amount of evidence.

We often hear the message that each person is entitled to their beliefs. This message is often interpreted as meaning that each person’s ideas are just as good or as valuable as anyone else’s; some beliefs are not to be challenged. So, our history of being intolerant toward people with ideas different from ours has generated an unfortunate meme that poisons people against embracing science. The mixed message is that some beliefs are subject to scientific thinking and others are not.

To a student suffering from confirmation bias, the message is that you clearly don’t have to practice critical thinking. After all, this is the message your brain prefers to hear.

From an educational point of view, we are making a huge mistake if we simply tell students the definition of hypothesis, show them a flowchart demonstrating the scientific method, then dust off our hands and conclude that they have an idea of how science works and why it is valuable to practice thinking like a scientist. The cards are generally stacked against students seeing the point of disciplined reasoning. Because science is not a normal way for the mind to work, critical thinking needs to be emphasized over and over throughout a person’s life. It requires frequent practice, and constant encouragement and motivation.

Are we doing a good job of teaching it? Are we sure?

I was a skeptic until…

The Vancouver Sun discusses how easy it can be to transform a faux skeptic into a believer:

We’ve all heard the infamous anecdote used in a multitude of advertisements for various flavours of charlatanism, “I was a skeptic until I tried (insert your favourite snake oil product here).” It’s an effective and compelling sales pitch, so much so that it’s become overused to the point of being old and hoary. Scientists and those trained in testing methodology no longer know whether to laugh or facepalm when we hear it.

The answer is facepalm.

The charlatans will say “don’t knock it until you try it”, but the “real” skeptic wouldn’t try it at all until there was evidence. After all, you shouldn’t base decisions on the promise of future evidence. You should base your decisions on the evidence currently available. In this case, the burden of proof is on those selling the products.

Put science to work for you. Approach all questionable products with the null hypothesis. The null hypothesis merely states that if there’s no proof, there’s probably nothing extraordinary going on with this product, and it’s not yet worth your money. If there’s good science demonstrating the product’s effectiveness, you have a right to see that proof before you buy. The burden of proof is theirs; you are not obligated to either accept their claims or to prove them false. And you are certainly not obligated to hand over your money to buy the product, to “try it before you knock it,” as some salesmen say we should.

Science gets misused and abused

The Economist has written a review of a recent book called Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes and Erik Conway. Apparently, the book is about how science can be misrepresented to sway public opinion and win political decisions.

In this powerful book, Naomi Oreskes and Erik Conway, two historians of science, show how big tobacco’s disreputable and self-serving tactics were adapted for later use in a number of debates about the environment. Their story takes in nuclear winter, missile defence, acid rain and the ozone layer. In all these debates a relatively small cadre of right-wing scientists, some of them eminent, worked through organisations sometimes created specially for the purpose to take on a scientific establishment that they perceived to be dangerously unsympathetic to the interests of capital and national security.

If bullshitters (misrepresenters of science) can clutter up the discussion until it is too confusing to sort the good science from the fake science, it becomes harder and harder for people to make clear decisions. So, on one side of the battle are the people who care about making good decisions based on high-quality evidence. For them, a small amount of information takes a great deal of pain and expense to produce and analyse. On the other side of the battle are the bullshitters who hope to overload your cognitive abilities with a torrent of confusing and misleading information, often contradicting the careful science. For them, the information comes much more easily and in much greater quantities, since it’s dishonest.

This highlights the importance of general science education. Everyone (scientists, voters, politicians, business owners, consumers, etc.) has to enter the above-mentioned battlefield and sort out for themselves which information is worth listening to and which is just noise. This is why understanding what science is and how it works is not just the responsibility of people who do science for a living; it is each person’s individual responsibility to enable themselves to make sense of all of the information, good and bad. Science filters out the noise.