40,000 fMRI Studies Are Not Wrong

A common saying in the media used to be “If it bleeds, it leads.” Nowadays, people talk about click-bait headlines. The more salacious and tantalizing the headline, the more likely people are to click on the article, driving up profit for web-based advertising. The newsworthiness and veracity of the story are inconsequential. Unfortunately, click-bait language is now entering academia.

Recently, a paper was published in the Proceedings of the National Academy of Sciences called “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates” by Anders Eklund, Thomas E. Nichols, and Hans Knutsson. The paper uses resting-state fMRI data to estimate the familywise error rate of group statistical analyses that use a cluster threshold to control for multiple comparisons. When 5% false positives were expected, the study instead found much higher false-positive rates, up to 70%. The authors state that the “results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.”
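
To make the mechanics concrete, here is a toy simulation of my own (not the paper’s pipeline): run many group analyses on pure noise, apply a cluster-extent threshold, and count how often at least one cluster survives. The grid size, smoothing width, and cluster-extent cutoff below are arbitrary illustrative choices.

```python
# Toy sketch: empirical familywise error rate of cluster-extent thresholding
# on null (pure noise) data. All parameters are illustrative, not the paper's.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
n_sim, n_subj = 500, 20      # simulated group analyses; subjects per analysis
shape = (32, 32)             # toy 2D "brain" slice
cdt_p = 0.01                 # cluster-defining threshold (voxelwise p-value)
k_min = 10                   # assumed minimum cluster extent, in voxels

t_crit = stats.t.ppf(1 - cdt_p, df=n_subj - 1)
hits = 0
for _ in range(n_sim):
    # Smoothed Gaussian noise gives spatially correlated, signal-free "subjects"
    subj = np.stack([ndimage.gaussian_filter(rng.standard_normal(shape), sigma=2)
                     for _ in range(n_subj)])
    # One-sample t-test at every voxel (the true effect is zero everywhere)
    t_map = subj.mean(0) / (subj.std(0, ddof=1) / np.sqrt(n_subj))
    labels, n_clust = ndimage.label(t_map > t_crit)
    sizes = np.bincount(labels.ravel())[1:]  # voxel count per cluster
    if sizes.size and sizes.max() >= k_min:
        hits += 1  # a "significant" cluster appeared in pure noise

print(f"empirical familywise error rate: {hits / n_sim:.2f}")  # compare to 0.05
```

If the cluster-extent cutoff rests on wrong assumptions about the noise, the printed rate drifts away from the nominal 5%, which is exactly the kind of mismatch the paper measured.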

This is an example of click-bait language entering academic papers, and it is a foolish (and unprofessional) claim to make. To support a claim like this, the authors would need to comb through hundreds of fMRI papers, determining how many actually use cluster thresholds with large (lenient) voxelwise p-values. They do not do this. Nevertheless, the paper caused a firestorm in popular science journalism online, generating headlines such as these:

Bug in fMRI software calls 15 years of research into question (Wired)

A bug in fMRI software could invalidate 15 years of brain research (Science alert)

Tens of thousands of FMRI brain studies may be flawed (Forbes)

Software faults raise questions about the validity of brain studies (Ars Technica)

15 years of brain research has been invalidated by a software bug, says Swedish scientists (International Business Times)

When big data is bad data (ZDNet)

Thousands of fMRI brain studies in doubt due to software flaws (New Scientist)

These headlines are disappointing for many reasons. First, it was not a software bug that was discovered. Rather, the study found that incorrect assumptions about fMRI data lead to a higher-than-expected rate of false positives. Second, the paper did not find that big data is bad data. Instead, it used big data to find the problem. This is an example of big data helping science.

In reality, the paper reports some very important findings. The study identified two incorrect assumptions that inflate the significance of statistical analyses when cluster thresholds are used. First, the imaging data have non-Gaussian spatial autocorrelation functions. Spatial autocorrelation is a fancy way of saying that neighboring voxels (3D pixels) have similar time-course signals. It had previously been assumed that this similarity could be described by a simple Gaussian model; the Eklund study showed that this is wrong, as the real autocorrelation has a heavier long-range tail. Second, the study found that the spatial smoothness of the images is not constant throughout the brain. Instead, the smoothness varies from region to region.
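
To illustrate the distinction with a sketch of my own (all parameters invented for illustration), compare a purely Gaussian autocorrelation function with one that has a heavier, longer-range tail. At large distances the Gaussian model predicts essentially zero correlation, while the heavy-tailed function retains some, and it is this kind of long-range mismatch that distorts cluster-size expectations.

```python
# Sketch: a Gaussian spatial autocorrelation function vs. one with a heavier
# tail. Parameters (sigma, lam, w) are invented for illustration only.
import numpy as np

def acf_gaussian(r, sigma=2.0):
    """Squared-exponential (Gaussian) autocorrelation."""
    return np.exp(-r**2 / (2 * sigma**2))

def acf_heavy_tailed(r, sigma=2.0, lam=5.0, w=0.7):
    """Mixture: mostly Gaussian near zero, plus a slowly decaying
    exponential tail that keeps long-range correlations alive."""
    return w * np.exp(-r**2 / (2 * sigma**2)) + (1 - w) * np.exp(-r / lam)

for r in [0.0, 2.0, 5.0, 10.0]:
    print(f"r = {r:4.1f} voxels: Gaussian = {acf_gaussian(r):.4f}, "
          f"heavy-tailed = {acf_heavy_tailed(r):.4f}")
```

At ten voxels the Gaussian model says the correlation is effectively zero, while the heavy-tailed version still predicts a few percent, enough to make large noise clusters more common than the Gaussian assumption expects.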

Both of these findings are important and should be taken into account when applying a cluster threshold. Eklund et al. suggest the use of non-parametric statistical methods, meaning resampling approaches such as permutation tests to determine the statistical significance of clusters. This is one good solution, but others exist as well. Researchers could choose not to use cluster thresholds and instead rely on false discovery rate (FDR) thresholding, which fMRI researchers have been using since at least 2002. It may also be possible to use smaller (more stringent) cluster-defining thresholds, also called voxelwise p-values.
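
As a rough sketch of the nonparametric idea (assuming a one-sample design; the function names and parameters here are my own, not any specific package’s): build a null distribution of the largest cluster size by randomly flipping the sign of each subject’s contrast map, then ask where the observed largest cluster falls in that distribution.

```python
# Bare-bones sketch of a sign-flipping permutation test for cluster
# significance in a one-sample design; names and parameters are illustrative.
import numpy as np
from scipy import ndimage

def t_map(data):
    """Voxelwise one-sample t-statistics for data shaped (subjects, x, y)."""
    n = data.shape[0]
    return data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))

def max_cluster_size(stat_map, t_crit):
    """Voxel count of the largest suprathreshold cluster."""
    labels, n_clusters = ndimage.label(stat_map > t_crit)
    return 0 if n_clusters == 0 else np.bincount(labels.ravel())[1:].max()

def cluster_fwe_p(data, t_crit, n_perm=1000, seed=0):
    """Familywise p-value for the observed largest cluster, built from a
    null distribution of max cluster sizes under random sign flips."""
    rng = np.random.default_rng(seed)
    observed = max_cluster_size(t_map(data), t_crit)
    null = [max_cluster_size(t_map(data * rng.choice([-1.0, 1.0],
                                                     size=(data.shape[0], 1, 1))),
                             t_crit)
            for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

# Example: null data (no real effect), 16 subjects on a toy 24x24 slice
rng = np.random.default_rng(1)
data = rng.standard_normal((16, 24, 24))
print(cluster_fwe_p(data, t_crit=2.6))  # t_crit ~ voxelwise p=0.01 at df=15
```

Sign flipping is valid for a one-sample design when the noise is symmetric about zero; a two-sample design would permute group labels instead. Either way, the null distribution comes from the data itself rather than from a Gaussian smoothness assumption.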

The authors make one more very important point. They state that “the fMRI community should, in our opinion, focus on validation of existing methods.” This is most definitely true. Unfortunately, researchers are generally more motivated to develop new methods than to refine old ones. New ideas grab grants and publications in leading journals. There is not much personal incentive for scientists to validate methods and repeat studies. However, the neuroimaging community would be much better served if existing methods were validated, especially as there are many potential clinical uses of fMRI on the horizon.

One of the authors of the PNAS paper, Thomas Nichols, recently wrote a blog post in which he recalculated the number of fMRI papers that could be affected by the cluster threshold problem. He argued that roughly 3,500 fMRI papers are likely affected, not 40,000. An erratum has been written by the authors and accepted by the journal. There is most likely some number of papers that use faulty statistical methodology, yielding inflated significance of activation. However, 3,500 is a rough estimate, not a true figure.

So, no, 40,000 fMRI papers are not wrong. And yet, the flurry of online articles making this hyperbolic claim has already been written and absorbed into the collective consciousness of the science-following public. I doubt we will see as many science journalists writing articles about the correction. Errata do not make for click-bait headlines.

In Defense of Evidence

Neil deGrasse Tyson recently tweeted a proposal for a virtual country called #Rationalia, whose one-line constitution would be that all policy be based on the weight of evidence.

This 21-word tweet caused sociologist Jeffrey Guhin to write an article denouncing him on the website Slate.

This article makes some good points, but it also has serious flaws. First is the straw man argument it sets up: “Science should also teach us how to live, pointing us toward the salvation religion once promised.” This is not what Tyson said, nor would he say something so foolish. Tyson is not endorsing scientism. No rational scientist would. Tyson’s tweet was about tying policy to evidence. Guhin makes no argument against using evidence to decide policy, completely missing Tyson’s point.

Guhin points out several flawed beliefs that have traveled under the name of science in human history: phrenology, eugenics, scientific racism. These examples have nothing to do with evidence. The latter two do not qualify as science as we currently define the term; they are policies, not hypotheses. Furthermore, there is no evidence to support any of these ideas, so, following Tyson’s tweet, there is no rational reason to follow them. Guhin claims that “Part of the problem here is that nobody really knows what science means.” In reality, I would argue that most scientists follow the definition of science put forth by Karl Popper: science is the testing of falsifiable hypotheses.

The author goes on to discuss cognitive biases like motivated reasoning. And yes, individual humans are prone to many kinds of bias and fallacy. That includes scientists. But science as a whole, the community of scientists, functions well as a self-critical and self-correcting entity. Bad claims, like the Piltdown Man, are eventually exposed and discarded. Even good hypotheses can be discarded: Newton’s theory of gravity worked well until Einstein started thinking about gravity on a grander scale. That’s what makes science different from all other belief systems. It’s the weight of the evidence that matters.

Guhin finishes with an emotionally charged statement, saying “Scientists can’t tell us if it’s right to kill a baby with a developmental disability, despite how well they might marshal evidence about the baby’s relative life or her capacity to think or move on her own.” Again, no rational person would say that science should be a guide to morality. But science can give us evidence about our world, and, as communities of people, we can weigh that evidence alongside our personal views of morality to decide what is right and wrong. If we were to discard evidence, as Guhin seems to imply, we would be acting on nothing but uninformed opinions. That is a step that would take us back to the Dark Ages.

Tweets on Twitter are too short to represent a person’s full philosophy or nuanced way of thinking. I consider it quite irrational to take a tweet, assume a meaning, and write a rant against it. A much more rational response would be to engage the person in dialogue and find out what they mean. Such a dialogue may not give you a click-bait headline to make money, but it would allow a constructive exchange of ideas.

Irrationality of Easter

I have no desire to convince people to stop believing in religion. Your beliefs are your own, and you are free to believe as you feel. However, when I think analytically about the Easter holiday, I can’t help but find serious logical problems.

According to 1 Corinthians 15: 3-7:

[3] For what I received I passed on to you as of first importance: that Christ died for our sins according to the Scriptures, [4] that he was buried, that he was raised on the third day according to the Scriptures, [5] and that he appeared to Cephas, and then to the Twelve. [6] After that, he appeared to more than five hundred of the brothers and sisters at the same time, most of whom are still living, though some have fallen asleep. [7] Then he appeared to James, then to all the apostles,

The belief written here is that Jesus Christ died for our sins (whatever that means). This sacrifice is the basis of the crucifixion story, which is celebrated by Christians every Easter. I’m going to break this story apart and explain why it is highly problematic.

1) Death is permanent

Death is defined as the termination of all biological functions that sustain a living organism. Death is permanent. If a human ceases biological function, especially in a manner as severe as crucifixion, there is no known way for that person to resume biological function.

You have to invoke a faith-based belief in a supernatural power to imagine that Jesus could be reanimated and brought back to life. Personally, I cannot take this leap of irrationality. But, for the sake of this argument, let’s assume that this is possible. (Maybe using advanced nanobots or something.)

2) You can’t claim to be dead if you’re alive

In verses 4 through 6 above, it states that Jesus was raised on the third day and appeared [alive] to his disciples and 500 others. Other Gospels state that Jesus appeared to Mary Magdalene. Clearly, the story goes that Jesus rose from the dead, meaning he was no longer dead. That Jesus changed his status to “alive” retroactively undoes the whole “dying for our sins” bit. Logically speaking, if Jesus became alive once more (however that might work), he cannot get credit for the sacrifice that was his death.

To put it another way, either Jesus died, and that was the sacrifice, or he didn’t die, and there was no great sacrifice.

3) Living forever is no sacrifice

Mark 16:19 states:

So then the Lord Jesus, after he had spoken to them, was taken up into heaven and sat down at the right hand of God.

The Bible makes it clear that following the brief period of death and the appearance of being alive once more, Jesus went to heaven where, presumably, he was able to live happily forever. Ignoring the fact that he abandoned his colleagues on Earth and failed to fulfill the prophecy of becoming the Messiah, how can anyone attribute to Jesus a great sacrifice when he escapes his duties on Earth and gets to live forever far, far away? It makes no rational sense whatsoever that being dead for under two days, leaving all your friends on Earth, and going to heaven to live forever can be considered a sacrifice.

4) He abandoned his mission

At this point, we reach the biggest question of all: if Jesus spent his life helping poor people, was believed to be the Messiah, survived crucifixion and came back to life, why didn’t he stay on Earth to continue his mission? Seriously, if he had supernatural powers and could take five loaves of bread and two fish and feed 5,000 people, why didn’t he continue this work? If he could survive being crucified by the Romans, why didn’t he stay on Earth to snub those who tried to end his life?

To me, a hero is someone who overcomes obstacles and continues doing what he thinks is right. A hero is someone who does not abandon his work when the going gets tough. A hero is someone who stays committed to the job. By coming back to life and leaving Earth and all his people behind, Jesus surely did not act like a hero. He sounds like someone who decided things had gotten too rough and aborted his mission.

Or maybe, he really did die. By Occam’s Razor, that seems the simpler explanation.

But that’s just my two cents. Believe what you feel is right.

– Analytical Cortex

Science is Like a Living Document

When someone asks the question “What is science?”, I often refer to the ideas of the Austrian-British philosopher Karl Popper, who wrote that for an idea to be part of science, it must be falsifiable. That is, the idea must be such that you can design an experiment to show whether it is false. For example, you can propose the scientific idea (or hypothesis) that when you let go of a ball, gravity will always pull it straight down to the ground. If this idea were false, the ball would follow a different trajectory, such as a curved path to the ground, or go straight up. You can test this idea by repeatedly dropping a ball. If, after 100 drops, you never see the ball do anything besides go straight down, you can argue that you have failed to falsify the idea that the ball will only fall straight down. Therefore, you would be inclined to accept the hypothesis that gravity only pulls dropped balls straight down. Generally speaking, after a hypothesis has been tested by many people, all of whom fail to falsify it, it becomes a theory. (Note: I am ignoring the effect of the Coriolis force, which will cause your ball to take a slightly curved path. In practice, this will not be noticeable for ordinary balls.)

One disadvantage of using science as the center of your belief system is that science never says what is definitively true. Instead, our ideas (our theories) are conditionally true. We accept them until we find evidence to the contrary. When new evidence arises, we must alter our belief system. This level of uncertainty may not sit well with some people, as many prefer belief systems with permanence. I will try to argue here that belief in conditionally true science is actually a good thing.

In law and business, there is a term called a living document. This refers to a document that is regularly edited and updated as new information is discovered. Science is like a living document. Instead of the rigid, textbook formalism taught in schools, where science is a set of laws discovered by people long since deceased, it is much better to think of science as a living document of ideas that have failed to be falsified. In other words, science is a living document of theories that have been shown to make accurate predictions about our universe. Over time, many researchers conducting many experiments have given us ample empirical evidence to believe in these theories.

The beauty of thinking of science as a living document is that it gives us both flexibility of change and the comfort of mountains of empirical evidence. For example, there is no reason to say that Einstein’s Theory of General Relativity is the final answer to gravity and no one is allowed to challenge it. It is perfectly acceptable to allow the possibility of another, greater theory of physics that can explain General Relativity and even more aspects of the universe (such as, say, quantum mechanics). The living document of science is wonderfully flexible.

On the other hand, the living document of science is filled with the empirical evidence gathered by scientists over hundreds of years. For example, ever since Charles Darwin proposed the Theory of Natural Selection, scientists from different fields have contributed mountains of evidence supporting this idea. Geological evidence shows the Earth to be 4.5 billion years old. The fossil record shows an enormous variety of organisms that have lived and gone extinct. The discovery of DNA has given us the language of evolution. Although science leaves room for another explanation, the living document of science is overflowing with evidence supporting Natural Selection, making alternative views exceedingly unlikely.

In a way, science resembles Zeno’s (Dichotomy) Paradox. With every movement in the paradox, we cover half the remaining distance to the goal, yet even with an infinite number of movements, we never reach the goal. The same is true of science: with every experiment or discovery, we get closer to the truth, but we will never arrive at absolute truth. The living document of science is a Zeno’s Paradox pursuit of truth.
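
To put the analogy in numbers (a small aside of mine, not part of the original framing): after n movements, the fraction of the distance covered is

```latex
\sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n},
```

which gets arbitrarily close to 1 but never equals it for any finite n.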

In contrast, religion generally does not take the living document approach. Most religions teach a set of principles, customs, and stories of origin without room for change or improvement. If you don’t like the beliefs in your religion, no matter how much conflict they create, there is no wiggle room to change the religion. At best, you can leave the religion and form a new one. But the original religion stands firm. This is one big difference between science and religion. As a belief system whose ideas are conditionally true, science is more flexible than religion at adapting to new discoveries about the world. Furthermore, the living document of science contains hundreds of years of empirical evidence to support its ideas. These characteristics make science both elegant and useful.

– Analytical Cortex

Atheism vs. Agnosticism

If you’re an analytical thinker like me, you’ve probably had this argument in your head. Are you an atheist or an agnostic? What’s the difference? Is it important?

A Google search for the definition of atheist returns: “a person who disbelieves or lacks belief in the existence of God or gods.” Similarly, a Google search for the definition of agnostic returns: “a person who believes that nothing is known or can be known of the existence or nature of God or of anything beyond material phenomena; a person who claims neither faith nor disbelief in God.”

An analytical mind might see the first definition and argue: since I see no evidence for God or gods, I lack belief in God and must therefore identify as an atheist. If belief in God or gods is required to be a theist, that belief is implied to be faith-based. As an analytical mind, you probably prefer to base your beliefs on evidence, not on faith. Therefore, you would call yourself an atheist.

Likewise, an analytical mind might look at the second definition, which implies that a lack of omniscience on your part requires you to be open to the possibility of God or “anything beyond material phenomena,” since you lack the ability to prove or disprove such a notion. Lacking absolute certainty, it is more logical to proceed with an open mind and identify yourself as an agnostic.

Both of these trains of thought are logical, but are they not mutually contradictory? I argue that the problem is that both definitions depend on the definition of God, which itself is complicated. One definition for God is: “(in Christianity and other monotheistic religions) the creator and ruler of the universe and source of all moral authority; the supreme being.” A second is: “(in certain other religions) a superhuman being or spirit worshiped as having power over nature or human fortunes; a deity.”

The first definition is more specific and implies an intelligent life form who not only created the Universe, but is also the source of morality (and, oddly, a “ruler”). Our best scientific explanation for the origin of the Universe is that it was created in a Big Bang, an expansion from a state of high density and energy. Cosmology has shown how the Universe can evolve from that earliest state to the present state of galaxies, stars, and planets. There is no evidence that an intelligent creator needed to be present for this process to occur. Furthermore, there is a logical problem with the claim that an intelligent life form created the Universe: it does not explain how the intelligent creator itself came to exist. And it introduces an unnecessary amount of complexity before the current Universe existed. There is no evidence for any extra complexity before our Universe existed, nor is there any logical reason for there to be. A 13.8-billion-year progression of the Universe from a state of high density to the present is sufficient to explain our current Universe.

The first definition also mentions God being a source of moral authority. Certainly, human societies have moral codes. However, there is no evidence that a supernatural being has handed down morality. As we see by studying the world, these moral codes vary from one society to another and from one era to another. For 6,000 years, humans found slavery acceptable. Today we do not. Some current societies accept gay marriage. Others do not. Morality is constantly being altered, not by some higher power, but by the increased knowledge we acquire from science and the increased economic output of the labor-saving technologies that science makes possible. Furthermore, our morality is largely based on the instincts we inherit as members of a social species. Our species requires us to exhibit behaviors like altruism and empathy. Other species have completely different codes of behavior. For example, the praying mantis exhibits a behavior called sexual cannibalism: during sexual reproduction, the female bites off the head of the male. Picturing humans exhibiting this type of behavior provokes an extreme sense of revulsion. (Not to mention the obvious loss of manpower that would result, clearly weakening human societies. The mantis species apparently does not suffer from this loss.) Clearly, morality is based partially on the instincts of each species and is species-dependent. There is no need, and no evidence, for a higher power acting as a moral authority.

The second definition of God is more ambiguous: a superhuman being having power over Nature or human fortunes. No creation of the Universe is implied, and multiple gods could fit this definition. Could there be super-intelligent beings out there in the Universe who have powers we cannot imagine? Could they control Nature in ways we could not? Could they be here on Earth controlling human fortunes? Logically speaking, this type of god (lowercase “g”) has some possibility. Just as humans evolved from single-celled life over 4 billion years on Earth, other intelligent life could have evolved elsewhere in the Universe and traveled here. However, there is no evidence for such super-intelligent life forms, and positing them adds unnecessary complexity to our picture of the world.

Based on this analysis, I would argue that it is more logical to be an atheist regarding the first definition of God, but more logical to be an agnostic regarding the second.

A further problem, though, is that the word God means different things to different people. Because there is no evidence for a creator, a moral authority, or superhuman beings that control Nature and human fortunes, it seems unnecessary for us to include a word like “God” in our vocabulary. For words to be useful, there should be agreed-upon definitions and evidence of the subject’s existence. For the word “God,” this is not the case. Plenty of people have created their own personal definitions of God, and there is no agreed-upon evidence for a God. Therefore, I would argue that any word whose definition depends on the definition of God (such as atheist and agnostic) is problematic.

That is why 20 years ago, when I had the internal debate of whether to call myself an atheist or an agnostic, I settled on the term secular humanist. The first part of the term is secular, meaning “denoting attitudes, activities, or other things that have no religious or spiritual basis.” I basically take it to mean not believing in anything supernatural, nothing beyond the laws of the Universe. The second part of the term is humanist, meaning “a person having a strong interest in or concern for human welfare, values, and dignity.” I prefer referring to myself using a term based on things I believe in rather than referring to myself in terms of things I do not believe in. I also don’t believe in a flying spaghetti monster, but I don’t feel the need to call myself a flying spaghetti monster nonbeliever or a flying spaghetti monster agnostic. There are an unlimited number of irrational ideas people can conceive. Instead of making a list of things I don’t believe in, I refer to myself as one who believes in secular humanism, which, in simple terms, is: a person who believes that people should help people and that only people can help people. Of course, how exactly humans should help other humans and to what extent we should help other humans – these are tough questions with no easy answers.

– Analytical Cortex