We’ve all done some form of practical experiment at some point, whether in a professional research lab or in secondary school with a couple of Bunsen burners. Now, can you honestly say that when conducting an experiment you’ve never fiddled with your results a bit? Maybe they weren’t the same as everyone else’s, or maybe you got rid of a data point entirely because it just didn’t look right on that beautiful graph you’d been drawing? I’ll be the first to admit I’ve had a terrible time with practical results in the past. From messing up the pipetting to contaminating petri dishes, I’ve done it all, and come out with results I simply couldn’t hand in at the end of the day.
So everyone’s guilty of this form of fraud (yes, that’s right, it’s a form of fraud, and if you used someone else’s results last week for your lab report, it even stretches to plagiarism). But how does this translate to the “real” world, where research makes its way into our everyday lives, and results are used to set policies or administer medical treatments?
In 2013, a poll was conducted by Ipsos MORI to find out who we trust the most to tell us the truth. Doctors and teachers came out on top, followed by scientists, while at the bottom sat politicians, bankers and journalists. So all in all, scientists are generally trusted to tell the truth. I’m in no way suggesting that this trust is misplaced; it’s just hard to see how we make the transition from students, roughly drawing our graphs and tweaking our results, to the scientists that 83% of people would trust to tell us the truth. One safeguard is peer review: all work being considered for publication must be checked by other scientists, to confirm that the results are reproducible and haven’t simply been plucked out of nowhere. However, no system is without its flaws, and there will always be some degree of bias or human error.
One of the most ridiculous instances of research fraud occurred in 1974, when William Summerlin, a scientist at the Memorial Sloan Kettering Cancer Center, conducted an experiment to find out whether white mice would reject a skin transplant from black mice. If the transplant was rejected, the mice would remain white; if they accepted it, their fur would turn black. Later, a lab assistant discovered that some of the previously white mice had been coloured in with a black marker pen — up until that point, the experiment had been accepted as a success! Cases like this seriously damage the public perception of science. When we hear a big, hyped-up news story about a scientist making up a set of results, we lose a little bit of trust in science: we start to wonder how many other scientists might do this, and given enough high-profile occurrences, our trust could start slipping down from that 83%. It can make us question a lot of the things we’re told about the world — we’ve all heard the theories that the Apollo moon landings were an elaborate hoax and that man has never set foot on the moon — and given enough doubt, we could all start to believe more of these stories, despite only a small proportion of published scientific work being fraudulent.
While I’m not condoning it, there are plenty of reasons a scientist might fabricate work: pressure to secure funding, to earn a promotion, or to make a breakthrough and get the results they want. One of the most dangerous biases, and one which most science students will have been guilty of in the lab, is confirmation bias. This is when we expect a particular outcome, and when some of our results don’t fit our predictions, we ignore them or change them. It’s especially common among students because, a lot of the time, we already know what kind of results our experiment should give, so when comparing results with classmates, students will often declare their own results “wrong”. Although having the “right” results might help in the short term when we submit our work to be graded, it’s a dangerous habit to get into. The results we assume are wrong could in fact be the most important ones, suggesting that something more is going on, or that our initial assumptions and theories are incorrect.
But I’m not trying to make you over-wary of scientists. In 2012, The Guardian reported a tenfold increase in scientific papers being retracted for fraud. That sounds huge, but it’s much less alarming when you realise the increase was from 0.001% of papers being found fraudulent to 0.01%. Fraud does happen, but it’s still committed by a very small percentage of scientists. Being a “good” scientist is hard, especially when it means going back to the drawing board over and over again, scrapping your hypothesis, or spending your entire career investigating a single protein. Science is compulsively checking and double-checking until you find the truth — which is not necessarily the truth you expect, or the one everyone’s been asking for. Without acknowledging these unexpected findings, and the flaws in our current thinking, how would science ever progress?
Please feel free to add your own thoughts on being a good scientist. How much do you trust the science you hear?
Comments and thoughts are always welcome.