Science has always been a great adventure for me. Whether I was four years old conducting “experiments” involving cocktails of soaps, perfumes and make-up or at university watching a cell preparation come to life under the microscope, I’ve always felt as if I was on a new and exciting journey of discovery. When I began to venture further down the road of science, however, I was disappointed to no longer feel like an explorer. Instead, I felt like someone asking the same question until I got the “right” answer. I wasn’t after a discovery…I was after a p-value less than 0.05.
Despite the fallibility of the scientific endeavour, advances in science along with technology have helped us answer more questions than ever before about life, the universe and, well… everything. When I started my science degree, one of the first things I learnt was that science can accomplish amazing things by utilising rationality, critical thinking and objective reasoning along with a healthy dose of scepticism. Soon after, I was taught not to trust any claims that weren’t published in peer-reviewed journals. So my freshman peers and I were told to be open-minded, objective and to appreciate the fallibility of science, but then to adopt the peer-review dogma. I didn’t realise this at the time and accepted it with an ironic lack of critical thinking. I now realise that it is important to step back and consider some of the biases inherent in the scientific press.
Publish, publish, publish
As I became more familiar with the way science and the scientific press really work – the peer-review process, the seagull-like scramble for crumbs of funding, pay-to-publish schemes and long hours spent with a human head, a brick wall and lots of banging – I reconsidered what science was all about. I was under the impression that scientific research was conducted to “discover something”, but realised it instead aims to “discover something that will be published in a peer-reviewed journal”. Publication bias has a huge effect on scientific inquiry and the nature of discovery. As Stanford University clinical research methodologist Professor John Ioannidis writes, “there is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims.” Whilst some of his arguments remain controversial, there are reasons to think that Ioannidis is right, particularly given how often we ignore research that produces a “negative finding” – a result consistent with the null hypothesis.
What counts as a discovery?
Scientists need to publish their research to increase their chances of obtaining funds essential for starting or continuing a project. Securing a grant is no easy feat. Australia’s National Health and Medical Research Council (NHMRC), for instance, offered funding to 23.4 percent of the total submitted proposals in 2010. Researchers need some sort of reward and sustenance to keep them motivated, or at the very least partially sane. Unfortunately, a lot of research is about luck, and if results are “null” or “negative” then it isn’t a discovery … or is it?
As Ioannidis asserts, it is common for several research groups to work on answering the same question in isolation from one another. The combination of research group isolation and the lack of “null finding” publications leaves research claims looking pretty contradictory. For example, let’s say Drug A is hypothesised to improve memory function in patients suffering from Alzheimer’s disease. The null hypothesis, or the “default” view, would be that Drug A does not improve memory function in these patients. This is what will be “tested against”. So for the purpose of potential publication, research concluding that Drug A does improve memory function counts as a discovery, but research concluding Drug A doesn’t have this effect won’t count. However, either outcome is a discovery, because we didn’t really know what Drug A did to begin with. Unfortunately, it isn’t the exciting “positive result” journals are interested in.

Let’s continue with our scenario. Out of five labs working on the question, four find that Drug A doesn’t have an effect, since their p-values are greater than 0.05. The fifth lab, on the other hand, finds that Drug A has a significant effect, with a p-value less than 0.05. At this 0.05 level of significance, each lab runs a one in twenty chance of declaring an effect that is really just chance – so even if Drug A does nothing, the odds of at least one of the five labs striking a “significant” result are considerable. Because the fifth lab appears to have discovered a way to improve Alzheimer’s patients’ memory, they have scored big time. Hurrah! The reward these hard-working researchers have been looking for has finally arrived.
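The five-labs scenario is easy to simulate. The sketch below assumes each lab runs an approximately normal two-sided test, so that under the null hypothesis (Drug A truly does nothing) each lab’s test statistic is a standard normal draw and |z| > 1.96 is a false positive occurring about 5 percent of the time. The lab count and round count are illustrative choices, not figures from any real study.

```python
import random

random.seed(1)

Z_CRIT = 1.96       # two-sided critical value corresponding to p < 0.05
N_LABS = 5          # five independent labs testing the same (ineffective) drug
N_ROUNDS = 100_000  # number of simulated five-lab "literatures"

# Under the null hypothesis, each lab's test statistic is ~ N(0, 1),
# so exceeding Z_CRIT in absolute value is a pure false positive.
rounds_with_false_positive = 0
for _ in range(N_ROUNDS):
    z_scores = [random.gauss(0.0, 1.0) for _ in range(N_LABS)]
    if any(abs(z) > Z_CRIT for z in z_scores):
        rounds_with_false_positive += 1

frac = rounds_with_false_positive / N_ROUNDS
print(f"P(at least one 'discovery' among {N_LABS} labs) ~ {frac:.3f}")
print(f"Analytic value 1 - 0.95^{N_LABS} = {1 - 0.95**N_LABS:.3f}")
```

The simulated fraction lands near the analytic value of 1 − 0.95⁵ ≈ 0.226: even with a completely inert drug, roughly one five-lab “literature” in four contains a publishable positive result – and if only that result is published, the record looks like a genuine discovery.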
What can be done?
Evidence that contradicts such a famous finding may indeed become more appealing to various journals, so the other four lab groups submit their counter-evidence. However, while some journals will probably publish the data, they are unlikely to match the impact factor or prestige of the big two: Nature and Science. The result is that far more “positive” findings and far fewer “null” findings are seen in the press, both scientific and consumer. This poses a problem not only for scientific discovery but also for society. These are the studies that affect public health campaigns, insurance rebates and government support, not to mention doctors, other healthcare professionals and their patients.
We need to recognise the weakness of a system where scepticism, criticism and open-mindedness are important but the peer-reviewed journal dogma is overarching. Refusing evidence unless it produces a particular result is not science – it’s dogma. A database that gathers research leading to discoveries, “null” or otherwise, is required, as is a peer-review process to ensure that the methods were reliable, the conclusions valid and the evidence sound.
The hunt for p < 0.05 alone can be dangerous. I want a real science adventure, an adventure about the nature of things and a deepening of knowledge. I don’t want an adventure confined to the discovery of the “right” answer, leaving science to become the stuff of myth and legend. We need to rid ourselves of the primacy of publication and remember what science is really about.