Studies of scientific papers reporting animal experiments have revealed many flaws in their design, generating considerable concern, not least among research funders (1). These include, but are not limited to:
- Poor experimental design and risk of bias, even in high-impact journals (2, 3), in particular lack of statistical power (4) and lack of blinding (5)
- Artefacts caused by extraneous environmental factors, such as effects of animal age (6), cage conditions (7, 8, 9), concomitant subclinical infections (10), food/water restriction (11, 12) or the sex of the experimenter (13) or animal (14)
- Poor compliance (15, 16) with guidelines for reporting animal experiments (17), including lack of details about anaesthesia and analgesia (18, 19)
- Poor reproducibility of animal studies (20, 21, 22) when a model is moved from, for example, an academic environment to the pharmaceutical industry. This was the subject of a seminar organised by funders in the UK in 2015 (23)
- Lack of translatability from animals to humans (24, 25)
- p-value hacking (also called data dredging, data fishing, data snooping or data butchery)
- HARKing (Hypothesising After the Results are Known)
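The inflation of false positives caused by p-hacking can be illustrated with a minimal simulation (not from the source; all names here are illustrative). It sketches one common form of the problem: measuring several endpoints in an experiment where no real effect exists, and reporting the study as "positive" if any endpoint happens to cross p < 0.05. Under the null hypothesis, each endpoint's test statistic is approximately standard normal, so |z| > 1.96 corresponds to a two-sided p < 0.05.

```python
import random

random.seed(42)

def null_experiment(n_endpoints: int) -> bool:
    """Simulate one experiment with no true effect.

    Each endpoint's test statistic is drawn from N(0, 1) (the null
    distribution); the experiment is declared 'positive' if ANY
    endpoint reaches |z| > 1.96, i.e. two-sided p < 0.05.
    """
    return any(abs(random.gauss(0, 1)) > 1.96 for _ in range(n_endpoints))

n_sim = 20_000
for k in (1, 5, 10):
    false_positive_rate = sum(null_experiment(k) for _ in range(n_sim)) / n_sim
    # With k independent endpoints, the chance of at least one
    # 'significant' result is 1 - 0.95**k, not 5%.
    print(f"{k:2d} endpoints: false-positive rate ~ {false_positive_rate:.3f} "
          f"(expected: {1 - 0.95 ** k:.3f})")
```

With ten endpoints and selective reporting, roughly 40% of null experiments yield at least one nominally significant result, which is why pre-registered hypotheses, corrections for multiple comparisons and adequately powered designs matter.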
See also the section in the PREPARE guidelines on statistical power and significance levels
Anthony Rowe (2022) has written an excellent set of recommendations for improving the use and reporting of statistics in animal experiments.
The REWARD Alliance website was created to promote a series of papers on this topic in 2014 in The Lancet, to help increase the value of research and reduce waste.
It has been estimated that 85% of research is wasted, usually because it asks the wrong questions, is badly designed, not published or poorly reported. While this primarily diminishes the value of research, it also represents a significant financial loss: an estimated US$240 billion was wasted in life-sciences research in 2010. However, many causes of this waste are simple problems that could easily be fixed.
- More resources are available in the section of the PREPARE guidelines on experimental design
- The p value wars (again) (Dirnagl, 2019)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Extrapolating from animals to humans (Ioannidis, 2012)
- Reducing waste from incomplete or unusable reports of biomedical research (Glasziou et al., 2014)
- A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic (Mobley et al., 2013)
- Four erroneous beliefs thwarting more trustworthy research (Yarborough et al., 2019)
- Reproducibility: seek out stronger science (Baker, 2016)
- Big names in statistics want to shake up much-maligned P value (Chawla, 2017)
- Introducing Therioepistemology: the study of how knowledge is gained from animal research (Garner et al., 2017)
- The importance of being second (an editorial in PLOS Biology, 2018, acknowledging the value of complementary studies that replicate others)
- Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments (Henderson et al., 2013)
- Ten common statistical mistakes to watch out for when writing or reviewing a manuscript (Makin & Orban de Xivry, 2019)
- Reproducibility vs. Replicability: A Brief History of a Confused Terminology (Plesser, 2018)
- Still not significant - a blog by Matthew Hankins (2013)