Many of Séralini’s critics say that the tumour incidence and mortality rates fall within the range of “historical norms” and so can be dismissed. The critics are referring to historical control data – data from other experiments and sources. But using such data to dismiss findings of toxicity is bad scientific practice. The most valid control for an experiment is the concurrent control from within the experiment. Thus any effects seen in treated groups compared to the concurrent controls are likely to be a result of the substance being tested.
In its initial response to the Séralini study, Monsanto invoked “historical norms” to dismiss findings of increased tumour incidence and mortality rates in the treated rats. Monsanto said that the increased mortality rates and tumour incidence “fall within historical norms for this strain of laboratory rats, which is known for a high incidence of tumours”.1
By “historical norms”, Monsanto means historical control data – data from various other studies that they find in the scientific literature or elsewhere. Monsanto also used historical control data in its rebuttal of Séralini’s study in the same journal that published Séralini’s original paper.2
However, the use of historical control data to dismiss findings from a controlled experiment like Séralini’s is not valid. Invoking such data is an unscientific strategy used by industry and some regulators to dismiss significant findings of toxicity in treated groups of laboratory animals in toxicological studies intended to evaluate the safety of pesticides, chemicals, and GMOs.
The valid control for scientific experiments is the concurrent control within the experiment, not historical control data. This is because scientific experiments are designed to reduce variables to a minimum. The concurrent control group achieves this because it consists of animals treated identically to the experimental group, at the same time, and within the same environment. The only variable is exposure to the substance being tested – in the case of Séralini’s experiments, NK603 maize and Roundup.
With this experimental design, any differences seen in the treated animals as compared with the concurrent controls are likely to be due to the substance being tested. Introducing irrelevant historical control data means that differences could be due to unrelated or irrelevant factors.
Historical control data consists of data gathered from diverse experiments performed under widely differing conditions. As a result, much of the variation in the data is due to factors irrelevant to the study under examination. Such factors include variations in:
- environmental conditions
- husbandry (the way the animals are cared for by the researchers)
- diet for the animals
- pesticide residues in the diet (Séralini’s experiment shows that even tiny amounts of pesticides well below official safety limits could be crucial3)
- pathogen exposures
- genetic background of the animals
- years in which the experiments were performed, which is known to affect results for reasons that are poorly understood.4 5 6
In contrast, using the concurrent controls reduces such variables to a minimum and enables researchers to reach evidence-based conclusions about the effects of the substance being tested.
The authors of peer-reviewed papers on the use of historical control data warn that it should only be used in unusual circumstances, such as when effects seen in the experiment are borderline or in the case of rare tumours, where there is a genuine lack of data from concurrent controls.4 5 6
If historical control data is used, then the validity of each data point must be demonstrated. In other words, the researcher invoking the data has to show that all the variables mentioned above have been controlled for and that the historical data is comparable to the concurrent data.4 5 6
Who uses historical control data?
Independent (non-industry) scientists who publish toxicological studies in the peer-reviewed literature hardly ever invoke historical control data. They certainly do not use it to dismiss significant findings of harm in treated groups of animals.
Those who do use historical control data in this way include industry-affiliated sources and some regulators. The practice has been allowed into risk assessment by the Organisation for Economic Cooperation and Development – an organisation set up not to protect public health but to facilitate international trade.
Even OECD advises caution over historical control data
However, even the OECD advises caution in the use of historical control data, since “large differences can result from disparities in factors such as pathology nomenclature, strain, husbandry, pathologists”.7
The OECD 453 protocol stipulates strict conditions for the use of historical control data:
“Historical control data, if evaluated, should be submitted from the same laboratory, relate to animals of the same age and strain, generated during the five years preceding the study in question.”8
OECD guideline 116 on the conduct and design of chronic toxicity and carcinogenicity studies also stipulates that “only historical data collected over the last 5 years should be used”.7
The burden of proof lies on those who use historical control data to show that their data is valid, even by weak OECD standards, let alone by the standards of good scientific practice.
The OECD advises that historical control data “should only be used if the concurrent control data are appreciably ‘out of line’ with recent previous studies”.7 This is not the case with Séralini’s findings, as no comparable long-term studies had been carried out on this GM maize or on the complete Roundup formulation.
Interestingly, the OECD specifically warns against the use of historical control data in evaluating findings relating to tumours (such as those seen in Séralini’s study):
“It should be stressed that the concurrent control group is always the most important consideration in the testing for increased tumour rates.”7
So even by OECD standards, Séralini’s findings are valid and should not be dismissed through the use of historical control data.
Monsanto uses historical control data to ‘disappear’ signs of toxicity
Interestingly, although Monsanto invoked historical control data in 2012 to invalidate Séralini’s findings, in its own 90-day feeding study on NK603 maize (published in 2004) it argued that no relevant historical control data for a feeding study on this maize existed. This was because “Prior to the advent of biotechnology, newly developed corn [maize] hybrids were not fed to rats in 90 day toxicology studies. The only historical data available was that for control rats at the testing laboratory that were fed grain from non-transgenic corn varieties incorporated into commercial rodent diets.”9
The Monsanto researchers appear to mean that prior to their experiment, no one had fed new maize varieties, such as the variety from which GM maize NK603 was developed, to rats in the controlled doses required for a toxicity study. The only data that existed was from rats fed unknown non-GM maize varieties mixed into standard rodent diets in unknown quantities. Because of these uncertainties or “variables”, as the Monsanto authors correctly state, the diets were not comparable. So Monsanto correctly concluded that the historical control data derived from those experiments were irrelevant to its experiment.
Following Monsanto’s own logic, anyone attempting to use historical control data to dismiss Séralini’s findings would first have to prove that the diets fed to these control groups of animals were comparable with those in Séralini’s experiment.
According to good scientific practice, the Monsanto authors should have stopped there and restricted their experiment to comparing the effects of a NK603 GM maize diet with a valid control diet containing equivalent amounts of the non-GM isogenic (genetically the same) variety.
But the Monsanto authors did not do that. Instead they created their own spurious control data to use in place of the missing historical control data. They introduced into their experiment six “reference” control diets that included various varieties of non-GM maize that were not genetically equivalent (non-isogenic) and that were grown at different times and locations. This practice only served to increase rather than minimise variables in the experiment, creating data “noise” that masked the effects of the GM maize diet.
Indeed, statistically significant changes in approximately 50 biochemical and physiological parameters were seen in the GM NK603 maize-fed rats when compared with the correct control – the non-GM isogenic (genetically the same) variety. However, the Monsanto researchers then made these effects disappear by comparing them with the highly variable and irrelevant data from the non-isogenic, non-GM maize-fed “reference” groups.10
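The masking effect of heterogeneous “reference” groups can be illustrated with a small simulation. The numbers below are hypothetical, chosen only to show the principle: a real treatment effect that stands out clearly against the concurrent control vanishes inside the wide range spanned by reference groups raised under differing conditions.

```python
import random

random.seed(1)
mean = lambda xs: sum(xs) / len(xs)

# Hypothetical numbers for illustration only: a measured parameter with
# baseline 100, a true treatment effect of +15, within-group spread of 5.
concurrent_control = [random.gauss(100, 5) for _ in range(50)]
treated            = [random.gauss(115, 5) for _ in range(50)]

# Six "reference" groups, each with its own irrelevant baseline shift
# (different diets, growing conditions, environments).
reference = [random.gauss(100 + shift, 5)
             for shift in (-20, -10, -5, 5, 10, 20)
             for _ in range(50)]

# Against the concurrent control, the treatment effect is obvious...
print(round(mean(treated) - mean(concurrent_control), 1))
# ...but the treated mean sits comfortably inside the reference range.
print(min(reference) <= mean(treated) <= max(reference))
```

Pooling the six shifted reference groups widens the “normal” range so much that the genuine +15 shift no longer looks out of line – which is precisely the objection to substituting heterogeneous reference or historical data for the concurrent control.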
Now, Monsanto is again invoking irrelevant historical control data to dismiss Séralini’s findings.1 In doing so, it fails to control for the variable that it cautioned about in its 2004 paper – lack of comparability in diet.9
There is an additional reason why the historical control data cited by Monsanto is invalid: it relates to SD rats of a different origin (Charles River Labs) than Séralini’s rats (Harlan).1
Nature journal resorts to historical control data to rebut Séralini
The science journal Nature went to the extraordinary lengths of obtaining unpublished and unvalidated historical control data from Harlan, Séralini’s rat supplier, to cast doubt on Séralini’s findings.
In an article for Nature titled “Hyped GM maize study faces growing scrutiny”, the reporter Declan Butler cited this historical control data to argue that Harlan SD rats had low survival rates, and concluded that Séralini should therefore have used more animals11 – to ensure that enough animals remain at the end of the experiment to provide good statistical power.
Butler also cited this historical control data to state that Harlan’s Wistar rats had better survival rates and fewer tumours.11 The implication is that Séralini should have used Wistar instead of SD rats.
Butler went on to cite the European Food Safety Authority (EFSA)’s view that Séralini’s findings were likely to be due to chance.11
However, Butler presented no evidence that the variables in the unpublished historical control data he cited had been controlled for. We know nothing about the conditions in which these rats were kept, what diets they were fed, and what pesticide residues, environmental pollutants, and pathogens they may have been exposed to.
Harlan itself confirmed to Séralini that its historical data may have come from rats fed GMOs, since this was not controlled for12 – making it irrelevant to Séralini’s study. As Monsanto itself confirmed in its 2004 paper on NK603 maize,9 historical control data are invalid if the diets in the historical experiments are not comparable with those in the experiment under consideration.
Citing the Harlan historical control data, Butler stated that SD rats have low survival rates in two-year experiments like Séralini’s. He wrote, “OECD guidelines state that for two-year experiments, rats should have a survival rate of at least 50% at 104 weeks. If they do not, each treatment group should include even more animals – 65 or more of each sex.”
However, the OECD makes no specific recommendation about which strain of rat should be used. It is only concerned about experiments where less than 50% of the animals in all groups survive and a negative result is claimed – in other words, a finding of ‘no harm’. The OECD states:
“For a negative result to be acceptable in a rat carcinogenicity bioassay, survival in the study should ideally be no less than 50% in all groups at 24 months.”7
The OECD makes clear that its purpose in making this recommendation is to protect the public against false negatives in industry tests performed for regulatory authorizations, when the substance is claimed not to be carcinogenic, but is in fact carcinogenic. If relatively few animals survive until the end of the experiment, a false conclusion of safety may be drawn, as those few animals may not be representative of the wider population.
But this OECD caution is irrelevant in the case of Séralini’s study, which did not conclude that the substances tested were safe, but that they were toxic. As statistics experts have pointed out [Statistics experts challenge the “too few rats” argument], fewer animals are required to prove that a substance is toxic than that it is safe.
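The statisticians’ point can be sketched with a simple exact binomial power calculation. The rates below are illustrative, drawn from figures discussed elsewhere in this text (roughly 30% spontaneous tumours in controls, a 2–3-fold increase in treated groups), and `detection_power` is a hypothetical helper written for this sketch, not a standard function.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def detection_power(n, p0, p1, alpha=0.05):
    """Power of a one-sided exact binomial test to detect a tumour rate p1
    in a treated group of n animals against a concurrent-control rate p0."""
    # smallest tumour count whose tail probability under p0 falls below alpha
    k_crit = next(k for k in range(n + 1) if binom_tail(n, k, p0) <= alpha)
    return binom_tail(n, k_crit, p1)

# Illustrative rates: 30% spontaneous tumours in controls versus a
# 2.5-fold increase (75%) in a treated group of 10 animals.
print(round(detection_power(10, 0.30, 0.75), 2))  # ≈ 0.92
```

Even with only 10 animals per group, a large toxic effect of this size is detected with high power; demonstrating *safety*, by contrast, means ruling out smaller effects, which demands far larger groups.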
In addition, the OECD’s observations about low survival rates of the SD rat are based on two papers (Nohynek et al., 1993,13 and Keenan et al., 199614) that are irrelevant to Séralini’s study, for three reasons:
- They are not comparable, because the usual variables concerning diet and environmental conditions have not been controlled for.
- They are so old that they contravene the OECD’s recommendation in its chronic toxicity protocol 453 that historical control data should be gathered from experiments within the past five years.8
- They refer to SD rats bred by Charles River, not Harlan.
In addition, Keenan noted declining survival rates not only in Charles River SD rats, but also in the other two major rat strains used in long-term toxicity tests, Wistar and Fischer 344. He added that tumour rates are also increasing in all three types of rat, along with degenerative diseases.14 So it is evident that any choice of rat is open to challenge by critics of a study.
However, there seems little point in choosing a rat strain that is especially resistant to tumours when the human lifetime risk of developing cancer in the UK is 40% for males and 37% for females15 – higher than the 30% of control rats with “spontaneous” tumours in Séralini’s experiment.
Peer-reviewed data collected by the Cesare Maltoni Cancer Research Center at the Ramazzini Foundation in Italy confirms that the strain of SD rat bred at the Center is an excellent human-equivalent model in carcinogenicity studies and is highly predictive of effects on humans.16
Finally, Séralini’s experiment was for chronic toxicity, not carcinogenicity, so Butler’s arguments are spurious.
Séralini took historical control data into account
Those who insist on using historical control data to evaluate Séralini’s findings will be gratified to note that he did briefly refer to historical control data on the SD rat as published in the peer-reviewed literature. He used the historical control data as a reference point against which to assess the incidence of specific types of tumour found in his experiments.
Séralini found that the treatments in his experiments increased the incidence of mammary tumours 2-3-fold in comparison to spontaneous tumour rates in the same SD strain from the same supplier (Harlan),17 and 3-fold in comparison to the largest study with 1,329 SD female rats.18 Tumours in Séralini’s treatment groups also grew earlier and faster than in controls.
Finally, as Séralini used enough rats for a chronic toxicity protocol, the concurrent controls were sufficient and there was no reason to invoke historical control data – unless the intention is to bias the findings towards a conclusion of “no effect”.
1. Monsanto. Monsanto comments: Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize September 2012. http://www.monsanto.com/products/Documents/ProductSafety/seralini-sept-2012-monsanto-comments.pdf
2. Hammond B, Goldstein DA, Saltmiras D. Letter to the editor. Food and Chemical Toxicology. 7 November 2012.
3. Séralini GE, Clair E, Mesnage R, et al. Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize. Food and Chemical Toxicology. November 2012; 50(11): 4221-4231.
4. Haseman JK. Statistical issues in the design, analysis and interpretation of animal carcinogenicity studies. Environmental Health Perspectives. 1984; 58: 385–392.
5. Hardisty JF. Factors influencing laboratory animal spontaneous tumor profiles. Toxicol Pathol. 1985; 13: 95–104.
6. Cuffe RL. The inclusion of historical control data may reduce the power of a confirmatory study. Stat Med. Mar 22 2011; 30(12): 1329–1338.
7. Organisation for Economic Cooperation and Development (OECD). Guidance document 116 on the conduct and design of chronic toxicity and carcinogenicity studies, supporting test guidelines 451, 452 and 453: 2nd edition: Environment directorate joint meeting of the chemicals committee and the working party on chemicals, pesticides and biotechnology. 13 April 2012.
8. Organisation for Economic Cooperation and Development (OECD). OECD guideline no. 453 for the testing of chemicals: Combined chronic toxicity/carcinogenicity: Adopted 7 September 2009. 2009.
9. Hammond B, Dudek R, Lemen J, Nemeth M. Results of a 13 week safety assurance study with rats fed grain from glyphosate tolerant corn. Food Chem Toxicol. Jun 2004; 42(6): 1003-1014.
10. de Vendomois JS, Roullier F, Cellier D, Séralini GE. A comparison of the effects of three GM corn varieties on mammalian health. Int J Biol Sci. 2009; 5(7): 706–726.
11. Butler D. Hyped GM maize study faces growing scrutiny. Nature. 10 October 2012; 490(7419).
12. Séralini GE, Mesnage R, Defarge N, et al. Answers to critics: Why there is a long term toxicity due to NK603 Roundup-tolerant genetically modified maize and to a Roundup herbicide. Food and Chemical Toxicology. 9 November 2012.
13. Nohynek GJ, Longeart L, Geffray B, Provost JP, Lodola A. Fat, frail and dying young: survival, body weight and pathology of the Charles River Sprague-Dawley-derived rat prior to and since the introduction of the VAFR variant in 1988. Hum Exp Toxicol. Mar 1993; 12(2): 87-98.
14. Keenan KP, Laroque P, Soper KA, Morrissey RE, Dixit R. The effects of overfeeding and moderate dietary restriction on Sprague-Dawley rat survival, pathology, carcinogenicity, and the toxicity of pharmaceutical agents. Exp Toxicol Pathol. Feb 1996; 48(2-3): 139-144.
15. Sasieni PD, Shelton J, Ormiston-Smith N, Thomson CS, Silcocks PB. What is the lifetime risk of developing cancer?: The effect of adjusting for multiple primaries. British journal of cancer. Jul 26 2011; 105(3): 460-465.
16. Soffritti M, Belpoggi F, Degli Esposti D. Cancer prevention: The lesson from the lab. In: Biasco G, Tanneberger S, eds. Cancer Medicine at the Dawn of the 21st Century: The view from Bologna. Bologna: Bononia University Press; 2006:49–64.
17. Brix AE, Nyska A, Haseman JK, Sells DM, Jokinen MP, Walker NJ. Incidences of selected lesions in control female Harlan Sprague-Dawley rats from two-year studies performed by the National Toxicology Program. Toxicol Pathol. 2005; 33(4): 477-483.
18. Chandra M, Riley MG, Johnson DE. Spontaneous neoplasms in aged Sprague-Dawley rats. Arch Toxicol. 1992; 66(7): 496-502.
Sources of criticism:
Haut Conseil des Biotechnologies (HCB), France