A historical account is given of the approach of "The Rothamsted School" to the analysis of designed experiments. The link between the way that experiments are designed and the way that they should be analysed is fundamental to this approach. The key figures are R. A. Fisher, Frank Yates and John Nelder.
The Rothamsted School & The analysis of designed experiments
1. The Rothamsted School: The analysis of designed experiments and the legacy of Fisher, Yates and Nelder
Stephen Senn
Stephen Senn 2022
2. Outline
Part I (Not so technical)
• The roots of modern statistics
• Small data
• Careful design of experiments
• Some examples of problems with
judging causality from associations
in the health care field
• Two different objectives of clinical
trials
Part II (More technical)
• Design
• The Rothamsted (Genstat)
approach
• Some statistical issues
• Conclusion
3. Part I
Less technical matter to do with history of statistics and basic ‘philosophical’
considerations
5. William Sealy Gosset
1876-1937
• Born Canterbury 1876
• Educated Winchester and Oxford
• First in Mathematical Moderations 1897 and a First in his Chemistry degree 1899
• Starts with Guinness in 1899 in Dublin
• Autumn 1906-spring 1907 with Karl
Pearson at UCL
• 1908 publishes ‘The probable error of a
mean’
• First method available to judge
‘significance’ in small samples
6. Ronald Aylmer Fisher
1890-1962
• Most influential statistician ever
• Also major figure in evolutionary
biology
• Educated Harrow and Cambridge
• Statistician at Rothamsted agricultural
station 1919-1933
• Developed theory of small sample
inference and many modern concepts
• Likelihood, variance, sufficiency, ANOVA
• Developed theory of experimental
design
• Blocking, randomisation, replication
7. Small data challenges

Situation                        Problem                               Solution
Sample size small                Too few data to estimate              Develop small-sample test
                                 variance adequately                   (Student)
Experimental material not        Dealing with variability              Blocking and randomisation
homogeneous                                                            (Fisher)
Limited time (1)                 How to study more than one thing      Complex treatment structure:
                                                                       factorial experiments (Fisher, Yates)
Limited time (2)                 How to study very many factors        Fractional factorials (Yates)
Experimental material varies     Some treatments can be varied at      General balance approach to
at different levels              the lowest level but not all          analysis (Nelder)
8. Characteristics of development of statistics in
the first half of the 20th century
• Numerical work was arduous and long
• Human computers
• Desk calculators
• Careful thought as to how to perform a calculation paid dividends
• Much development of inferential theory for small samples
• Design of experiments became a new subject in its own right developed by
statisticians
• Orthogonality
• Made calculation easier (e.g. decomposition of variance terms in ANOVA)
• Increased efficiency
• Randomisation
• “Guaranteed” properties of statistical analysis
• Dealt with hidden confounders
• Factorial experimentation
• Efficient way to study multiple influences
9. The Rothamsted School
RA Fisher (1890-1962): variance, ANOVA, randomisation, design, significance tests
Frank Yates (1902-1994): factorials, recovering inter-block information
John Nelder (1924-2010): general balance, computing, Genstat®
and Frank Anscombe, David Finney, Rosemary Bailey, Roger Payne etc.
10. General Balance
• An idea of John Nelder’s
• Two papers in the Proceedings of the Royal Society, 1965 concerning
“The analysis of randomized experiments with orthogonal block
structure”
• Block structure and the null analysis of variance
• Treatment structure and the general analysis of variance
11. Basic Idea
• Splits an experiment into two radically different components
• The block structure, which describes the way that the experimental units are
organised
• The way that variation amongst units can be described
• Null ANOVA – an idea of Anscombe’s
• The treatment structure, which reflects the way that treatments are
combined for the scientific purpose of the experiment
12. Design-Driven Modelling
• Together with a third piece of information, the design matrix, these
determine the analysis of variance
• Note that because both block and treatment structures can be hierarchical, such a design matrix is not, on its own, sufficient to derive an ANOVA
• But together with John’s block and treatment structure it is
• For designs exhibiting general balance
• This approach is incorporated in Genstat®
13. An Example
• Incomplete blocks cross-over design comparing three treatments
  • Placebo
  • Formoterol 12 μg
  • Formoterol 24 μg
• Patients treated in two periods only
• 24 patients randomised to one of six sequences
• Four per sequence

Patients per sequence and treatment
Sequence   Placebo   F12   F24
PF12       4         4     -
F12P       4         4     -
PF24       4         -     4
F24P       4         -     4
F12F24     -         4     4
F24F12     -         4     4
14. Skeleton Analysis of Variance

BLOCK Sequence/Patient
TREATMENT Treatment
ANOVA

Analysis of variance
Source of variation                 d.f.
Sequence stratum
  Treatment                            2
  Residual                             3
Sequence.Patient stratum              18
Sequence.Patient.*Units* stratum
  Treatment                            2
  Residual                            22
Total                                 47
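The degrees-of-freedom bookkeeping in this skeleton ANOVA can be checked with a few lines of arithmetic. This Python sketch is my reconstruction, not the talk's code; it derives the strata directly from the design's counts.

```python
# Sketch: strata degrees of freedom for the incomplete-blocks cross-over,
# 6 sequences x 4 patients per sequence x 2 periods per patient.
n_seq, n_pat_per_seq, n_periods = 6, 4, 2
n_pat = n_seq * n_pat_per_seq          # 24 patients
n_obs = n_pat * n_periods              # 48 observations

total_df = n_obs - 1                   # 47
seq_stratum_df = n_seq - 1             # 5 = 2 (treatment) + 3 (residual)
pat_stratum_df = n_pat - n_seq         # 18 (between patients within sequences)
within_df = n_obs - n_pat              # 24 = 2 (treatment) + 22 (residual)

# The strata must account for every degree of freedom
assert seq_stratum_df + pat_stratum_df + within_df == total_df
print(total_df, seq_stratum_df, pat_stratum_df, within_df)  # 47 5 18 24
```

Because the design is incomplete (each patient sees only two of the three treatments), treatment information is split between the sequence stratum and the within-patient stratum, which is why "Treatment" appears twice in the table above.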
15. Causal versus predictive inference
• Clinical trials can be used to try and answer a number of very
different questions
• Two examples are
• Did the treatment have an effect in these patients?
• A causal purpose
• What will the effect be in future patients?
• A predictive purpose
• Unfortunately, in practice, an answer is produced without stating
what the question was
• Given certain assumptions these questions can be answered using the
same analysis but the assumptions are strong and rarely stated
16. Two models
Predictive
• The population is taken to be ‘patients in
general’
• Of course this really means future
patients
• They are the ones to whom the
treatment will be applied
• We treat the patients in the trial as an
appropriate selection from this population
• This does not require them to be typical
but it does require additivity of the
treatment effect
Causal
• We take the patients as fixed
• We want to know what the effect was for
them
• Unfortunately there are missing
counterfactuals
• What would have happened to control
patients given intervention and vice-versa
• The population is the population of all
possible allocations to the patients studied
19. Trial in asthma
Basic situation
• Two beta-agonists compared
• Zephyr (Z) and Mistral (M)
• Block structure has several levels
• Different designs will be investigated
• Cluster
• Parallel group
• Cross-over Trial
• Each design will be blocked at a different
level
• NB Each design will collect
6 x 4 x 2 x 7 = 336 measurements of Forced
Expiratory Volume in one second (FEV1)
Block structure
Level          Number within    Total
               higher level     number
Centre         6                6
Patient        4                24
Episodes       2                48
Measurements   7                336
20. Block structure
• Patients are nested within centres
• Episodes are nested within patients
• Measurements are nested within episodes
• Centres/Patients/Episodes/Measurements
(Diagram not reproduced; measurements not shown in it.)
21. Possible designs
• Cluster randomised
• In each centre all the patients either receive Zephyr (Z) or Mistral (M) in both
episodes
• Three centres are chosen at random to receive Z and three to receive M
• Parallel group trial
• In each centre half the patients receive Z and half M in both episodes
• Two patients per centre are randomly chosen to receive Z and two to receive
M
• Cross-over trial
• For each patient the patient receives M in one episode and Z in another
• The order of allocation, ZM or MZ is random
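To make the distinction between the three designs concrete, here is a hypothetical Python sketch of the three allocation schemes. The function names and layout are my own, not from the talk; the point is only that each design randomises at a different level of the Centre/Patient/Episode hierarchy.

```python
# Sketch: the three designs differ only in the level at which randomisation acts.
import itertools
import random

random.seed(1)
centres = range(6)
patients = range(4)   # per centre
episodes = range(2)   # per patient

def cluster():
    # Randomise whole centres: 3 get Zephyr (Z), 3 get Mistral (M)
    z = set(random.sample(list(centres), 3))
    return {(c, p, e): ('Z' if c in z else 'M')
            for c, p, e in itertools.product(centres, patients, episodes)}

def parallel_group():
    # Randomise patients within each centre: 2 get Z, 2 get M, same in both episodes
    alloc = {}
    for c in centres:
        z = set(random.sample(list(patients), 2))
        for p, e in itertools.product(patients, episodes):
            alloc[(c, p, e)] = 'Z' if p in z else 'M'
    return alloc

def crossover():
    # Randomise episodes within each patient: Z in one episode, M in the other
    alloc = {}
    for c, p in itertools.product(centres, patients):
        order = random.sample(['Z', 'M'], 2)
        for e in episodes:
            alloc[(c, p, e)] = order[e]
    return alloc

for design in (cluster, parallel_group, crossover):
    alloc = design()
    # Every design assigns Z to exactly half of the 48 patient-episodes
    print(design.__name__, sum(v == 'Z' for v in alloc.values()))  # ... 24
```

All three designs are balanced overall; what differs is which sources of variation (centres, patients, episodes) the treatment comparison is entangled with, and hence which the design can eliminate.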
25. Null (skeleton) analysis of variance with Genstat®

Code:
BLOCKSTRUCTURE Centre/Patient/Episode/Measurement
ANOVA

(Output panel not reproduced.)
26. Full (skeleton) analysis of variance with Genstat®

Additional code:
TREATMENTSTRUCTURE Design[]
ANOVA

(Here Design[] is a pointer with values corresponding to each of the three designs. Output panel not reproduced.)
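As a rough illustration of what the skeleton ANOVA accounts for, this Python sketch (my reconstruction, not Genstat output) tallies the strata degrees of freedom implied by the Centre/Patient/Episode/Measurement block structure and notes the stratum in which each design's treatment effect is estimated.

```python
# Sketch: strata d.f. for the nested block structure of the asthma example.
centres, pats, eps, meas = 6, 4, 2, 7
n = centres * pats * eps * meas               # 336 measurements in total

strata = {
    "Centre":                 centres - 1,                      # 5
    "Centre.Patient":         centres * (pats - 1),             # 18
    "Centre.Patient.Episode": centres * pats * (eps - 1),       # 24
    "Centre.Patient.Episode.Measurement":
                              centres * pats * eps * (meas - 1) # 288
}
assert sum(strata.values()) == n - 1          # 335

# Treatment (1 d.f.) is tested in the stratum at which it was randomised
where = {"cluster": "Centre",
         "parallel group": "Centre.Patient",
         "cross-over": "Centre.Patient.Episode"}
for design, stratum in where.items():
    print(f"{design}: treatment tested in {stratum} stratum "
          f"({strata[stratum] - 1} residual d.f.)")
```

The cluster design, randomising only six centres, leaves very few residual degrees of freedom for the treatment comparison; the cross-over, randomising within patients, leaves the most. This is the formal counterpart of "the measure of uncertainty reflects what cannot be eliminated".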
27. The bottom line
• The approach recognises that things vary
  • Centres, patients, episodes
• It does not require everything to be balanced
• Things that can be eliminated will be eliminated by design
  • Cross-over trial eliminates patients and centres
  • Parallel group trial eliminates centres
  • Cluster randomised eliminates none of these
• The measure of uncertainty produced by the analysis will reflect what cannot be eliminated
• This requires matching the analysis to the design
• Note that Genstat® deals with this formally and automatically. Other packages do not.
28. To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.
RA Fisher
29. The Shocking Truth
• The validity of the conventional analysis of randomised trials does not depend on covariate balance
  • The analysis is valid precisely because trials are not perfectly balanced
  • An allowance is already made for things being unbalanced
  • If they were balanced the standard analysis would be wrong
  • Like an insurance broker forbidding you to travel abroad in the policy but calculating your premiums on the assumption that you will
• This accounts for unobserved covariates. What happens when they are observed?
30. Game of Chance
• Two dice are rolled
  – Red die
  – Black die
• You have to call correctly the probability of a total score of 10
• Three variants
  – Game 1: you call the probability and the dice are rolled together
  – Game 2: the red die is rolled first, you are shown the score and then must call the probability
  – Game 3: the red die is rolled first, you are not shown the score and then must call the probability
31. Total Score when Rolling Two Dice
Variant 1: Three of 36 equally likely results give a 10. The probability is 3/36 = 1/12.
32. Total Score when Rolling Two Dice
Variant 2: If the red die score is 1, 2 or 3, the probability of a total of 10 is 0. If the red die score is 4, 5 or 6, the probability of a total of 10 is 1/6.
Variant 3: The probability = (½ × 0) + (½ × 1/6) = 1/12.
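The probabilities quoted for the three variants can be verified by brute-force enumeration. This short Python check is my illustration, not part of the talk.

```python
# Enumerate all 36 equally likely (red, black) outcomes and check the
# unconditional and conditional probabilities of a total of 10.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # (red, black) pairs

# Games 1 and 3: unconditional probability of a total of 10
p_uncond = Fraction(sum(r + b == 10 for r, b in outcomes), len(outcomes))
print(p_uncond)  # 1/12

# Game 2: condition on the observed red score
for red in range(1, 7):
    p = Fraction(sum(red + b == 10 for b in range(1, 7)), 6)
    print(red, p)  # 0 for red in 1..3, 1/6 for red in 4..6

# Averaging the conditional probabilities over the red die's distribution
# recovers the unconditional 1/12, which is why game 3 may be treated like game 1
avg = sum(Fraction(sum(r + b == 10 for b in range(1, 7)), 6)
          for r in range(1, 7)) / 6
assert avg == p_uncond
```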
33. The morals
Dice games
• You can't treat game 2 like game 1
  • You must condition on the information received
  • You must use the actual data from the red die
• You can treat game 3 like game 1
  • You can use the distribution in probability that the red die has
Inference in general
• You can't use the random behaviour of a system to justify ignoring information that arises from the system
  • That would be to treat game 2 like game 1
• You can use the random behaviour of the system to justify ignoring that which has not been seen
  • You are entitled to treat game 3 like game 1
34. The difference between mathematical and applied statistics is that the former is full of lemmas whereas the latter is full of dilemmas.
35. What does the Rothamsted approach do?
• Matches the allocation procedure to the analysis. You can either
regard this as meaning
• The randomisation you carried out guides the analysis
• The analysis you intend guides the randomisation
• Or both
• Either way, the idea is to avoid inconsistency
• Regarding something as being very important at the allocation stage but not
at the analysis stage is inconsistent
• Permits you not only to take account of things seen but also to make
an appropriate allowance for things unseen
• The die analogy: the approach makes sure that the game is a fair one
36. A simulation example
• I am going to simulate 200 clinical trials
• Trials are of a bronchodilator against placebo.
• Simple randomisation of 50 patients to each arm
• I shall have values at outcome and values at baseline
• Forced expiratory volume in one second (FEV1) in mL
• Parameter settings
• True mean under placebo 2200 mL
• Under bronchodilator 2500 mL
• Treatment effect is 300 mL
• SD at outcome and baseline is 150 mL
• Correlation is 0.7
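The simulation described above can be sketched as follows. The parameter values are taken from the slide, but the code, seed and variable names are my reconstruction, not the author's.

```python
# Sketch: 200 simulated two-arm trials of a bronchodilator against placebo,
# with correlated baseline and outcome FEV1 (mL).
import numpy as np

rng = np.random.default_rng(2022)
n_trials, n_per_arm = 200, 50
mu_placebo, effect = 2200.0, 300.0    # true placebo mean and treatment effect
sd, rho = 150.0, 0.7                  # SD at baseline and outcome, correlation

# Common covariance of (baseline, outcome) within a patient
cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])

effects = []
for _ in range(n_trials):
    # Baselines are pre-treatment, so both arms have baseline mean 2200;
    # the 300 mL treatment effect is additive at outcome only
    placebo = rng.multivariate_normal([mu_placebo, mu_placebo], cov, n_per_arm)
    active = rng.multivariate_normal([mu_placebo, mu_placebo + effect],
                                     cov, n_per_arm)
    effects.append(active[:, 1].mean() - placebo[:, 1].mean())

print(np.mean(effects))  # close to the true effect of 300 mL
```

Plotting the 200 unadjusted estimates with their confidence intervals gives the "game 1" picture of the next slide; re-analysing each trial with the baselines included gives the "game 2" picture.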
37. Point estimates and confidence intervals
Baseline values not available (like game 1)
38. Point estimates and 95% confidence intervals
Baseline values available (Game 2)
39. We tend to believe "the truth is in there", but sometimes it isn't and the danger is we will find it anyway.
40. How analysis of covariance works
• This shows ANCOVA applied to
sample 170 of the 200 simulated
• There is an imbalance at
baseline
• I have adjusted for this by fitting
two parallel lines
• The difference between the two fitted lines shows how an outcome value would change for a given baseline value if treatments were switched
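A minimal version of the adjustment described above can be written as a least-squares fit of two parallel lines: outcome on treatment indicator plus baseline. This is an illustrative sketch under the slide's parameter settings, not the author's analysis.

```python
# Sketch: analysis of covariance as two parallel lines fitted by least squares.
# The vertical separation of the lines is the adjusted treatment estimate.
import numpy as np

rng = np.random.default_rng(170)
n, sd, rho, effect = 50, 150.0, 0.7, 300.0
cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])
placebo = rng.multivariate_normal([2200.0, 2200.0], cov, n)
active = rng.multivariate_normal([2200.0, 2200.0 + effect], cov, n)

baseline = np.concatenate([placebo[:, 0], active[:, 0]])
outcome = np.concatenate([placebo[:, 1], active[:, 1]])
treat = np.concatenate([np.zeros(n), np.ones(n)])

# Design matrix: intercept, treatment indicator, baseline covariate.
# Fitting a common baseline slope forces the two lines to be parallel.
X = np.column_stack([np.ones(2 * n), treat, baseline])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(beta[1])  # adjusted treatment effect, close to 300 mL
print(outcome[treat == 1].mean() - outcome[treat == 0].mean())  # unadjusted
```

With correlation 0.7, the adjusted estimate uses the observed baseline imbalance (the game 2 move: condition on what you have seen) and is correspondingly more precise than the raw difference in outcome means.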
41. Lessons for big data
• We tend to treat observational data-sets as if they were badly
randomised parallel group trials but cluster-randomised trials might
be a better analogy
• True standard errors may be much bigger than estimated ones
• See Cox, Kartsonaki & Keogh (2018) and Xiao-Li Meng (2018)
• Design matters
• Beware of dreams in which mathematics triumphs over biology
• You can be rich in data but poor in information
42. Data Filtering: Some Examples

Finding: Oscar winners lived longer than actors who didn't win an Oscar.
Possible explanation: The longer you live, the greater your chance of winning.

Finding: A 20-year follow-up study of women in an English village found higher survival amongst smokers than non-smokers.
Possible explanation: The smokers were from more recent generations; they were much younger than the non-smokers.

Finding: Transplant receivers on the highest doses of cyclosporine had a higher probability of graft rejection than those on lower doses.
Possible explanation: The anticipated transplant rejection was the cause of the dose being increased.

Finding: Left-handers were observed to die younger on average than right-handers.
Possible explanation: In an earlier era left-handers were forced to become right-handers.

Finding: Obese infarct survivors have a better prognosis than the non-obese.
Possible explanation: There are two kinds of infarct: very serious, which is independent of weight, and less serious, which is linked to obesity.
43. Morals
• What you don’t see can be important
• Where you have not been able to run trials, biases
can be very important
• For some purposes just piling on data does not really
help
• What helps:
• Careful design
• Thinking!
44. A big data analyst is an expert at reaching misleading conclusions with huge data sets, whereas a statistician can do the same with small ones.
45. References
D. R. Cox, C. Kartsonaki and R. H. Keogh (2018) Big data: some statistical issues. Statistics & Probability Letters, 111-115.
X.-L. Meng (2018) Statistical paradises and paradoxes in big data (I): law of large populations, big data paradox, and the 2016 US presidential election. The Annals of Applied Statistics, 685-726.
S. J. Senn (2013) Seven myths of randomisation in clinical trials. Statistics in Medicine, 1439-1450.
S. Senn (2013) A brief note regarding randomization. Perspectives in Biology and Medicine, 452-453.
S. J. Senn (2019) The well-adjusted statistician. Applied Clinical Trials, June 18. https://www.appliedclinicaltrialsonline.com/view/well-adjusted-statistician-analysis-covariance-explained
S. Senn (2019) John Ashworth Nelder, 8 October 1924 - 7 August 2010. The Royal Society Publishing.
A number of blogs on my blog site are also relevant: http://www.senns.uk/Blogs.html