Title: Statistics in Nutrition and Dietetics
Author: Michael Nelson
Publisher: John Wiley & Sons Limited
ISBN: 9781118930625
The hypothesis that we formulate will determine what we choose to measure. If we take the time to discuss the formulation of our hypothesis with colleagues, we are more likely to develop a robust hypothesis and to choose the appropriate measurements. Failure to get the hypothesis right may result in the wrong measurements being taken, in which case all your efforts will be wasted. For example, if the hypothesis relates to the effect of diet on serum cholesterol, there may be a particular cholesterol fraction that is altered. If this is stated clearly in the hypothesis, then we must measure the relevant cholesterol fraction in order to provide appropriate evidence to test the hypothesis.
No finite amount of experimentation can ‘prove’ an exact hypothesis.
Suppose that we carry out a series of four studies with different samples, and we find that in each case our hypothesis is ‘proven’ (our findings are consistent with our beliefs). But what do we do if in a fifth study we get a different result which does not support the hypothesis? Do we ignore the unusual finding? Do we say, ‘It is the exception that proves the rule’? Do we abandon the hypothesis? What would we have done if the first study we carried out had appeared not to support our hypothesis? Would we have abandoned the hypothesis, when all the subsequent studies would have suggested that it was true?
There are no simple answers to these questions. We can conclude that any system that we use to evaluate a hypothesis must take into account the possibility that there may be times when our hypothesis appears to be false when in fact it is true (and conversely, that it may appear to be true when in fact it is false). These potentially contradictory results may arise because of sampling variations (every sample drawn from the population will be different from the next, and because of sampling variation, not every set of observations will necessarily support a true hypothesis), and because our measurements can never be 100% accurate.
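To make the role of sampling variation concrete, the short simulation below (a sketch in Python; the effect size, variability, group size, and number of repeated studies are all invented for illustration) draws repeated samples from a population in which the hypothesis is in fact true, tests each sample, and counts how often a study nevertheless fails to support it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical population in which the supplement really does add 20 m
# of walking distance on average, so the hypothesis is true.
true_effect = 20      # metres (invented for illustration)
sd = 60               # between-patient variability (invented)
n_per_group = 30      # patients per group in each simulated study
n_studies = 1000      # number of repeated studies to simulate

failures_to_detect = 0
for _ in range(n_studies):
    control = rng.normal(0, sd, n_per_group)
    treated = rng.normal(true_effect, sd, n_per_group)
    _, p_value = stats.ttest_ind(treated, control)   # test of 'no difference'
    if p_value >= 0.05:       # this study fails to reject the null hypothesis
        failures_to_detect += 1

print(failures_to_detect, "of", n_studies,
      "simulated studies did not support a hypothesis that is in fact true")

Even though the effect in this imaginary population is real, a proportion of the simulated studies fail to detect it, purely because every sample differs from the next.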
A finite amount of experimentation can disprove an exact hypothesis.
It is easier to disprove something than prove it. If we can devise a hypothesis which is the negation of what we believe to be true (rather than its opposite), and then disprove it, we could reasonably conclude that our hypothesis was true (that what we observe, for the moment, seems to be consistent with what we believe).
This negation of the hypothesis is called the ‘null’ hypothesis. The ability to refute the null hypothesis lies at the heart of our ability to develop knowledge. A good null hypothesis, therefore, is one which can be tested and refuted. If I can refute (disprove) my null hypothesis, then I will accept my hypothesis.
A theory which is not refutable by any conceivable event is non‐scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice. [1, p. 36]
Let us take an example. Suppose we want to know whether giving a mixture of three anti‐oxidant vitamins (β‐carotene, vitamin C, and vitamin E) will improve walking distance in patients with Peripheral Artery Disease (PAD), an atherosclerotic disease of the lower limbs. The hypothesis (which we denote by the symbol H1) would be:
H1: Giving anti‐oxidant vitamins A, C, and E as a dietary supplement will improve walking distance in patients with PAD.
The null hypothesis (denoted by the symbol H0) would be:
H0: Giving anti‐oxidant vitamins A, C, and E as a dietary supplement will not improve walking distance in patients with PAD.
H0 is the negation of H1, suggesting that the vitamin supplements will make no difference. It is not the opposite, which would state that giving supplements reduces walking distance.
It is easier, in statistical terms, to set about disproving H0. If we can show that H0 is probably not true, there is a reasonable chance that our hypothesis is true. Box 1.2 summarizes the necessary steps. The statistical basis for taking this apparently convoluted approach will become apparent in Chapter 5.
BOX 1.2 Testing the hypothesis
1 Formulate the Hypothesis (H1)
2 Formulate the Null Hypothesis (H0)
3 Try to disprove the Null Hypothesis
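As a purely illustrative sketch of steps 2 and 3 in Box 1.2, the Python fragment below compares walking distances in a hypothetical supplement group and a hypothetical placebo group with a two‐sample t‐test; all of the values, and the choice of test, are assumptions made for this example rather than part of the PAD study itself.

from scipy import stats

# Hypothetical walking distances (metres); every value is invented.
placebo    = [120, 135, 150, 110, 142, 128, 133, 147, 125, 138]
supplement = [155, 160, 148, 170, 152, 165, 158, 149, 172, 161]

# H0: the supplement makes no difference to mean walking distance.
t_stat, p_value = stats.ttest_ind(supplement, placebo)

alpha = 0.05                      # conventional significance level
if p_value < alpha:
    print("p =", round(p_value, 3), "- reject H0; the data are consistent with H1")
else:
    print("p =", round(p_value, 3), "- fail to reject H0")

Note that even here we do not ‘prove’ H1 directly; we only ask whether the observed data would be improbable if H0 were true.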
1.4.3 Hypothesis Generating Versus Hypothesis Testing
Some studies are observational rather than experimental in nature. The purpose of these studies is often to help in the generation of hypotheses by looking in the data for relationships between different subgroups and between variables. Once the relationships have been described, it may be necessary to set up a new study which is designed to test a specific hypothesis and to establish causal relationships between the variables relating to exposure and outcome. For example, Ancel Keys observed that there was a strong association between the average amount of saturated fat consumed in a country and the rate of death from coronary heart disease: the more saturated fat consumed, the higher the death rate. Numerous studies were carried out subsequently to test the hypothesis that saturated fat causes heart disease. Some repeated Ancel Keys’ original design comparing values between countries, but with better use of the available data, including more countries. Other studies compared changes in saturated fat consumption over time with changes in coronary heart disease mortality. Yet other studies looked at the relationship between saturated fat consumption and risk of heart disease in individuals. Not all of the studies came to the same conclusions or supported the hypothesis. It took some time to understand why that was the case.
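As a hedged illustration of what such a hypothesis‐generating, country‐level analysis looks like in practice, the sketch below correlates saturated fat intake with coronary heart disease mortality for a handful of invented countries; the figures are not Keys’ data and are chosen only to show the calculation.

import numpy as np
from scipy import stats

# Invented country-level figures, for illustration only (not Keys' data).
sat_fat_pct_energy  = np.array([8, 10, 12, 15, 18, 20, 22])         # % of energy from saturated fat
chd_deaths_per_100k = np.array([150, 180, 240, 310, 420, 480, 560])  # CHD deaths per 100 000 per year

r, p_value = stats.pearsonr(sat_fat_pct_energy, chd_deaths_per_100k)
print("Pearson correlation r =", round(r, 2))

# A strong correlation across countries generates a hypothesis; it cannot,
# by itself, establish that saturated fat causes heart disease.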
1.4.4 Design
The Dodecahedron: ‘If you hadn’t done this one properly, you might have gone the wrong way’.
When designing an experiment, set it up in such a way that the null hypothesis can be disproved. The key is to introduce and protect a random element in the design.
Consider some research options for the study to test whether anti‐oxidant vitamin supplements improve walking distance in patients with PAD. In the sequence below, each of the designs has a weakness, which can be improved upon by introducing and protecting further elements of randomization.
1 Choose the first 100 patients with PAD coming into the clinic, give them the treatment, and observe the outcome. Patients may naturally improve with time, without any intervention at all. Alternatively, there may be a placebo effect (patients show improvement simply as a result of having taken part in the study because they believe they are taking something that is beneficial and alter their behaviour accordingly), even if the treatment itself is ineffective.
This is a weak observational study. Introduce a control group which receives a placebo.
2 Allocate the …
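As a minimal sketch of the ‘random element’ that these designs are intended to introduce and protect, the fragment below randomly allocates 100 hypothetical patients to a supplement group or a placebo group; the patient identifiers, the 50/50 split, and the fixed seed are assumptions made for illustration.

import random

random.seed(2024)   # fixed here so the example is reproducible; in a real
                    # trial the allocation sequence would be concealed

patients = ["patient_%03d" % i for i in range(1, 101)]   # hypothetical IDs

# Shuffle the list, then split it: the first 50 receive the supplement,
# the remaining 50 receive the placebo.
random.shuffle(patients)
supplement_group = patients[:50]
placebo_group    = patients[50:]

print(len(supplement_group), "patients allocated to supplement,",
      len(placebo_group), "to placebo")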