Title: Planning and Executing Credible Experiments
Author: Robert J. Moffat
Publisher: John Wiley & Sons Limited
Genre: Physics
ISBN: 9781119532866
In certain quantum physics experiments, however, mere observation alters the system. In these experiments, if an observation is made, the system yields one result; if no observation is made, it yields a contrary result. These tests have been reproduced worldwide. If you are interested, we recommend Richard Muller's book (Muller 2016), or search “theory of measurement in physics.”
2.3 Beware Measuring Without Understanding: Warnings from History
There is an unfortunate tendency, among engineers particularly, simply to measure everything which can be measured, report the results, and hope that someone, someday, will find the results useful. This is a deplorable state of affairs, even though it has a long and honored past. In many respects, it follows the scientific tradition which emerged from Europe in the nineteenth century, when the art of measurement expanded so rapidly in Western civilization.
William Thomson, Lord Kelvin, famously proclaimed: “When you can measure what you are talking about and express it in numbers, then you have the beginning of knowledge.” That is still true today, but with some limitations (Thomson 1883).
Thomson's enthusiasm for measurements should be interpreted in terms of the times in which he lived. The European scientists of that period were infatuated with measurement. Every new measurement technique developed was applied to every situation for which it seemed to fit. There was no storehouse of knowledge about the physical world. Each new series of measurements revealed order in another part of the physical world, and it appeared that every measurement answered some question, and every question could be answered if only enough measurements were made. That was true partly because there were so many unanswered questions and partly because most of the questions which were being asked at that time could be answered by scalars.
Since that era, experimental work has become more expensive and more complex. The questions that lead us into the lab today usually involve the behavior of systems with many components or processes with several simultaneous mechanisms. It is not easy to translate a “need to know” into an experiment under such complex conditions. The first problem is deciding what scalars (i.e. what measurable items) are important to the phenomenon being investigated. This step often takes place so fast and so early in a test program that its significance is overlooked. When you choose what to measure, you implicitly determine the relevance of the results.
The early years of the automobile industry provide at least one good example of the consequences of “leaping in” to measurements. As more and more vehicles took to the road, it became apparent that some lubricating oils were “better” than others, meaning that automobile engines ran longer or performed better when lubricated with those oils. No one knew which attributes of the oils were important and which were not, and so all the easily measured properties of the “good” oils were measured and tabulated. The result was a “profile of a good oil.” The oil companies then began trying to develop improved oils by tailoring their properties to match those of the “good oil profile.” The result was a large number of oils that had all the desired properties of a good oil except one: they didn't run well in engines!
2.4 How Does Experimental Work Differ from Theory and Analysis?
The techniques of experimentation differ considerably from those of analysis, owing to the nature of the two approaches. It is worthwhile to examine some of these differences.
2.4.1 Logical Mode
Analysis is deductive and deals with the manipulation of a model function. The typical problem is: given a set of postulates, what is the outcome? Experiment is inductive and deals with the construction of models. The typical problem is: given a set of input data and the corresponding output data, what is the form of the model function that connects the output to the input? The analytical techniques are manipulative, whereas the experimental techniques are those of measurement and inference.
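The contrast can be made concrete with a short sketch (Python; the linear model and noise level are illustrative assumptions, not anything from the text): analysis evaluates a postulated model, while experiment works backward from input–output pairs to the model.

```python
import numpy as np

# Analysis (deductive): the model function is given; we work out its
# consequences for any chosen input.
def model(x, a=2.0, b=1.0):
    return a * x + b                      # postulated model function

x = np.linspace(0.0, 5.0, 6)
y_from_postulates = model(x)              # outcome follows from the postulates

# Experiment (inductive): only input/output pairs are available; we must
# infer the model (here, just its coefficients) from the data.
rng = np.random.default_rng(0)
y_observed = model(x) + rng.normal(0.0, 0.05, x.size)   # noisy observations
a_hat, b_hat = np.polyfit(x, y_observed, deg=1)         # least-squares inference
print(f"inferred model: y = {a_hat:.3f} x + {b_hat:.3f}")
```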
2.4.2 Persistence
An analysis is a persistent thing. It continues to exist on paper long after the analyst lays down his or her pencil. That sheet (or that computer program) can be given to a colleague for review: “Do you see any error in this?”
An experiment is a sequence of states that exist momentarily and then are gone forever. The experimenter is a spectator, watching the event – the only trace of the experiment is the data that have been recorded. If those data don’t accurately reflect what happened, you are out of luck.
An experiment can never be repeated – you can only repeat what you think you did. Taking data from an experiment is like taking notes from a speech. If you didn't get it when it was said, you just don't have it. That means that you can never relax in the lab. Any moment when your attention wanders is likely to be the moment when the results are “unusual.” Then, you will wonder, “Did I really see that?” [Please see “Positive Consequences of the Reproducibility Crisis” (Panel 2.1). The crisis, by way of the Ioannidis article, was mentioned in Chapter 1.]
The clock never stops ticking, and an instant in time can never be repeated. The only record of your experiment is in the data you recorded. If the results are hard to believe, you may well wish you had taken more detailed data. It is smart to analyze the data in real time, so you can see the results as they emerge. Then, when something strange happens in the experiment, you can immediately repeat the test point that gave you the strange result. One of the worst things you can do is to take data all day, shut down the rig, and then reduce the data. Generally, there is no way to tell whether unusual data should be believed or not, unless you spot the anomaly immediately and can repeat the set point before the peripheral conditions change.
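A minimal sketch of that working habit follows (Python; `acquire` and the 3-sigma flag are hypothetical stand-ins for a real rig's data system and acceptance criterion): each reading is reduced the moment it is taken, and a suspicious reading triggers an immediate repeat of the same set point.

```python
import random
import statistics

def acquire(set_point):
    # Hypothetical stand-in for one reading from the rig; here it simulates
    # a noisy linear response so the sketch runs end to end.
    return 2.0 * set_point + random.gauss(0.0, 0.05)

def run_with_real_time_reduction(set_points, expected, k=3.0):
    """Reduce each reading as it is taken; repeat a set point immediately
    if its residual departs sharply from the trend so far."""
    residuals, results = [], []
    for sp in set_points:
        reading = acquire(sp)
        residual = reading - expected(sp)
        if len(residuals) >= 3:
            sigma = statistics.stdev(residuals)
            if sigma > 0 and abs(residual) > k * sigma:
                print(f"anomalous point at {sp}: repeating before conditions drift")
                reading = acquire(sp)            # repeat while the rig is unchanged
                residual = reading - expected(sp)
        residuals.append(residual)
        results.append((sp, reading))
    return results

data = run_with_real_time_reduction([0.5 * i for i in range(1, 11)],
                                    expected=lambda sp: 2.0 * sp)
```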
2.4.3 Resolution
The experimental approach requires gathering enough input–output datasets so that the form of the model function can be determined with acceptable uncertainty. This is, at best, an approximate process, as a simple example shows. Consider the differences between the analytical and the experimental approaches to the function y = sin(x). Analytically, given that function and an input set of values of x, the corresponding values of y can be determined to within any desired accuracy, using the known behavior of the function y = sin(x). Consider now a “black box” which, when fed values of x, produces values of y. With what certainty can we claim that the model function (inside the box) is really y = sin(x)? That depends on the accuracy of the input and output data points and on the number and spacing of the points. With a set of data having some specified number of significant figures in the input and the output, we can say only that the model function, “evaluated at these data points, does not differ from y = sin(x) by more than …,” or alternatively, “y = sin(x) within the accuracy of this experiment, at the points measured.”
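A brief numerical sketch (Python; the 25 points and the three-significant-figure resolution are assumptions chosen for illustration) shows how such a bounded claim might be computed: the strongest statement the data support is a maximum deviation at the measured points.

```python
import numpy as np

def round_sig(a, sig=3):
    # Round each value to `sig` significant figures (limited-resolution data).
    a = np.asarray(a, dtype=float)
    mag = np.floor(np.log10(np.abs(a)))
    factor = 10.0 ** (sig - 1 - mag)
    return np.round(a * factor) / factor

# Black-box data: 25 points, each recorded to 3 significant figures.
x_true = np.linspace(0.1, 3.0, 25)
x_meas = round_sig(x_true)             # input as recorded
y_meas = round_sig(np.sin(x_true))     # output as recorded

# The strongest defensible claim is a bound at the measured points only.
deviation = np.max(np.abs(y_meas - np.sin(x_meas)))
print(f"At these points, the data do not differ from y = sin(x) "
      f"by more than {deviation:.2e}.")
```

Between the measured points the data say nothing at all; a different function could pass through the same rounded values.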
That is about all we can be sure of, because our understanding of the model function can be affected …