"If you have only one watch, you always know exactly what time it is. If you have two watches, you are never quite sure..."
The fact is, when we take a single reading, whether from a thermometer, a wristwatch, or a fancy laboratory instrument, we tend to accept the readout without really thinking about its validity. People seem to do this even when they know the reading is inaccurate. Your fancy digital watch is probably off by a minute or two right now!
Try as you might, even with the most expensive instruments and under the most ideal conditions, every measurement is subject to errors and inaccuracies. Worse still, modern digital instruments convey such an aura of accuracy and reliability that we forget this basic rule...
As a consequence, every reported measurement should include an estimate of the accuracy of the reading. This estimate is expressed as an error term, for example 23.4 ± 0.5 degrees Celsius.
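One convention for the error term is to round the error to one significant digit and then round the value to the same decimal place, so the reading does not imply more precision than the error supports. A minimal sketch in Python (the helper name and numbers are illustrative, not from the text):

```python
import math

def format_reading(value, error, unit="degrees Celsius"):
    """Round a reading so its stated precision matches its estimated error."""
    if error <= 0:
        raise ValueError("error must be positive")
    # Decimal place of the error's leading significant digit.
    digits = -int(math.floor(math.log10(error)))
    err = round(error, digits)   # error rounded to one significant digit
    val = round(value, digits)   # value rounded to the same decimal place
    return f"{val} +/- {err} {unit}"

print(format_reading(23.4172, 0.5))   # 23.4 +/- 0.5 degrees Celsius
print(format_reading(9.8765, 0.05))   # 9.88 +/- 0.05 degrees Celsius
```

Note how the extra digits of 23.4172 are dropped: quoting them would imply an accuracy the 0.5-degree error term cannot justify.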
The precision of an instrument reflects the number of significant digits in a reading;
The accuracy of an instrument reflects how close the reading is to the 'true' value measured.
Note that an accurate instrument is not necessarily precise, and instruments are often precise but far from accurate.
For example, you might read the time right down to the second, even though you know your watch is one minute slow. The reading is precise, but not accurate.
It makes little sense to quote values to a precision beyond the expected accuracy of the measurement. Without stating the estimated accuracy, such a reading cannot be used in serious computations. Worse, by quoting the time down to the second, you have implied an accuracy which you cannot justify.
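The watch example can be made concrete with a short sketch in Python (the numbers are made up for illustration): the display resolution sets the precision, while the offset from the true time sets the accuracy.

```python
# Hypothetical true time, in seconds since midnight (09:15:42).
true_time = 9 * 3600 + 15 * 60 + 42

# The watch reads out to the second (high precision: 1 s resolution)
# but runs one minute slow (poor accuracy: 60 s systematic error).
watch_reading = true_time - 60

resolution = 1                                   # smallest displayed step
accuracy_error = abs(watch_reading - true_time)  # how far from the truth

print(f"display resolution: {resolution} s")     # precision
print(f"actual error:       {accuracy_error} s") # accuracy
```

The two numbers are independent: improving the display to tenths of a second would sharpen the precision without moving the 60-second error at all.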
When can you trust a measurement? The best instrument is not necessarily the most expensive model, and certainly not the one with the most digits in the display. Knowing how an instrument works can help you judge how good the measurements you make with it really are.
Consider an imaginary shopping trip to the local mall to purchase an outdoor thermometer. The shop display holds dozens of thermometers, and several different types. Some are the traditional 'liquid-in-glass-tube' types, others are the 'dial' type with a large pointer on a round scale, while still others are the newest electronic digital readout models. There are several examples of each type, yet, looking closely, they all report a slightly different value for the indoor temperature of the store. Which one to buy?
You might choose the one with the average temperature reading. Then again, they might all be reading high, or low. Even if you find one that gives a good indoor reading at +23 degrees Celsius, will it work well next winter when it is -23 degrees Celsius?
Would any of these thermometers work at -70 degrees Celsius? Could you use them to measure boiling water? We need to know more. Specifically, we would like to know the operating range of each before using it anywhere other than inside the store. Thinking back to the EE laboratory: when you pick up a voltmeter, is it even designed to work at the voltage you are trying to measure?
The three different thermometer types all work on different principles. Each has characteristics and behaviour which depend on how it determines temperature (how does the ambient temperature get converted into a reading in degrees Celsius?). And while we are on the subject, what is a degree Celsius anyway?
The three thermometers illustrate that taking good measurements is more than writing down a reading from a scale. In the next sections, the design of typical laboratory instruments will be discussed. The question of where units originate will also be covered, along with the pressing question of how results should be correctly reported when a measurement is finally taken.