The ideal for a chronometer is that it should have a very steady, and therefore predictable, rate of loss or gain over the range of temperatures in which it is likely to be used. Nowadays, of course, the owner is more likely simply to wish to know how his or her chronometer performs in general, and is less interested in whether it meets the maker's specifications at a number of fixed temperatures, without unacceptable lag on returning from a high to a low temperature. I have covered this in some detail in *The Mariner's Chronometer*, and have suggested how the more ambitious owner could follow the procedures laid down by some makers.

The chronometer should first be regulated so that it gains or loses only a very few seconds a day, by adjusting the timing weights. In the days before readily available radio time signals, the instrument was often adjusted to have a small losing rate, so that corrections were additive, on the grounds that more arithmetical errors are made when subtracting than when adding. In those days, there was already much adding of six-figure logarithms and haversines in order to extract a position line from an observation, so the fewer opportunities for minor errors the better.

You need a time standard. Most countries have radio time signals on the hour, but it is more helpful to have them every minute or even every second. WWV and WWVH provide signals from the US National Bureau of Standards every minute on 5, 10, 15 and 20 MHz, and if they are audible they are very useful, though there is an uncertainty introduced by the transmission time of the signal, which could take about 0.13 seconds to travel half way around the world. If you are reading this, then you have a computer with access to the internet, and *time.is/UTC* is a very useful source of time, as it is possible to have it report the probable error in the time displayed by your computer's clock by clicking on the red logo in the top left-hand corner. For example, it tells me that the time now displayed is "exact", by which it means the error is −0.051 seconds, plus or minus 0.034 seconds.
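As a rough check on that transmission delay, the one-way travel time of a radio signal can be estimated from the path length and the speed of light. This is only a sketch under simple assumptions: it uses the great-circle distance, whereas a real HF signal travels a somewhat longer skywave path via the ionosphere, and may even arrive "long path", the long way around the Earth, which roughly doubles the distance.

```python
# Estimate the one-way travel time of a radio time signal.
# Assumption: straight-line travel at the speed of light over the stated
# distance; real ionospheric skywave paths are somewhat longer.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometres per second

def propagation_delay(path_km: float) -> float:
    """Return the signal travel time in seconds for a path of path_km km."""
    return path_km / SPEED_OF_LIGHT_KM_S

# Half way around the Earth along a great circle is roughly 20,000 km;
# a long-path signal the other way round covers roughly 40,000 km.
print(f"short path (20,000 km): {propagation_delay(20_000):.3f} s")  # 0.067 s
print(f"long path  (40,000 km): {propagation_delay(40_000):.3f} s")  # 0.133 s
```

Either way, the delay is at most a tenth or two of a second, which is why it matters only when timing to a fraction of a second.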

It is all very well to have such "exact" time, but how is one to relate it to the display of a chronometer which, if it is a mechanical one, displays it to the nearest half second, and if quartz, is more likely to display it to the nearest second? In the days when the mariner relied on a signal gun or time ball, a half second was the best he could do, unless he had a stop watch, commonplace now for a few dollars but at one time expensive and hard to find. Let us suppose that we have a stop watch readable to (i.e. having a precision of) 0.1 second and that we have checked its accuracy over, say, 30 minutes using our time source. Let us further suppose that our chronometer is indicating a time that is slow against the standard. We have only to start the watch at a given time and stop it when the chronometer indicates the same time to get the chronometer's error at that time. But what about your reaction time? You can find it approximately using your stop watch by starting the watch and deciding to stop it when the hand reaches a certain figure. My reaction time is fairly consistently 0.1 of a second (slow, of course). However, I am likely to be 0.1 second slow when starting the watch and 0.1 second slow when stopping it, so my reaction time has no bearing on the *interval* being timed. So it proves when I start the watch at a particular time as shown by the standard and stop it 10 seconds later by the standard: the stop watch consistently shows no error, within the limits of its precision.
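The cancellation of a consistent reaction time can be shown in a few lines: if both the start and the stop presses are late by the same amount, the measured interval equals the true one. A minimal sketch (the 0.1-second bias is simply the figure from the text, assumed perfectly consistent):

```python
# Show that a *consistent* reaction time cancels out of a timed interval:
# both presses are late by the same bias, so their difference is unchanged.

REACTION_BIAS = 0.1  # seconds late on every press (assumed consistent)

def measured_interval(true_start: float, true_stop: float,
                      bias: float = REACTION_BIAS) -> float:
    """Interval the stop watch records when each press is `bias` seconds late."""
    pressed_start = true_start + bias
    pressed_stop = true_stop + bias
    return pressed_stop - pressed_start

# A 10-second interval by the time standard:
print(f"{measured_interval(0.0, 10.0):.1f} s")  # 10.0 s -- the bias cancels
```

Only the *variation* in reaction time from press to press survives as an error, which is why checking that your reaction time is consistent matters more than its size.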

There may be residual errors due, say, to variations in the stop watch's rate over small intervals, and we can minimise such errors by letting the watch run for a fixed additional interval. Suppose a chronometer is showing about 4 seconds slow on UTC. We might, for example, start the watch at 16h 55m 45s and stop it when the chronometer indicates 16h 56m 15s, subtracting 30 seconds from the interval (about 34 seconds) shown on the watch. This way, the errors are distributed over 34 seconds instead of 4 seconds. Another way to minimise errors is to take several readings and average them. *If the errors are random*, averaging 4 readings will halve the error, nine will reduce it to a third, 16 to a quarter and so on, so that the benefit of averaging rapidly loses its attraction. If you have checked that your reaction time is consistent and that the watch accurately times intervals, you may well be happy to do only three readings to check for gross errors.
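The arithmetic of the extended-interval method, and the diminishing returns of averaging, can be sketched as follows; the 30-second offset and the 34-second watch reading are those from the example above, and the square-root law assumes the errors are random and independent:

```python
import math

# Extended-interval method: start the watch on a whole half-minute by the
# standard, stop it when the slow chronometer shows a time 30 s later, then
# subtract the 30 s offset; what remains is the chronometer's error.
def chronometer_error(watch_reading: float, offset: float = 30.0) -> float:
    """Error in seconds (slow) from the interval shown on the stop watch."""
    return watch_reading - offset

print(f"{chronometer_error(34.0):.1f} s slow")  # 4.0 s slow, as in the example

# Averaging n readings with random errors divides the error by sqrt(n):
for n in (4, 9, 16):
    print(f"{n:2d} readings -> error x {1 / math.sqrt(n):.2f}")
```

The loop makes the diminishing returns plain: going from 4 to 16 readings quadruples the work but only halves the error again.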

Traditionally, makers took the rate daily over five days at each of three descending temperatures, e.g. 30, 15 and 5 Celsius, and then back up to the original temperature over five days each (e.g. 15 and 30 Celsius). The constancy of rate is the important factor, so the average of each five-day interval was calculated, and then the deviation of each day from this mean. The mean of these deviations was counted as the important indicator of "goodness of rate". The interested amateur might prefer instead to get an idea of how the chronometer performs at room temperature by calculating the mean deviation from the mean over 30 days. One of my chronometers that I rated last December had a mean rate over thirty days of −0.3 seconds/day, and the mean deviation from the mean was 0.4 seconds. The statistically literate might prefer to know that the standard deviation (for n−1) was 0.64 seconds.
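These rating statistics are easy to reproduce from a log of daily rates with the Python standard library. A sketch, using made-up daily rates standing in for a real observation log (the figures below are illustrative, not the chronometer quoted in the text):

```python
import statistics

# Hypothetical daily rates in seconds/day over a rating period
# (negative means the chronometer is losing).
daily_rates = [-0.2, -0.8, 0.3, -0.5, -0.1, -0.6, 0.1, -0.4]

# Mean rate over the period:
mean_rate = statistics.mean(daily_rates)

# Mean (absolute) deviation of each day's rate from the mean rate --
# the traditional "goodness of rate" figure:
mean_deviation = statistics.mean(abs(r - mean_rate) for r in daily_rates)

# Sample standard deviation, the "n-1" form mentioned in the text:
std_dev = statistics.stdev(daily_rates)

print(f"mean rate:                {mean_rate:+.2f} s/day")
print(f"mean deviation from mean: {mean_deviation:.2f} s")
print(f"standard deviation (n-1): {std_dev:.2f} s")
```

For a 30-day rating, `daily_rates` would simply hold thirty entries; the constancy figures, not the mean rate itself, are what tell you how trustworthy the instrument is.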
