Hi Andreas!
It's Barry Setterfield here. It seems that your curve-fitting routine is missing something. Did you use all 163 data points, or only the 121 points that Montgomery preferred? Let me run through a couple of items. First of all, we spent considerable time on curve-fitting and data analysis at the Flinders University Maths Department during 1986-1987. We had the Professor of Statistics helping us, and he was so impressed with the data trend that he requested a seminar on the matter for the whole Maths Department by Trevor Norman and myself.
However, it was not just us. Dr. David Malcolm, a researcher from Newcastle University, did his own analysis. He commented that any linear decay gave a better fit than the assumption of a constant speed of light. The data residuals dropped from over 22,000 under a constant light speed to under 2,000 under a linear decay scenario.
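To make the idea of comparing residuals concrete, here is a minimal sketch of my own (not Dr. Malcolm's actual analysis, which used the full data set). As a toy stand-in I use only the three Pulkova period means quoted further on in this letter, so the residual totals differ from his, but the point carries: a fitted line leaves far smaller residuals than the constant-c hypothesis.

```python
# Toy model comparison: constant c versus a least-squares line,
# using only the three Pulkova period means quoted in this letter.
years = [1765.0, 1865.0, 1915.0]          # midpoints of the three periods
c_obs = [300555.0, 299942.0, 299812.0]    # mean measured c, km/s
C_NOW = 299792.458                        # modern value of c, km/s

# Sum of squared residuals under the constant-c hypothesis
ssr_const = sum((c - C_NOW) ** 2 for c in c_obs)

# Ordinary least-squares line c = a + b*t, fitted by hand
n = len(years)
xbar = sum(years) / n
ybar = sum(c_obs) / n
sxx = sum((x - xbar) ** 2 for x in years)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, c_obs))
b = sxy / sxx                 # slope, km/s per year (negative = decay)
a = ybar - b * xbar
ssr_lin = sum((y - (a + b * x)) ** 2 for x, y in zip(years, c_obs))

print(ssr_const, ssr_lin)     # the linear model leaves much smaller residuals
```

Even on this crude three-point version, the constant-c residual sum is roughly seventy times that of the linear fit.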
I suggest that you also seriously examine the two analyses by Montgomery and Dolphin linked on our website. There they show conclusively that there is a statistically significant decay. Unfortunately, their graphs are not displayed at those links on our website, or you would see the force of their argument.
Let me come at this another way. Let us take the aberration method of measurement. In this case, we have a series of 63 determinations, made up of hundreds of individual measurements, taken at Pulkova Observatory using the same instruments over the period from 1740 to 1940. The period 1765 +/- 25 years gave an average value for lightspeed of about 300,555 km/s, or 763 km/s above its current value. In 1865 +/- 25 years, the mean value obtained was 299,942 km/s, which was 150 km/s above its current value. In 1915 +/- 25 years, the results averaged 299,812 km/s, or 20 km/s above its current value. Here the same instruments are being used over a 200-year period, so this is not instrumentation improvement or a change in technique. A very real decay has been measured.
A simple, brief analysis of this data set gives a t-statistic indicating that the hypothesis that c has been constant can be rejected at the 93.9% confidence level. A least-squares linear fit to the data gives a decay of 4.83 km/s per year, with the decay correlation r = -0.947 accepted at the 99% confidence level. Furthermore, these data reveal (as do others) that the decay is non-linear and flattening out. In addition, the method has a built-in systematic error which causes all values obtained to be systematically low compared with the actual value pertaining at the time.
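For a back-of-envelope version of that fit, here is a short sketch using only the three period means given above. The published figures of 4.83 km/s per year and r = -0.947 come from all 63 determinations, which I have not reproduced here, so these numbers differ somewhat, but the slope, correlation, and t-statistic are computed in the standard way.

```python
import math

# Three Pulkova period means from the aberration series (see above)
years = [1765.0, 1865.0, 1915.0]
c_obs = [300555.0, 299942.0, 299812.0]   # km/s

n = len(years)
xbar, ybar = sum(years) / n, sum(c_obs) / n
sxx = sum((x - xbar) ** 2 for x in years)
syy = sum((y - ybar) ** 2 for y in c_obs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, c_obs))

slope = sxy / sxx                        # km/s per year (negative = decay)
r = sxy / math.sqrt(sxx * syy)           # correlation coefficient
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)  # t-statistic for r, df = n-2

print(f"slope = {slope:.2f} km/s/yr, r = {r:.3f}, t = {t:.2f}")
```

On the three means alone, the slope comes out near -5.1 km/s per year with a strongly negative correlation, broadly consistent with the full-data result quoted above.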
In all, there have been 16 methods used to measure the speed of light, c. Each method, as a class of observations, has shown the decay in c. Furthermore, on every occasion that c has been measured with the same equipment at a later date, a lower value for c has been obtained. Each of Michelson's determinations revealed a decline from the previous values. The first two were done with the same equipment, as were the last two, and a significant decay was registered each time the same equipment was reused.
The decline in the measured value of c was the source of comment in the scientific press. Writing in the scientific journal Nature on 4th April 1931, p. 522, Gheury de Bray said: "If the velocity of light is constant, how is it that, INVARIABLY, new determinations give values which are lower than the last one obtained... There are twenty-two coincidences in favour of a decrease in the velocity of light, while there is not a single one against it" (his emphasis).
I would like to suggest, Andreas, that it would be profitable to examine all the evidence in detail with an open mind before coming to some final conclusion. After all, the most rapid way to progress towards the truth of any situation in science is to examine the areas where there is a discrepancy or an anomaly between data and theory. This is probably not the method that will make you the most popular, but it is the method whereby science is assured of progress rather than stagnation.