Update 28 April 2014: Climate Dialogue summaries now online
The summary of the second Climate Dialogue discussion on long-term persistence is now online (see below). We have made two versions: a short and an extended version. We apologize for the delay in publishing the summary.
Both versions can also be downloaded as pdf documents:
Summary of the Climate Dialogue on Long-term persistence
Extended summary of the Climate Dialogue on Long-term persistence
In this second Climate Dialogue we focus on long-term persistence (LTP) and the consequences it has for trend significance.
We slightly changed the procedure compared to the first Climate Dialogue (which was about Arctic sea ice). This time we asked the invited experts to write a first reaction on the guest blogs of the others, describing their agreement and disagreement with it. We publish the guest blogs and these first reactions at the same time.
We apologise again for the long delay. As we explained in our first evaluation it turned out to be extremely difficult to find the right people (representing a range of views) for the dialogues we had in mind.
Climate Dialogue editorial staff
Rob van Dorland, KNMI
Marcel Crok, science writer
Welcome back to Climate Dialogue for our second dialogue. We slightly changed our procedure. This time we asked the invited experts to give a first reaction on the guest blogs of the two other discussants. We hope this will generate a guide for the following discussion.
First comments on the trend discussion by Armin Bunde:
I think that Bunde provides a nice overview of the statistics behind long-term persistence (LTP) and the mathematical theory. He also provides some examples which suggest that there are a number of areas where processes exhibit LTP. However, I’m not convinced that the climate system necessarily fulfils all the assumptions made in the LTP analysis.
It is especially the proposition that any observed value xi in an LTP record depends on all previous points which has not been established for geophysical processes. Lorenz1 published in 1963 the concept of deterministic nonperiodic flow, also known as ‘chaos’2.
The so-called ‘butterfly effect’, an aspect of chaos theory, is well established in meteorology: there is a fundamental limit to the predictability of future weather because the system loses the memory of its initial state after a certain time period. The reason for this is that the future outcome is sensitive to infinitesimally small differences in the description of the state of the atmosphere. This sensitivity can be estimated through a set of Lyapunov exponents.
Failure to predict long-range climate variability with statistical models suggests that there is little predictable precursory signal (memory) on time scales longer than months over large parts of the world. El Niños are notoriously hard to predict from one year to the next.
Nevertheless, chaotic systems will look like LTP processes because the strange attractor describing the state of the system tends to reside in certain ‘regimes’ for some time before flipping over to another regime. So does it matter if a chaotic process looks like LTP but the memory of the initial conditions is ‘lost’ after some time? I haven’t really thought about this before.
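The sensitivity to initial conditions that Lyapunov exponents quantify can be illustrated with a toy chaotic system. A minimal sketch, using the logistic map rather than an atmospheric model (purely for illustration; the choice of map and parameters is mine, not Benestad’s):

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000):
    """Estimate the Lyapunov exponent of the logistic map
    x_{n+1} = r x_n (1 - x_n) by averaging log|f'(x)| along an orbit.
    A positive exponent means nearby orbits diverge exponentially,
    so memory of the initial state is lost after a finite time."""
    x = x0
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += np.log(abs(r * (1.0 - 2.0 * x)))
    return total / n

# For r = 4 the exact value is ln 2 ≈ 0.693
print(lyapunov_logistic())
```

An error in the initial state thus grows roughly as exp(λt), which is the quantitative content of the ‘butterfly effect’ described above.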
It is true that e.g. temperature cannot be characterized by the AR(1) process, and this is because we know a priori that it is not just noise. The temperature is physically forced by a number of processes, be it changes in the Earth’s orbit around the sun, changes in the sun, changes in geology, volcanic eruptions, or changes in the ocean currents. We can simulate such changes with climate models.
While the AR(1) model may not necessarily be the best model, it is difficult to know exactly what the noise looks like in the presence of a forced signal. State-of-the-art detection and attribution work does not necessarily rely on the AR(1) concept, but uses results from climate models and error-covariance matrices based on the model results to evaluate trends, rather than simple AR(1) methods.
Although there are ways to deal with trends in data in the context of LTP analyses, there are drawbacks associated with the uncertainties of the different models, e.g. specifying polynomial trends and their coefficients. Furthermore, a combination of a Fourier transformation and a random number generator (‘phase scrambling’) will produce numbers which mimic the statistical characteristics of the original data, but will not easily distinguish between signal and noise.
I would not be surprised if trend tests suggest that the long-term increase in global mean sea surface temperature does not qualify as statistically significant, and the reason for that is a number of ocean processes rather than the inertia that resides in the oceans. We know that ocean circulation patterns such as the El Niño Southern Oscillation, the Atlantic Meridional Overturning Circulation, and the Pacific Decadal Oscillation are responsible for slow variations with time scales of months to decades.
If we look at different quantities such as the ocean heat content and the global mean sea level, the trends are more prominent and such variations are small in comparison.
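The phase-scrambling idea mentioned above can be sketched in a few lines (a generic surrogate-data construction, not the exact implementation any of the discussants used): randomising the Fourier phases preserves the power spectrum, and hence the autocorrelation structure, while destroying any phase-coherent signal.

```python
import numpy as np

def phase_scramble(x, rng=None):
    """Return a surrogate series with the same power spectrum as x
    (hence the same autocorrelation) but randomised Fourier phases."""
    rng = np.random.default_rng(rng)
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0            # keep the mean component
    if n % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

# A persistent-looking series (a random walk) and its surrogate:
x = np.cumsum(np.random.default_rng(0).standard_normal(512))
y = phase_scramble(x, rng=1)
# The surrogate preserves the variance (up to floating point)
print(np.var(x), np.var(y))
```

The surrogate has the same second-order statistics as the original, which is precisely why such a test by itself cannot tell signal from noise.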
1. Lorenz, E. Deterministic nonperiodic flow. J. Atmospheric Sci. 20, 130–141 (1963).
2. Gleick, J. Chaos. (Cardinal, 1987).
First comments on the trend discussion by Demetris Koutsoyiannis:
Koutsoyiannis offers a definition of ‘climate’, which can be summarised as ‘what are the ranges, and how often can I expect a particular weather event to take place?’ However, climate is not just statistics, but also about physics: the flow of energy and the transport of matter.
Modern climatology has drawn on experience from weather forecasting and physics in addition to statistics, and the success of daily operational weather forecasting provides convincing evidence that we do have a good understanding of the role various processes in the atmosphere and the oceans play.
The question of whether 30 years is an appropriate time scale for defining climatic normals may seem a bit academic – it’s a practical time horizon in terms of the time scale of a generation or the lifetime of a construction. To a great extent, climatology evolved from the need to provide society with guidance on how to design infrastructure and plan agriculture. However, this is different from the discussion regarding climate change.
As far as I know, it is well known that the effective number of degrees of freedom is less than the number of observations due to persistence and autocorrelation. This has long been tacit knowledge, and the apparent persistence has to do with the chaotic nature of the weather system.
I disagree with him and think there is no static-climate assumption behind the ‘weather vs. climate’ dichotomy – it’s a question of probabilities and predicting the probability density function (pdf). Nobody says that the pdf needs to be constant. In fact, the definition of a climate change is just that: a shift in the pdf over time.
The question of whether we are now seeing a trend may be answered differently depending on one’s assumptions. Does it mean that the pdf for the temperature now is different from that of the past? If it is, then that is of relevance to society and it may call for actions to adapt. Is it something we ought to expect because this happens all the time – as Koutsoyiannis suggests? Alternatively, we may ask whether the recent 11 warmest years would have been possible at all without a forcing such as GHGs.
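The reduction in effective degrees of freedom can be sketched with the standard AR(1) rule of thumb n_eff = n (1 − r1)/(1 + r1), where r1 is the lag-1 autocorrelation (a textbook correction often used in climatology; the formula and the example series are mine, not taken from any of the posts):

```python
import numpy as np

def effective_sample_size(x):
    """Effective number of independent observations for an AR(1)-like
    series, using n_eff = n * (1 - r1) / (1 + r1) with r1 the lag-1
    autocorrelation (a common rule of thumb)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return n * (1 - r1) / (1 + r1)

rng = np.random.default_rng(42)
white = rng.standard_normal(1000)
# A strongly persistent AR(1) series: x_t = 0.9 x_{t-1} + eps_t
ar = np.empty(1000)
ar[0] = rng.standard_normal()
for t in range(1, 1000):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()

print(effective_sample_size(white))  # close to the nominal n = 1000
print(effective_sample_size(ar))     # far fewer "independent" points
```

For the persistent series the 1000 observations carry the information of only a few dozen independent ones, which is why significance tests that ignore autocorrelation overstate their confidence.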
I think that we may be asking different questions.
Furthermore, we make different assumptions. I think that the following assertion is invalid in the context of geophysical processes: ‘by averaging to another scale, daily, monthly, annual, decadal, centennial, etc. we get other stochastic processes, not qualitatively different from the hourly one’. The reason is that different known physical processes take place with different preferred time scales.
I also do not see that the idea of long-term persistence (LTP) is omnipresent. Take classical statistical physics, for instance: classical statistics work quite well there, and the concept of temperature is indeed an aggregate based on classical statistics in the absence of LTP.
For geophysical processes, chaos plays a role and may give an impression of LTP, and still the memory of the initial conditions is lost after a finite time interval.
While the case of the Nile is interesting, I do not think it is a valid analogy for the global mean temperature – the physical processes are just not the same.
The Nile river flow is influenced by the precipitation over one catchment, which again is determined by the transport of airborne moisture through the atmospheric circulation over the eastern African region. It is affected by monsoons and geography, in addition to the management of the river.
The global mean temperature, on the other hand, is set by the balance between the energy received from the sun and the loss to space. Furthermore, it is sensitive to the altitude from which the heat escapes to space. This is to some extent linked to the global hydrological cycle, however.
Trends in the Earth’s mean temperature will have implications for the energy budget, as the natural system is constrained by the laws of physics. The Earth’s atmosphere and oceans represent a closed system in space where only energy enters and leaves and where the physics keeps the state in check. There are fewer restoring forces for river levels, and the physical constraints on rainfall over a limited region are not as tight.
I too think that white noise is not really a realistic assumption, and I think that most of my colleagues agree. It is news to me that climate is determined on just one time scale. Where is this stated?
Another difference in opinion is that we know the autocorrelation function is not independent of time scale: at hourly scales, there is a 24-hour cycle; at a daily scale, there is a distinct 365.25-day cycle; and beyond the annual cycle, the presence of long-term variability becomes more diffuse.
There are some signatures of volcanic eruptions, El Niños, and decadal variations, but these are irregular rather than regular. At geological scales, there have been a number of ice ages, but there is no evidence suggesting that they have lasted more than a few million years. Thus we cannot assume that there will always be another natural mechanism acting on a bigger scale.
I also think that it is a big mistake to use the Hurst coefficient with the HadCRUT4 data without distinguishing forced changes from internal changes – what is noise and what is signal? Or what is the probability that we would see similarly high temperatures without forcings? In my mind, Koutsoyiannis mixes forced variations with noise, which muddles the understanding and produces misleading results.
First comments on Armin Bunde’s post
It is a great pleasure to contribute, together with you, in this dialogue and to make this comment on your post, following the suggestion of the CD Editorial Team to identify points of agreement and disagreement. I am particularly glad to report my agreement on the major issues with a few exceptions to which I will refer below. I endorse your statements:
I decided not to use mathematical equations in my post, so I welcome the inclusion of equations in yours. At first glance they seem consistent with mine, even though we use different notation. For example, I denote the Hurst coefficient by H (the only mathematical symbol that I used in my post) and it seems that it is identical to your α, while it is related to your γ by γ = 2(1 – H).
On the other hand, while I agree with your notion of the “finite size effect” I would not agree with treating it in terms of inequalities, as you do, because inequalities give the false impression that in some areas (for large sample size N or for small lag s, to use your own notation) your statistical estimations are perfectly safe. They never are, particularly because LTP magnifies the uncertainty and also introduces (negative) bias, sometimes substantial bias, in estimators which according to classical statistics are unbiased (e.g. that of the variance; see Koutsoyiannis, 2003, and Koutsoyiannis and Montanari, 2007). Instead of using inequalities to identify seemingly safe areas, I think it is better to explicitly take the bias into account in all cases. I will come back to this later.
Regarding your results for individual stations, shown in your second figure, these are mostly consistent with mine; however, from recent analyses of thousands of rain gauges worldwide (Iliopoulou et al., 2013) it seems that precipitation records have an average H closer to 0.6 than to the 0.5 you report. It also seems that LTP is again more appropriate for them than the AR(1) model.
Now I am coming to your conclusions numbered (i) and (ii). First, while in my post I refer to the combined land and sea surface temperatures (the HadCrut4 data set as mentioned in the introductory entry by the CD Editorial Team), you examine separately the sea surface temperature and the land temperature. I have made analyses also for these (based on the HadSST2 and CRUTEM4 data respectively). So, I can agree with your conclusion (i), i.e.:
Indeed, the behaviour I see for SST is not different from that of my Figure 5, except that the climatic difference for the entire 134-year period (1879-2012) is 0.5°C (but if you limit it to the last 100 years it indeed becomes 0.6°C as you report).
Now coming to the land temperatures, again I find a similar behaviour as in my Figure 5, the only difference being that the few points going out of the critical values in this case refer to the lag 120 years rather than to the lag 90 years shown in the figure (the latter all remain below the critical values). This does not agree with your conclusion:
For the computational part of the disagreement, please take into account that the standard deviation of the land temperature is more than 50% higher than that of the sea surface temperature. Also please recall our discussions when the paper by Koutsoyiannis and Montanari (2007) was published, in which we criticized your approach in Rybski et al. (2006) and, in particular, the fact that you did not take the uncertainty/bias in the standard deviation into account in your calculations. I had the impression that you had agreed on that, but perhaps you have forgotten it by now, as I infer from your list of references. 🙂
But there is a disagreement also on the logical part of your conclusion (ii). I believe if you accept that the sea surface temperature has strong LTP, then logically the land temperature will too, so I cannot agree that the latter has “comparatively low persistence”. We are speaking about long-term persistence, which manifests itself on decadal, centennial, etc. time scales. I cannot imagine that, in the long term, the land would not be affected by the long-term fluctuations of the sea temperature. I believe the climates of sea and land are not independent of each other—particularly in the long term.
Iliopoulou, T., S.M. Papalexiou, and D. Koutsoyiannis (2013), Assessment of the dependence structure of the annual rainfall using a large data set, European Geosciences Union General Assembly 2013, Geophysical Research Abstracts, Vol. 15, Vienna, EGU2013-5276, European Geosciences Union.
Koutsoyiannis, D. (2003), Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48 (1), 3–24.
Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
Rybski, D., A. Bunde, S. Havlin and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi: 10.1029/2005GL025591.
First comments on Rasmus Benestad’s post
Your introduction, and particularly your reference to the paper by Cohn and Lins (2005) and to your post “Naturally trendy?”, reminded me of our discussion in your post on RealClimate seven years ago. That was my first contribution to a blog, and it, together with its subsequent reposting on ClimateAudit by Steve McIntyre, brought me into contact with many colleagues, including yourself, as well as Tim Cohn and Harry Lins. So, I thank you for that post.
I also welcome your recognition and explanation of LTP in your current post; these are useful for all of us (particularly for myself, as I estimate that from now on I will not have as many difficulties in publishing papers related to LTP as in the case of the paper by Koutsoyiannis and Montanari, 2007—see its “prehistory” at http://itia.ntua.gr/781/). 🙂
Most of all, I welcome your Figure 2, which supports all that I have been saying for years. The autocorrelation of synthetic climate, produced by a climate model without GHG change, becomes 0 for lag as small as 5 and keeps a value around 0 for all subsequent lags. As I have described in my post, with zero autocorrelation the climate would be flat. But a static climate has never been the case on Earth. In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.
I believe it is unfortunate that LTP has been commonly described in the literature in association with autocorrelation and as a result of memory mechanisms. It is the change, mostly irregular and unpredictable in deterministic terms, that produces the LTP. The high autocorrelation is just the reflection of change upon that mathematical concept. The first who understood that was Klemes (1974).
I guess there is a difference in the way we view change. Perhaps what I call change you view as “variation” or “internal variability”. But I have difficulty understanding what you call change. You say:
A climate change happens when the weather statistics are shifted.
I hope you agree that these statistics are a human invention to describe Nature, not a natural property per se. Furthermore, their assigned values depend on assumptions, like the time scale of averaging, e.g. 10 or 30 years. Whatever these assumptions are, I cannot imagine any two, adjacent or not, time periods whose statistics (estimated from data, e.g. temporal average) would be the same. In other words, changes, call them shifts if you wish, occur all the time. This shows that your distinction of “variation” and “change” is ambiguous.
You are clearer when you distinguish “signal” from “noise” as you associate the former with “man-made climate change”. But I would never agree with your term “noise” to describe the natural change. Nature’s song cannot be called “noise”. Most importantly, your “signal” vs. “noise” dichotomy is something subjective, relying on incapable deterministic (climate) models and on, often misused or abused, statistics.
I found very interesting the argument you offer with respect to potential misuse of statistics:
Statistical LTP-noise models used for the detection of trends involve circular reasoning if adapted to measured data, because these data embed both signal and noise.
I agree that circular reasoning can be a real risk. I also accept your deliberation that you need 70-90 years for a meaningful inference about LTP. Based on this, I can offer several ways to avoid the circular reasoning.
a. The HadCrut4 data set is 163 years long. So, let us exclude the last 63 years and try to estimate H based on the 100-year-long period 1850-1949. The Hurst coefficient estimate becomes 0.93 instead of the 0.94 of the entire period. Is LTP artificial then?
b. Look at Koutsoyiannis and Montanari (2007), Table 1. It examines several proxies for temperature for the last 500-2000 years and provides two sets of H estimates: One for the entire period covered by each of the proxies and one for the period 1400-1855, common for all proxies. Do you see any noteworthy difference (say, greater than 0.03) in the estimates of H between the two periods? Don’t these high values of H (0.86-0.93 for the period 1400-1855) indicate LTP? Can they be the result of anthropogenic origin? Does your “circular logic” argument apply to them?
c. Look at Markonis and Koutsoyiannis (2013), Fig. 9. This shows that a combination of proxies supports the presence of LTP with H > 0.92 for time scales up to 50 million years. Is this a result of your “signal” which you identify with “man-made climate change”?
Furthermore, if you trust only instrumental series, you may look at the Nile example in my post, which I trust you can assume to be free of “man-made climate change” as it does not go beyond the 15th century. With respect to instrumental temperature records, you are right that most thermometer records do not go back in time longer than a century. However, there are some that do—some exceptions, as you say. As handy examples, I can offer those of Vienna (see Fig. 3 in Koutsoyiannis, 2011) and Berlin/Tempelhof (see Fig. 13 in Koutsoyiannis et al., 2007). Again the LTP is evident, even without considering the last period (for example, as I recall from the latter publication, the first one-third of the record, years 1756-1839, gives H = 0.83, while for the total period H = 0.77).
In any case, not only is the risk of circular reasoning a real one, but it also concerns much wider areas than LTP. As another example, consider the “Mexican Hat Fallacy” that I referred to in my post. The circular reasoning here is that I formulate a hypothesis after I have seen the data. Deterministic modelling can also be affected by circular reasoning. The hydrological community has given importance to avoiding circular logic. Thus, it has been a standard practice in modelling to follow the split-sample technique (Klemes, 1986). We split the available observations into two (sometimes three) segments. We use one segment for building and calibrating a model and the other one for validating it. Has such a model validation technique been used in climate models? From my experience (Koutsoyiannis et al., 2008, 2011; Anagnostopoulos et al., 2010) I can only imagine a negative answer.
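Sub-period estimates of H like those above can be reproduced even with a deliberately naive aggregated-variance estimator. A minimal sketch (note: this is not Koutsoyiannis’ own estimator, which includes a bias correction; see Koutsoyiannis, 2003):

```python
import numpy as np

def hurst_aggvar(x, scales=None):
    """Naive Hurst exponent via the aggregated-variance method:
    for an LTP process, Std[mean of k consecutive values] ~ k^(H-1),
    so H is one plus the slope of log(std) vs log(k).
    This simple estimator is negatively biased for strong LTP."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if scales is None:
        scales = [k for k in (1, 2, 4, 8, 16, 32) if n // k >= 10]
    log_k, log_s = [], []
    for k in scales:
        m = n // k
        means = x[: m * k].reshape(m, k).mean(axis=1)
        log_k.append(np.log(k))
        log_s.append(np.log(means.std(ddof=1)))
    slope = np.polyfit(log_k, log_s, 1)[0]
    return slope + 1.0

print(hurst_aggvar(np.random.default_rng(0).standard_normal(4096)))  # white noise: H ≈ 0.5
```

Applied to a sub-period of a record (e.g. the first 100 years), such an estimator gives a check on whether the estimated LTP depends on the recent, possibly forced, part of the data.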
In the beginning of your post you present a graph that compares the HadCRUT4 data series with results of a regression model “based on greenhouse gases, ozone and changes in the sun” as you say. You do not give a citation to see the details. So, the natural question is: Did you use a split-sample technique for your model, or any similar validation technique, in which a data segment is kept out when building your model and calculating the regression parameters? If yes, then the danger for circular reasoning is minimal—but not zero, because in reality you have seen all the data. So, what do you say?
Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110.
Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi: 10.1029/2005GL024476.
Klemes, V. (1974) The Hurst phenomenon: A puzzle?, Water Resources Research, 10 (4), 675-688.
Klemes, V. (1986), Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31(1), 13–24.
Koutsoyiannis, D. (2011), Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432.
Koutsoyiannis, D., A. Efstratiadis, and K. Georgakakos (2007), Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches, Journal of Hydrometeorology, 8 (3), 261–281.
Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, and A. Christofides (2008), On the credibility of climate predictions, Hydrological Sciences Journal, 53 (4), 671-684.
Koutsoyiannis, D., A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos, and N. Mamassis (2011), Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56 (7), 1334–1339.
Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429, doi: 10.1029/2006WR005592.
Markonis, Y., and D. Koutsoyiannis (2013), Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207.
First comments on Benestad’s and Koutsoyiannis’ guest blogs
I actually can agree with most of what Demetris writes on LTP in his interesting and pedagogic blog. I only think that the tools he used (similar to ours in the 2006 paper by Rybski et al.) are not the optimum tools. Among climate scientists, the best accepted tool is the exceedance probability (as I wrote in my blog), from which the significance of a trend can be derived. Unfortunately, since the exceedance probability for LTP records was not known before we published our main results in 2009 and 2011, climate scientists used the wrong assumption of an AR(1) process to estimate the significance of a trend and considerably overestimated it this way. I am sure that when Demetris uses the exceedance probability and the analytical result we published in 2011, he will arrive at our conclusions, too.
In contrast, I cannot agree with a large fraction of Rasmus’ blog. My disagreement is not based on philosophical arguments but simply on mathematics and modern time series analysis. LTP is not an abstract issue, but a process with long memory whose autocorrelation (for stationary records) decays in time by a simple power law. Accordingly, ENSO, for example, is not an example of LTP. It is a very complex quasi-oscillatory phenomenon which is difficult to forecast, but it is not LTP. I think it is very important in science, and also in climate science, to be precise; without being precise, modelling is impossible and progress hard to achieve.
It is very easy to generate LTP records numerically; one only needs to know how to generate Gaussian random numbers and how to make a Fourier transform. This way, one can very efficiently study the properties of LTP records theoretically. By doing this properly, one will find that the autocorrelation function (ACF) as used by Rasmus is unfortunately not an appropriate tool to detect LTP in records with a length below 50,000. As we showed in 2009 in Physical Review E, there are strong finite-size effects which are even worse than anticipated by Rasmus. But this is NOT a problem of LTP, only of the employed method. The second problem of the ACF can also be seen in Rasmus’ Fig. 1: it depends on the external trend. Therefore, if one only knows the ACF as a tool for detecting LTP, one may be led to think, as Rasmus does in his summary, that we do not really know what the LTP in the real world would be like without GHG forcing.
If I had worked in this field 20 years ago, I would have agreed with this statement. But nowadays, there is a large number of methods available that are able to detect the natural fluctuations in the presence of simple monotonic trends. Two of them are detrended fluctuation analysis (DFA) and the wavelet technique (WT). These are techniques which have been applied in many different disciplines, ranging from physiology via computer science to the financial markets, and tested extensively. By combining them one can quantify LTP on time scales up to N/4, where N is the record length, and N should be above 500. This means that from a monthly record of 40 years one can detect LTP when using the proper methods. Again, when using the ACF, 150 years are not enough because of the tremendous finite-size effects.
We appreciate today that monthly temperature data are LTP on all time scales, with Hurst exponents around 0.65 for continental and around 0.8 for island stations and sea surface temperatures. Long historical runs from AOGCMs are able to reproduce this behaviour. Daily data additionally show short-term persistence on scales up to 2 weeks, which is averaged out in monthly data. Accordingly, the problem is NO LONGER to determine the Hurst exponent in the presence of anthropogenic warming, but to estimate the contribution of e.g. GHGs to the warming.
Within time series analysis, this can be done as I wrote above and in my blog. But as a consequence of the LTP in the temperature data, the error bars are very large, considerably larger than for short-term persistent records. Nevertheless, except for the global sea surface temperature, we have obtained strong evidence from this analysis that the present warming has an anthropogenic origin.
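The Fourier recipe Bunde refers to (Gaussian random numbers plus a Fourier transform) can be sketched as follows. This is a generic spectral-synthesis approximation to fractional Gaussian noise, using the mapping β = 2H − 1 between the power-law spectral slope and the Hurst exponent; it is not the discussants’ own code:

```python
import numpy as np

def ltp_series(n, hurst=0.8, rng=None):
    """Approximate a long-term persistent (fractional-Gaussian-noise-like)
    series by Fourier filtering: give white Gaussian phases a power-law
    spectrum S(f) ~ f^(-beta) with beta = 2H - 1."""
    rng = np.random.default_rng(rng)
    beta = 2.0 * hurst - 1.0
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                      # avoid division by zero at f = 0
    amplitudes = freqs ** (-beta / 2.0)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    x = np.fft.irfft(amplitudes * np.exp(1j * phases), n=n)
    return (x - x.mean()) / x.std()          # standardise to mean 0, std 1

x = ltp_series(4096, hurst=0.8, rng=1)
# Strong positive lag-1 autocorrelation, as expected for H = 0.8
print(np.dot(x[:-1], x[1:]) / np.dot(x, x))
```

Synthetic records like this are exactly what one needs to study, e.g., the finite-size behaviour of the ACF mentioned above.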
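A minimal DFA-1 implementation illustrates the idea (a bare-bones sketch for exposition, not the extensively tested implementations Bunde refers to): the series is integrated into a profile, a polynomial trend is removed within each window, and the slope of the residual fluctuation versus window size estimates the Hurst exponent.

```python
import numpy as np

def dfa(x, scales=None, order=1):
    """Detrended fluctuation analysis (DFA-1 by default). The slope of
    log F(s) vs log s estimates the Hurst exponent; the in-window
    polynomial detrending makes the estimate insensitive to simple
    monotonic trends, unlike the raw autocorrelation function."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # the "profile"
    n = len(y)
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(8), np.log10(n // 4), 12).astype(int))
    flucts = []
    for s in scales:
        m = n // s
        segments = y[: m * s].reshape(m, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, order)
            rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

print(dfa(np.random.default_rng(3).standard_normal(4096)))  # white noise: exponent near 0.5
```

For a stationary LTP record the DFA exponent approximates H, which is how the method separates natural fluctuations from a superposed monotonic trend.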
Rasmus, you say:
I will be glad to offer news to you, but I have to clarify that my phrase:
means that the climate CANNOT be determined by one scale.
Those who use a Markov/AR(1) model assume that it can be determined by one scale, not me. Note that in a Markov model the autocovariance for lag t is c(t) = b exp(−t/a), where a is the SINGLE time scale of that model. It is trivial to show that on a climatic time scale D >> a, the climate produced by a Markov process has variance Var[x] ~ a/D. This is just the same as in white noise, where the climatic variance is inversely proportional to the time scale of averaging. Thus, those who assume Markov behaviour for hydroclimatic processes also accept a white-noise (random) behaviour at climatic scales.
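This scaling is easy to verify numerically (a simulation sketch; the time scale a = 10 and the averaging scales D are arbitrary illustrative choices of mine):

```python
import numpy as np

# Numerical check of Var[climatic average] ~ a/D for a Markov (AR(1)) process
rng = np.random.default_rng(0)
a = 10.0
phi = np.exp(-1.0 / a)             # lag-1 coefficient for time scale a
n = 500_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

variances = {}
for D in (50, 200, 800):
    means = x[: (n // D) * D].reshape(-1, D).mean(axis=1)
    variances[D] = means.var()
    print(D, variances[D])
# Each fourfold increase of D reduces the variance of the D-scale
# average by roughly a factor of four: white-noise behaviour at
# climatic scales, in contrast with LTP where Var ~ D^(2H - 2).
```

Under LTP with H = 0.8, by contrast, quadrupling D would reduce the climatic variance only by a factor of about 4^0.4 ≈ 1.7, which is the whole point of the disagreement about trend significance.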
To repeat it once more, it’s not me who assumes a Markov model, a random climate, or a single scale. On the contrary, I stress that all these assumptions are wrong.
So, I hope with these clarifications my “news” is more informative for you.
Thanks for your comments on my post, Demetris. I think we need to live with our disagreements on a few points – but that’s fine. This is what drives science forward. I think one objective was to try to identify the exact points where we diverge in our interpretations. Here is my take on that; let’s start with this paragraph of yours:
“In other words, the climate models are inconsistent with the real world climate, which is characterized by change on all time scales. The LTP is the stochastic representation of irregular change and is also reflected in the autocorrelation function with high values of autocorrelation. LTP has been a dominant characteristic of Nature (see my post as well as Armin’s). Your graph shows that the models need to assume an external (anthropogenic) agent to produce what has been the rule in Nature all the time.”
I do not see how you can claim that climate models are inconsistent with the real-world climate. We know that they do reproduce the main important aspects of Earth’s climate, such as circulation patterns, wind patterns, and the past temperature trends. We also know that these kinds of models taught us the fundamentals of chaos theory.
I also argue that we do not know if LTP really is stochastic, and my demonstration showed that in fact external forcings such as GHGs do introduce LTP characteristics. Having long time series does not help when you do not know what is stochastic and what is forced.
Many processes in nature may look like LTP, and a good deal of it is due to physical processes and a chain of causality – not necessarily randomness. Autocorrelation functions can give you some indication about the degree of persistence, but cannot provide insights into the physics in isolation.
I want to ask whether you consider non-linear chaos – which does not have a long-term memory, and does not satisfy the condition that the value x(t') is influenced by all x(t < t') – as an aspect of LTP. If you think LTP includes chaos, then it arises from the internal physical processes. If LTP does not involve chaos, then I wonder how you'd distinguish between the two.
I appreciate Armin’s explanation, and it’s OK to disagree. I’m not convinced that there are methods that can distinguish noise and signal – or trends and fluctuations – when there is an external forcing. I do not believe that there is anything magic about LTP; like any other process, it is caused by physical processes and internal dynamics, such as changes in the oceans and non-linear chaos.
I guess we could set up some double-blind experiments, with systems where a forced trend or LTP was designed into the data, and mimic the analysis as I did with the ACF for two time series, one with and one without trends.
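The essence of such an experiment can be mocked up in a few lines (a hypothetical sketch with made-up parameters, not the actual analysis from the post): take the same white noise with and without an added deterministic trend and compare the lag-1 sample autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(42)

def acf1(x):
    """Lag-1 sample autocorrelation."""
    d = x - x.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

n = 1000
noise = rng.standard_normal(n)          # series without trend
trended = noise + 0.01 * np.arange(n)   # same noise plus a deterministic trend

a0, a1 = acf1(noise), acf1(trended)
# The deterministic trend alone drives the autocorrelation up,
# mimicking LTP even though the stochastic part is pure white noise.
```

This is exactly the effect under discussion: a forced trend inflates the sample ACF, so high autocorrelation by itself does not identify the cause.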
Question to both:
We know that the error bars of the global mean temperature estimate T(2m) are greater when the sample of thermometer records from the globe was smaller in the early part of the record. Hence the variance associated with T(2m) is greater in the early part of the record compared to the later part, due to greater statistical fluctuations. The reduction in the fluctuations associated with the sampling, however, must be considered ‘artificial’ because it does not reflect changes in the real world.
So the question: how do the LTP methods deal with imperfect data?
Also, we know that the forcing is a combination of the contributions from greenhouse gases (GHGs), volcanoes and solar forcing. These have different time scales: volcanic eruptions leave an imprint which may last for a few years, whereas GHGs have long-lasting effects. We also expect the trends to change over time, and that splitting the temperature record into, say, ~60-year intervals will result in different segments where the forcings are different. How do the LTP methods account for the presence of such forcings if you a priori do not know exactly what they are?
I fully agree with your assessment that we disagree. I also fully agree with your statement.
The latter is an important agreement, given a recent opposite trend, i.e. towards consensus building, which unfortunately has affected climate science (and not only climate science).
So, let us focus on our disagreements and illustrate them with a few numbers, when possible. You refer to my statement about climate models and you say:
Recall that my statement was based on your Figure 2, lower panel. I took your grey curve, which refers to non-changing conditions with respect to forcing. I was able to see that this is a perfectly Markovian curve, with autocorrelation ρ(t) = exp(-t/a), where a = 1.25 years (I guess your lag is in years, right?). Now, considering a climatic scale with averaging time scale D = 30 years, or D/a = 24, we can infer a characteristic correlation ρ(D) = exp(-D/a) = 3.8 x 10^-11, so small that it could be regarded as zero. As a result, even without equating it to zero, two consecutive values of climate (for two consecutive 30-year periods) are virtually uncorrelated (with exact calculations the correlation of two consecutive variables of your “climate” is of the order of 10^-2).
Yes, I contend that this behaviour, in which climate appears as an uncorrelated process, is inconsistent with the real world climate, in which consecutive variables are correlated. To summarize, the results of these calculations come down to three points:
(a) Your climate models produced a series whose stochastic structure at annual scale is Markovian with characteristic time scale a = 1.25 years.
(b) At a climatic scale of 30 years this corresponds to a climate uncorrelated in time.
(c) An uncorrelated climate (see my Figure 3 and Armin’s Figure 1, left panel) is static and inconsistent with the real world climate, which exhibits correlation and is changing (see my Figure 2 and Armin’s Figure 1, right panel).
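The order of magnitude claimed in these points can be reproduced exactly (a verification sketch using the stated values a = 1.25 years and D = 30 years, and exact autocovariance sums rather than simulation):

```python
import numpy as np

a, D = 1.25, 30            # Markov time scale and climatic scale (years)
r = np.exp(-1.0 / a)       # lag-1 autocorrelation of the annual series

i, j = np.meshgrid(np.arange(D), np.arange(D), indexing="ij")
var_mean = np.mean(r ** np.abs(i - j))    # Var of a 30-year mean (unit-variance process)
cov_next = np.mean(r ** (D + j - i))      # Cov of two consecutive 30-year means
corr = cov_next / var_mean
# corr comes out of order 10^-2, while exp(-D/a) ~ 3.8e-11:
# consecutive 30-year climate values are virtually uncorrelated.
```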
In my previous comment I replied to your sentence:
I wish to continue with the subsequent two, so that I have answered at least one paragraph of one of your comments. You say:
Right, I do not dispute that they reproduce the patterns you indicate, nor their usefulness as simulation tools. What I doubt is their ability to reproduce trends (as I explained above, they produce static climate) and the credibility of their predictions (or projections if you prefer the IPCC idiom) for the future.
Did they really teach us about chaos? What is your take on the following statement from Schmidt (2007, The physics of climate modeling, Physics Today, 60 (1), 72–73; emphasis added):
But if you imply that, irrespective of what climate models yield, the real climate is chaotic, then I agree with you.
You can see further information about my views and contributions with respect to climate and chaos in:
Scientific dialogue on climate: is it giving black eyes or opening closed eyes?, 2011 (it contains further discussion of the above quotation by Schmidt, 2007).
A random walk on water, 2010.
A toy model of climatic variability with scaling behaviour, 2006.
These publications show that LTP does involve chaos.
The grey curve represents a climate simulation with no forcings, and provides a benchmark for the one with forcings. We know a priori that this simulation is artificial in the sense that it will have different LTP behaviour to the real world, where real forcings are present. Hence, it’s a mistake to assume that this simulation is equivalent to the real world. Again, I think your problem is that you do not know what is signal and what is noise (you may call the latter music or whatever, but the normal term is noise).
You also misunderstand the way climate models work. Of course they simulate time evolutions which are chaotic – because they simulate weather fluctuations. When you regard the climate as a boundary problem, as Gavin does, then you do indeed see that climate becomes predictable and in that sense nonchaotic. Take this example: I do not know the weather for an exact day in December because of fundamental limitations due to chaos. However, I know that the weather statistics for the December month will show lower temperatures than now.
Moreover, the LTP issue concerns the time evolution and hence how the simulated weather changes from one day to the next. It concerns the ACF. The simulated weather in the climate models is chaotic. Such behaviour has been studied by numerous scholars, from simplified models to full-scale weather models. But the fact that the weather is chaotic does not imply that the response to a systematic forcing (boundary condition) is chaotic.
So my point is that the weather evolution is chaotic both in the climate model simulations and in the real world. My question to you then is whether you think that such chaos and LTP are fundamentally different, and how would you distinguish between them?
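One concrete way to see that chaos need not imply long-term memory (an illustration of the question, not of anyone's actual analysis here): the fully chaotic logistic map has an autocorrelation function that is essentially zero at every non-zero lag.

```python
import numpy as np

# Iterate the chaotic logistic map x -> 4 x (1 - x).
n, x = 100_000, 0.2
xs = np.empty(n)
for t in range(n):
    xs[t] = x
    x = 4.0 * x * (1.0 - x)

d = xs - xs.mean()
acf = [np.dot(d[:-k], d[k:]) / np.dot(d, d) for k in (1, 2, 3)]
# Deterministic chaos, yet the sample ACF is ~0 at every lag:
# no long-term persistence in the autocorrelation sense.
```

So a process can be fully deterministic and chaotic while looking completely memoryless through the ACF, which is what makes the chaos-vs-LTP distinction non-trivial.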
Do you believe I had not understood that? I feel I have to repeat what I said. If you do not feed these models with changing forcings, then they produce a static climate; that is clear from your graph. The real world climate has never been static. The anthropogenic GHG forcing was not present, say, during the Medieval Warm Period, but again the climate was changing. You had implied that yourself, e.g. in your EGU talk Climatic and Hydrological perspectives on long-term changes: a northern European view (2008), where for example you stated:
Had the climate been static, it would not generate favourable climatic conditions for Vikings.
Of course models are not identical with natural processes. Of course models produce artificial simulations—this is what they should and can do. We should never confuse models with nature. But we should demand from models to reproduce in their simulations some important elements of reality. Climatic models whose simulations are characterized by short-lived autocorrelations and, as a result, produce a static climate as a rule, and need external agents to produce change, are not consistent with the real world climate in which change is the rule.
This is exactly my point too. But there are forcings in the real world, be it changes in Earth’s orbit around the sun, geological, volcanic, changes in the sun, or in the concentrations of the greenhouse gases. So my point is that LTP in this simulation without forcing will not be the same as in the real world where there is forcing. The forcing introduces LTP. We see this by comparing the two simulations – the black and the grey curves.
So perhaps the real world climate would be static in the absence of forcings. We cannot tell from observations alone.
As far as I know, the favourable climatic conditions for Vikings were limited to Greenland and the North Atlantic. When we look at regional variations – as opposed to global – we see more pronounced variations linked to changes in the winds and ocean currents. The early Holocene was also influenced by a difference in Earth’s orbit, whereby the high north received more sunlight (forcing).
Could we agree on a few ideas?
In my view, a correct explanation requires both that the assumptions are valid and that the mathematics is correct. It is possible to have a beautiful and valid mathematical model or formula, but if the underlying assumptions are wrong, the answer will not be correct (unless one is extremely lucky). Conversely, one can start from a set of valid assumptions and then use incorrect mathematical equations. This too will lead to the wrong answer.
So I wonder if it’s useful to identify which aspect we are discussing here. I feel that both Demetris and Armin have impressive mathematical frameworks (although I’m a bit concerned with the issue regarding LTP vs chaos), but that their assumptions lead them down the wrong path.
Thank you Rasmus, Demetris and Armin, for three very informative essays on the statistical properties of global average temperature timeseries. In these, Rasmus argued that physical considerations are important in the interpretation of statistics. Demetris and Armin mainly discussed the application of statistical methodology to climate data. That leaves me curious how the latter two view the physical interpretation of their statistical analyses.
Rasmus showed (fig 2), using GCM data, that a deterministic trend causes long-term persistence to increase. Demetris welcomed this figure as “supporting what he has been saying for years”, though apparently with a different interpretation than Rasmus, namely that the unforced climate as simulated by GCMs (low LTP) is unrepresentative of the real world (high LTP). However, as Rasmus pointed out, the real world has been impacted by natural and anthropogenic forcings, so one can’t compare an unforced model run with observations that are impacted by forcings: of course they don’t agree! Going further back in time as well (e.g. the past millennia), climate forcings (e.g. changes in the sun, volcanism, land use, greenhouse gases) likely played a substantial role in influencing global average temperature. This severely hampers the quantification of internal climate variability based purely on the presence of LTP, since forced changes increase this persistence.
Yet implicitly or explicitly, Demetris seems to equate the presence of LTP to natural internal (unforced) variability (also phrased as chaotic behavior). How do you square this interpretation with the fact that forced changes to climate also increase this persistence?
Rasmus asks very much the same question in his latest comment:
“Can we agree on that forcing introduces LTP?”
“Can we agree on that forcing is omnipresent for the real world climate?”
These are important questions in order to establish areas of agreement and disagreement. I invite Demetris and Armin to comment on these.
As to the physical interpretation: internal variability would in all likelihood mean a redistribution of energy within the earth system. That ought to result in some components of this system cooling down while the surface is warming up. This is, however, not what is being observed: recently, the deep oceans (at least down to 2000 m depth) have also been observed to be warming (Balmaseda et al., 2013). The energy balance provides a powerful constraint on the global average temperature; that has been the case for past changes in Earth’s climate and it is currently still the case. Ignoring this aspect can lead to very strange interpretations (as I tried to show in my April 1st blog post a couple of years ago, by applying statistical reasoning devoid of any physical underpinning to imaginary data of my body weight).
Another question would thus be: If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal variability, where does this increase in energy come from?
Sorry that I could not answer earlier. There are several questions you raised, which Rasmus also raised, but most of them have already found an answer in the literature. For example, the fact that the ACF is affected strongly by trends is not new at all; we discussed it extensively in our 1998 PRL, where we used detrending methods to determine the true LTP of temperature data. What we called FA at that time is a method equivalent to the ACF and strongly affected by trends. For a more extensive discussion, see our 2001 Physica A paper (Kantelhardt et al.). Scientists who do not have experience with LTP and are not aware of the better detection methods usually think that this deficit of the ACF is a deficit of LTP, but this is just wrong. I would like Rasmus to read our early papers on LTP, published over the last 15 years in Phys. Rev. Lett., Phys. Rev. E, Journal of Geophys. Res. D, GRL as well as Nature Climate Change, to become more familiar with LTP, its definition, the methods to detect it, and its consequences for the occurrence of extremes. It is easy to find my references: just go to Google Scholar and type in my name.
It is important to use the same definition of LTP, but from what Rasmus writes, I understand that he has something in mind which differs remarkably from the well-established definition of LTP, which you can also find in the pioneering contributions of Benoit Mandelbrot. So, on this basis it is nearly impossible to have a meaningful discussion.
I would like to know from Rasmus, for example, what the evidence is for El Nino being LTP (scientists usually consider this a quasi-oscillatory phenomenon), and whether he then also considers other oscillatory (seasons) or quasi-oscillatory (sunspots) phenomena as LTP. If yes, there is a problem, since then he mixes up trends and fluctuations, which is the worst thing one can do in this field.
But let me come to the question of the origin of LTP. One of the origins is certainly the coupling of the atmosphere to the oceans. But the natural forcings also play a role. We showed in our 2002 PRL that models that only use GHG forcing cannot reproduce the proper LTP. We discussed the role of the different forcings extensively in our 2004 GRL paper, where we found that the natural forcings are important for reproducing the proper LTP. In contrast, GHG forcing did not contribute to the LTP when using the proper methods (NOT the ACF!). So we need the natural forcings for obtaining the correct LTP. We showed this also in our 2008 JGR-D paper, where we compared millennium runs with and without natural forcings. So we answered your two questions a long time ago: natural forcing plays an important role for the LTP and is omnipresent in climate.
For describing the resulting LTP quantitatively we do NOT need, in contrast to Rasmus's claim, to understand in detail the role of the many different natural forcings. We just need to understand the mathematical structure of the LTP and how we can model it. Then we can use the methods that we developed in our 2011 PRE to quantify whether a trend is natural or not.
Of course, it is nice to talk about the effect of the different forcings. But since everything is interwoven in a linear and even non-linear way, it is nearly impossible to separate the effects of the different forcings on the LTP in a satisfying manner. The pragmatic way is to learn what LTP is, to learn and even improve the detection methods that can separate the natural fluctuations from external deterministic trends, and then use the method we developed to estimate the effect of the trend. This way is doable and is actually common in climatology. The difference is only that in previous attempts the natural fluctuations were incorrectly considered Short Term Persistent, which significantly overestimates the trend significance.
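For readers unfamiliar with the detrending methods Bunde refers to, here is a minimal first-order DFA sketch (my own simplified version, not the exact pipeline of the cited papers). The fluctuation function scales as F(s) ~ s^alpha; for white noise the fitted exponent should come out near 0.5, while LTP would give a value well above it.

```python
import numpy as np

def dfa1(x, scales):
    """Fluctuation function F(s) from first-order detrended fluctuation analysis."""
    y = np.cumsum(x - x.mean())               # the 'profile' of the series
    F = []
    for s in scales:
        nseg = len(y) // s
        sq = []
        for i in range(nseg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)      # remove the local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                 # white noise: expect alpha ~ 0.5
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa1(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Unlike the raw ACF, the local detrending step is what makes this family of methods far less sensitive to superimposed deterministic trends.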
ps: If it is difficult to download my papers, I can email them to those who are interested in them.
Firstly, I’d like to thank Rob van Dorland, Marcel Crok, Bart Verheggen and the climate dialogue team for organising this discussion on what I consider to be an absolutely fundamental aspect of climate science. I’d also like to thank Rasmus Benestad, Armin Bunde and Demetris Koutsoyiannis for taking what must be a considerable amount of time to prepare their cases and contribute to the discussion. Much of what I would like to say has already been covered, but I would like to add a few brief comments.
Rasmus asked the following question and I’d like to add my own perspective:
I’m not sure “LTP methods” is quite how I would view the issue raised here. Although some methods may have an assumption of LTP in a data set, I think methods are tools to operate on the data, and the data carries the property “LTP” or “STP”. The more appropriate question should be, I think, what is the consequence of applying a method (any method) on a data set with LTP present. This may not be clear so I will give a simple example to help illustrate the point I am making.
We can calculate the sample standard deviation of a data set easily, and this has an exact value. And we can do this whether the data has short- or long-term persistence. We then often interpret this sample standard deviation as an estimate for the population standard deviation. In this case, there is some error, because the sample standard deviation will not exactly match the population standard deviation.
And we are all taught in statistics class that when calculating the sample standard deviation as an estimate of the population standard deviation, we should divide by (n-1) rather than (n) to ensure the estimate is not biased. But this is only true for estimates of standard deviation where the samples have sufficient independence (for example, either for white noise, or for autoregressive systems where the data set is much longer than the characteristic time constant). If we have a data set containing LTP, using the sample standard deviation to estimate the population standard deviation is biased even if we do divide by (n-1).
The example I give here is a standard method that can be applied to all data sets. It is not an “LTP” method or an “STP” method, it is simply a method. What we need to take great care with is the consequence of applying this method to an LTP or STP data set because the properties of the method may differ between the two. And in this regard, there is great danger with rules that we often take for granted – such as dividing by (n-1) to get an unbiased estimate – because those are things most easily missed, and can easily mislead.
How various methods interact with LTP data sets can be assessed relatively easily, for example through Monte Carlo simulations.
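For fractional Gaussian noise the bias described above can even be written down in closed form, so no Monte Carlo is needed (a sketch resting on the relation Var(mean of n values) = n^(2H-2) for unit-variance fGn with Hurst exponent H, which gives E[s²] = n/(n-1) · (1 - n^(2H-2))):

```python
# Expected value of the (n-1)-divisor sample variance of unit-variance
# fractional Gaussian noise with Hurst exponent H, using the exact
# relation Var(mean of n values) = n^(2H - 2).
def expected_sample_var(n, H):
    return n / (n - 1) * (1.0 - n ** (2 * H - 2))

n = 100
iid = expected_sample_var(n, 0.5)   # H = 0.5: no persistence -> unbiased
ltp = expected_sample_var(n, 0.9)   # strong LTP -> variance badly underestimated
```

For H = 0.5 the usual (n-1) correction works exactly, but for H = 0.9 and n = 100 the sample variance still underestimates the population variance by roughly 40%, illustrating Doug's point.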
Bart, from your comment:
Firstly: I do not understand why you think internal variability would be likely to result in a redistribution of energy. Let me give you an example of why I think this is not a useful assumption.
A key factor in internal variability is cloud cover. I like to choose clouds as an example because they are often used when we teach courses on fractals; a cloud is a classic example of a natural fractal, from the fluffy bits around the edges, to the clumps, to the whole cloud itself (and this is, of course, a product of LTP). We often just show a single cloud as an example, but the limits do not stop there. The fractal properties of clouds extend not just within a cloud, but spatially from cloud to cloud, then in the cloud cover from region to region, and from continent to continent. Also temporally: from day to day, from year to year, from century to century, from epoch to epoch.
Changing cloud cover is a great example of LTP arising from internal variability in the climate system. But of course, clouds do not just move energy around; they change the earth’s albedo, and change the quantity of energy in the system. On all time scales, with a simple relationship governed by Hurst-Kolmogorov dynamics.
So I disagree with your assumption that internal variability would (likely) result in energy redistribution alone. I also suspect that if all external “forcings” could be held constant, the internal variability in the system would still see LTP-like temperature swings.
On the question of whether forcings “cause” the LTP: I also doubt this. I have a signal processing background and like to think of things in frequency space, and although this is not always the best space in which to understand LTP, I will use it as an example because I am comfortable with it – apologies for this!
Imagine we could have an extremely long data set of climatic conditions, with high resolution; a huge number of points. Now we look at the power spectral density of that data set. In log-log space, we will see perhaps thousands of individual peaks. The pattern of these peaks is important, and on data sets I have seen they all follow a consistent, simple pattern in which the magnitude of the peaks follow a simple straight line, with the lowest frequency peaks being highest and the highest frequency peaks being lowest (classic LTP behaviour).
If each of these peaks had an associated forcing – for example, an orbital explanation, or some other mechanism – I would expect each forcing to be different, and for the peaks not to line up. For example, we can estimate the forcing associated with the orbital parameters easily, and we should expect the magnitude of the peaks to follow the same order.
But in all cases I have seen – the climacogram in ref. 1 below being a great example – the peaks line up in a strict descending order. While it is theoretically possible that the “forcings” may have aligned themselves to do this, it would be an extreme and unlikely coincidence. This is why I find the “forcing” explanation deeply unconvincing, and the “LTP” explanation far more credible.
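A small spectral sketch may help here (my own toy construction, with arbitrary time scales, not an analysis of any data set mentioned above): superposing just a few Markov processes with well-separated time scales already produces an approximately power-law spectrum, i.e. a straight descending line in log-log space, over two decades of frequency.

```python
import numpy as np

def ar1_spectrum(f, a):
    """Spectral density of a unit-variance AR(1) process with time scale a."""
    r = np.exp(-1.0 / a)
    return (1 - r**2) / (1 - 2 * r * np.cos(2 * np.pi * f) + r**2)

f = np.logspace(-3, -1, 50)                  # two decades of frequency
S = sum(ar1_spectrum(f, a) for a in (2, 20, 200))  # three arbitrary scales

slope = np.polyfit(np.log(f), np.log(S), 1)[0]
# The summed spectrum is close to a 1/f power law (slope ~ -1):
# LTP-like scaling emerging from a superposition of short-memory parts.
```

This cuts both ways in the debate: the neat log-log alignment can arise from internal dynamics spanning many scales, without each spectral feature needing its own forcing.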
ref 1. Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Demetris, Rasmus, Armin,
Thank you very much for your points of view and your responses to each other. Although it is worthwhile to discuss the performance of climate models, the key point of this discussion is the determination of the significance of observed trends (in global mean temperature) beyond what would be expected from internal variability only. We can distinguish the following opposing views:
1) Using statistical models only
2) Using statistical models in combination with constraints using physical knowledge of the climate system (e.g. internal variability, energy balance)
I think Demetris is going for option 1, while Rasmus favors option 2. I am not so sure about Armin’s opinion. Armin, can you make a statement on this subject?
A second and related point is the choice of the statistical model, in particular with respect to the distinction between trends and fluctuations. So, let’s take the method of analysis of the statistical properties applied to the global mean temperature as described in section 5 of the Markonis & Koutsoyiannis (2012) paper (hereafter MK2012):
In order to construct the climacogram MK2012 compute the standard deviations σ(k) by cutting the time series in N samples of length k (in years). As a test for their method let’s consider two time series: 1) an upward trend in temperatures (white line) and 2) a fluctuation with a two times stronger slope in the first half of the time interval and the same, but negative slope in the second half of the time interval (blue line, see figure).
Both time series have the same standard deviation for any value of k: σ(k) = bkN/√12 (for N >> 2), where b is the slope of the trend line. Therefore, the trend and the fluctuation both project onto the same curve in the climacogram and they are indistinguishable.
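This construction can be checked numerically (a sketch with arbitrary slope and length; the climacogram here is taken as the standard deviation of non-overlapping block means at scale k, following the description of MK2012 above): the monotonic trend and the up-and-down fluctuation produce practically the same climacogram.

```python
import numpy as np

def climacogram(x, scales):
    """Std of non-overlapping block means of x at each averaging scale k."""
    return np.array([np.std([x[i:i + k].mean()
                             for i in range(0, len(x) - len(x) % k, k)])
                     for k in scales])

b, n = 0.01, 1200                      # slope and series length (arbitrary)
t = np.arange(n)
trend = b * t                          # monotonic trend with slope b
tri = np.concatenate([2 * b * t[:n // 2],                    # slope +2b up ...
                      2 * b * t[n // 2 - 1] - 2 * b * t[:n - n // 2]])  # ... then -2b down

scales = [1, 2, 5, 10, 20]
ct, cf = climacogram(trend, scales), climacogram(tri, scales)
# ct and cf agree to within a fraction of a percent at every scale:
# the climacogram cannot tell the trend from the fluctuation.
```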
In terms of energy changes of the climate system (in case no external forcing is compensating), signal x(t) implies an increasing energy loss, while signal y(t) implies no net energy loss, since at time t=kN the mean global temperature has returned to its initial value.
Therefore, I would like you to react on the next three statements in an attempt to focus the discussion:
1) The method of MK2012 doesn’t distinguish between trends and fluctuations. In other words, trends in global mean temperatures are considered to be fluctuations in the climacogram. Conclusions about the behavior of the climate system using this climacogram lack energy balance considerations.
2) The variation in time of the global mean temperature in the absence of external forcing causes an energy imbalance that works to restore the temperature to its equilibrium value (where outgoing energy equals incoming energy).
3) If energy considerations are taken into account, i.e. on the basis of estimated external forcings, significance levels of measured temperature trends will be met sooner than on the basis of the pure statistical method of MK (2012), because the latter is prone to misinterpreting forced changes as internal variability. In other words, the MK2012 method is not suitable for determining the significance of an observed trend.
I want to express my strong agreement with Rasmus and Bart’s request that Koutsoyiannis and Bunde need to address possible physical explanations of what they are claiming regarding “long-term persistence”. The Earth is not a purely mathematical system; it is a dynamical physical system with varying environmental conditions – anthropogenic effects among them. If you ignore differences in important physical variables at different times and run a purely mathematical analysis of the system, you are essentially producing garbage, just as would be true of any scientific analysis of an experiment with uncontrolled variables.
Look at the Nile river level example Koutsoyiannis uses for instance. Koutsoyiannis mentions “land use” among the century-level changes likely to affect water levels – if that could be a significant factor, then that’s an important non-natural element that makes any statistical analysis of the water level non-informative about *natural* climatic long-term persistence. There could be something really interesting in here – perhaps the physical explanation for the ups and downs is some long-term oceanic oscillation cycle and the land-use changes are not relevant. But you have to do a *quantitative analysis* of these different factors to determine that – just as climate scientists do quantitative analyses of “forcings” to decide what’s important at the scale of the climate as a whole. Without that underlying quantitative analysis of the physical parameters of the system, and some indication of confidence that you have isolated away large-scale non-stationary change (local land use, solar and volcanic forcings, etc.) I can’t see how anything concrete can be obtained from this sort of analysis at all.
Has this level of quantitative analysis been done? Or is there a better example of long-term persistence in climate-relevant observables that is more isolated from land use and other human and climate-forcing factors?
Or better yet – if standard climate models can’t reproduce this sort of long-term-persistence that they think they see – what sort of physical model *would* allow for it? What changes would be needed in models to allow for this sort of thing? Right now the arguments seem very hand-wavy and unconvincing to me at least.
Thanks to everyone for their time and comments.
The introduction mentions warming over the past 150 years in the same paragraph as referencing an IPCC conclusion ‘it is extremely unlikely (<5%) that recent global warming is due to internal variability alone'. However, it should be made clear that this attribution statement was specifically addressing 'global climate change of the past 50 years', which would be ~1955-2005, not the past 150 years. This is important because it ties in with Bart's comments regarding warming of the oceans – the reason a strong attribution statement was made specifically only for the last 50 years was the availability of observations relating to ocean warming below the surface.
With that in mind it seems important, if these statistical approaches are attempting to offer an alternative attribution framework from the IPCC's, that they address relevant climate change indicators other than near-surface and surface temperature.
I'd like to also emphasise Rasmus' point regarding the MWP in relation to an unforced preindustrial control run: the climatic conditions of the Medieval period did occur in the context of forcings – almost all natural of course: orbital, solar, volcanic, small variations in methane, CO2 and probably aerosols (the latter three could be regarded as biogeochemical feedbacks in this context). Therefore an unforced model run doesn't say much of anything about the model's ability to reproduce the climate of the Medieval period.
Thanks a lot for all the interesting and very informative posts.
I would think that besides the statistical significance of temperature trends, physical and chemical laws would be important. A ‘long-term memory’ in the climate system should also be based on the laws of nature. Dr. Benestad pays a lot of attention to the physics of the climate system, but I’m missing that in the posts of Dr. Koutsoyiannis and Prof. Bunde.
So some general questions came to my mind.
Is it a mere coincidence that after the rising of greenhouse gas concentrations in the atmosphere temperatures seem to rise also, when physics tells us they should rise?
Or, is it a mere coincidence that e.g. the amount of sea ice in the world is dropping, that sea level is rising when physics tells us this should happen?
From the recent Pages2K paper and Marcott et al 2013 I conclude that during the second part of the Holocene global temperatures were gradually dropping until some 150 years ago, when this decreasing trend changed into an increasing trend. Can this have happened by chance, when it seems very plausible from the change in radiative forcings?
What is the statistical possibility that all parameters seem to change in one direction, the direction that physics tells us? It is obvious that several parameters in the climate system are linked to temperature. Should all these parameter changes be taken into account when looking at the relative magnitude of the three types of processes: internal variability, natural forcings and anthropogenic forcings?
Until now I avoided reference to fundamentals of science, although several of Rasmus’s comments offered me this temptation. Instead, I tried to refer to some of my papers which provide my views on such fundamentals, for example the Random Walk on Water. But as I understand from Bart’s comment and particularly the link to his example on his Weight Gain Problem, my references did not work. It is thus unavoidable to clarify my views on some fundamental issues. I will try to clarify my positions in terms of several dichotomies, some of which have been used or implied by Rasmus and most recently by Bart.
1. Models vs. reality
In my view this is a true dichotomy.
My grandson has taught me that the virtual reality, e.g. in computer games, can be fascinating. Of course he and his playmates are able to distinguish it from the real-world reality; for example they are well aware that, in contrast to their computer games, reality does not offer a pool of additional lives if one dies.
On the other hand, in scientific conferences I have often seen graphs mixing up observational data of the past with model projections of the future and speakers presenting model projections as if they were reality. I have seen IPCC texts, scientific publications and policy documents speaking about the conditions in 2100 using “will” without adverbs like “likely”, “probably”, etc., e.g. “extreme events will become more frequent”.
Furthermore, it has been very common to use concepts of stochastics, i.e. concepts pertinent to models, as if they were real objects. Concepts like probability density function, autocorrelation function, stationarity, and many more, apply to the world of models, not to the real world. They build upon the concepts of a random variable, a stochastic process, an ensemble, etc. These are abstract mathematical objects, not objects of real life. Large-scale real-world processes, like the climatic processes, have a single life, a unique evolution, and are not repeatable. There are no ensembles (pools of many lives) in real-world processes. The idea of an ensemble is useful to define abstract concepts like stationarity, but we should be aware that it applies to models only.
2. Physics vs. statistics
In my view this is NOT a true dichotomy.
I think the language of physics is (or at least includes) mathematics. In mathematics we write 1 + 2 = 3, 1 x 2 = 2, etc. Likewise, in physics, 1 kg + 2 kg = 3 kg or 1 kg x 2 m/s^2 = 2 N.
Sometimes addition and multiplication are not enough to study physical phenomena. Thus, we may use for instance differential equations. We may also use more abstract concepts, like random variables to represent uncertain quantities, or stochastic processes to represent uncertain quantities evolving in time. Further, we may use statistical methods to estimate these quantities from measurements. This does not mean that we departed from physics and we landed on another continent which is statistics or stochastics. We still live in physics. As long as we have the feeling that we are doing physics when we add certain numbers of kilograms, we may well have the same feeling whenever these numbers of kilograms are uncertain and we opted to treat them as random variables.
As we accept that regular addition should be done correctly, we should also use correct mathematics when we work with random variables. For example, if x, y and z are random variables related by x + y = z, we should be aware that Var[z] = Var[x] + Var[y] + 2 Cov[x,y]. This is different from regular addition. Neglecting the last term, the covariance, may result in dramatic errors.
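The covariance term is easy to verify numerically. The sketch below is my own illustration (not from the dialogue), using two hypothetical correlated variables:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two correlated random variables: y shares a component with x,
# so Cov[x, y] is clearly non-zero.
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)
z = x + y

lhs = np.var(z)
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y)[0, 1]
naive = np.var(x) + np.var(y)  # neglecting the covariance term

print(lhs, rhs, naive)
```

Here the naive sum underestimates Var[z] by 2 Cov[x,y], roughly a third of the total variance in this example.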
Furthermore, we may give these peculiar quantities, the covariances, a sound physical meaning. This is the case for example in Reynolds stresses in fluid flows, which are none other than covariances of fluid velocities.
Finally, we may build sound physical theories based on statistics. For example, the big progress in thermodynamics, which also constitutes the foundation of climatology, happened when statistical thermophysics was able to dominate over the funny notion of the caloric fluid.
In other words, statistics is physics. For simple physical problems, in which quantities are exact, statistical considerations are not necessary. In complex systems statistics within physics is indispensable.
3. Signal vs. noise
In my view this is NOT a true dichotomy in geophysical sciences, while it is meaningful in electrical engineering and telecommunications.
Excepting observation errors, everything we see in climate is signal. The climate evolution is consistent with physical laws and is influenced by numerous factors, whether these are internal to what we call the climate system or external forcings. To isolate one of them and call its effect “signal” may be misleading in view of the nonlinear chaotic behaviour of the system (see also dichotomy 5 below).
My reasoning as to why I regard this dichotomy as misleading has been set out in earlier comments. To repeat it as briefly as possible: let us assume that it is possible to separate the “signal” (the anthropogenic influence) from the “noise” (whatever that is). Would the stochastic properties of the “noise” be different from those of the whole climate? As I wrote extensively earlier, the properties of the “noise” alone can easily be seen in older periods, those not affected by the “signal”. And it seems that the stochastic properties remained unaltered. In contrast, climate models, which allegedly can distinguish “signal” from “noise”, yield a “noise” which is fully inconsistent with past climate.
4. Trends vs. fluctuations
In my view this is NOT a true dichotomy.
I am not aware of a rigorous definition of the term “trend”. I think it is used in a loose and colloquial manner. Indeed, loosely speaking and with reference to my Figure 2 above, we could say that between AD 640 and 780 there was a falling trend in the Nile’s water level. However, in view of the longer 849-year record, we may rather say that this was part of a long-term fluctuation. On the other hand, people who lived some decades after the recording of the Nile River level had started, i.e. in the 7th-8th century, may have called this an unprecedented trend, and may have worried that it would continue in the future, that it would have catastrophic effects, etc.
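The point can be made concrete with a toy series (my own illustration, not an analysis of the actual Nile record): a straight line fitted to a short window of a pure long-period fluctuation registers as a convincing “trend”, while over many periods the fitted slope vanishes.

```python
import numpy as np

T = 800.0                 # period of the fluctuation, in "years"
t = np.arange(0, 10 * T)  # ten full periods
series = np.cos(2 * np.pi * t / T)

# A straight line fitted to a 140-"year" window on the falling limb
# (cf. AD 640-780 in the Nile record) vs. fitted to the full record.
window = slice(0, 140)
slope_short = np.polyfit(t[window], series[window], 1)[0]
slope_long = np.polyfit(t, series, 1)[0]

print(slope_short, slope_long)  # a marked "trend" in the window, none overall
```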
From a scientific point of view, if we do not provide rigorous definitions of the terms we use, it may be difficult to discuss in a constructive manner.
5. Linearity vs. nonlinearity
In my view this is a true dichotomy.
It is possible that our understanding of complex natural phenomena has been influenced by that of simple systems, particularly those which can be effectively modelled by linear differential equations. In those systems, solutions corresponding to different causes/perturbations can be added together to form the solution corresponding to the combined effect of all causes. Reversely, we can allocate a weight, with respect to the combined effect, to each of the causes.
In more complex systems (yet the most common ones), whose study requires abandoning linear models, the contribution of each cause or forcing is not straightforward. A colleague from Mexico offered me this illustration: “A typical example that I give to my students, because it is (unfortunately…) well-understood here in Mexico, is the following: If someone is being machine-gunned by two people at the same time, it is objectively impossible to quantify the contribution of each killer to that person’s death, since the wounds caused by each one of the two would have killed the person anyway, even in the absence of the other killer”. If we wish to go backward to the causes of the causes, examining for instance how these killers acquired the machine guns, how they happened to be in that place at that time, etc., things become even trickier.
Even worse than this, in chaotic systems described by nonlinear equations, the notion of a cause may lose its meaning as even the slightest perturbation may lead, after some time, to a totally different system trajectory (cf. the butterfly effect).
6. Stochastic vs. deterministic models
In my view this is a true dichotomy.
According to my view, things do not happen spontaneously. Nor are natural phenomena infected by a virus of randomness that, upon infection, turns them from deterministic to random. There is no such virus. Rather, physical laws hold true all the time.
Whenever we are able to use deterministic models to describe nature and to find solutions which are in good agreement with reality, we don’t need any stochastic descriptions. We use the latter only when the deterministic solutions fail.
Let us consider Bart’s Weight Gain Problem. The problem may have different versions. For example, one version would be to explain why he gained weight during the last decade. If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.
Another version of the problem would appear if he tried to predict the future. In this case such a deterministic model may not work. Perhaps he would then think of constructing a macroscopic, rather than detailed, model. He would perhaps recognize that, in principle, whatever effort he puts in, the model will not be accurate. Consequently, he may think of changing the representation of his weight from an exact variable to a random variable. Using a representation by a random variable does not mean that his weight was infected by the virus of randomness or that the law of mass conservation ceased to hold. A random variable is just an uncertain variable. Nothing more and nothing less than this. The modelling framework has now become stochastic. Once he uses a stochastic framework, he may also use the stochastic toolbox, which contains tools such as averages, variances, covariances, autocorrelations, power spectra, and many more. He may try to infer the stochastic properties of the future after fitting the stochastic model on stochastic properties (not necessarily specific values) of the past. He may further try to find and study time series of other people’s weights, perhaps of people of greater age, in order to see whether age matters or not in the evolution of weight, etc. All these may help more than the law of mass conservation in the modelling, although we can be sure that this law will always hold true.
In a few words stochastic modelling is modelling under uncertainty. It is not denial of physical laws. With reference to dichotomy 3 above, stochastics describes the signal and does not need a decomposition “signal + noise” to work.
Dear colleagues, you may feel free not to adopt my views, to think that they are heretical, or to characterize them however you wish. However, I hope you will recognize that I cannot contribute to discussing questions that distinguish statistics from physics; I cannot accept that using a stochastic modelling approach violates or contradicts physics; I cannot offer much in a discussion about separation of signals and noises; and I cannot offer much in answering questions that are formulated using undefined terms and concepts.
Thanks very much for the excellent example. However, I believe instead of illustrating some weakness in Markonis and Koutsoyiannis (2013) as you may imply, it illustrates that restricting our vision and making improper use of stochastics can be dangerous.
From your graph I assume that what you call “fluctuation” is a periodic pattern with period kN and what you call “trend” is a line that extends unaltered into the future and the past. Without having the future and the past in mind, we cannot call your “fluctuation” a fluctuation; it could well be two consecutive trends, continuing into the past and the future by extrapolating the two segments as straight lines.
So you chose a time window equal to one period (kN) to view the two cases and you find that the two yield the same climacograms, which is correct. If one wished to restrict the vision further, one could perhaps take a time window of size kN/2 and see two straight-line trends, without any hint of a fluctuation.
But what I propose is the opposite: to widen our vision to, say, an order of magnitude farther (e.g. 10 kN). Or even better, try to see how the two cases behave asymptotically, as time tends to infinity.
We will see that the “trend” case gives a constant climacogram (nb., here, as in Markonis and Koutsoyiannis 2013, I speak about the temporally averaged process, not the cumulative one), not changing with time scale k. For small time scales, the “fluctuation” case will also yield a fairly constant climacogram. However, for large time scales, not only will it give a decreasing climacogram, but the rate of decrease will be very steep, much steeper than in the case of white noise. I have not made the exact calculations for this case, but I guess the behaviour will be virtually the same as what you see in Figure 8 of Markonis and Koutsoyiannis, the series labelled “harmonic”. That is, I guess we will have an envelope curve with a slope equal to -1 (vs. -0.5 for white noise).
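These asymptotic slopes can be probed numerically. Below is a rough sketch (my own, with hypothetical parameters, not a calculation from the dialogue): the empirical climacogram of white noise follows the -0.5 log-log slope, while that of a pure harmonic falls off more steeply once the aggregation scale exceeds the period.

```python
import numpy as np

rng = np.random.default_rng(0)

def climacogram(x, scales):
    """Standard deviation of the k-averaged process at each scale k."""
    return np.array([
        x[: (len(x) // k) * k].reshape(-1, k).mean(axis=1).std()
        for k in scales
    ])

n = 2 ** 16
scales = np.array([1, 4, 16, 64, 256])

noise = rng.normal(size=n)
harmonic = np.cos(2 * np.pi * np.arange(n) / 20)  # period of 20 steps

cg_noise = climacogram(noise, scales)
cg_harm = climacogram(harmonic, scales)

def loglog_slope(cg):
    """Log-log slope between the smallest and largest scale."""
    return (np.log(cg[-1]) - np.log(cg[0])) / np.log(scales[-1] / scales[0])

print(loglog_slope(cg_noise), loglog_slope(cg_harm))  # ~ -0.5 vs. steeper
```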
However, there are additional problems here. For the “trend” case we have abused the notion of standard deviation and hence that of the climacogram. Indeed, taking some points on a line (straight or whatever) we can calculate a numerical value of a standard deviation using the classical statistical formula. But does this represent anything in a stochastic theoretical framework? The answer is negative. The process is simply nonstationary, so we do not have the right to treat the points of the time series as if they were generated by a stationary and ergodic process and to use the statistical formula that applies to stationary processes.
Another point is that we would not need at all to treat this case in a stochastic framework. It would suffice to describe it deterministically, fully avoiding reference to standard deviation.
A final point is that, if a “trend” like this were a realistic representation of the climate, we would not be here to discuss it. Loosely speaking (as in my previous comment), local trends appeared throughout all of Earth’s history, but these were parts of fluctuations. A consistent trend would lead to runaway behaviour. That is why it is better to use stationary descriptions within stochastic models of nature (see additional justification in Koutsoyiannis 2006, 2011).
The “fluctuation” case could, loosely speaking, be classified as a stationary process, under the terms explained in Appendix 1 of Markonis and Koutsoyiannis (2013). However, again we would not need to use stochastics if the “fluctuation” were regular and hence describable in deterministic terms. The need for a stochastic description arises when the period and amplitude of the fluctuation vary, and when we have many scales of fluctuation. The synthesis of many scales of fluctuation results in a process with LTP, as described in the section “A physical explanation” in Koutsoyiannis (2002).
My conclusion is: let us not restrict our vision. Note that Markonis and Koutsoyiannis used the widest possible time windows for the available instrumental and proxy records.
Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47 (4), 573–595, 2002.
Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 324, 239–254, 2006.
Koutsoyiannis, D., Hurst-Kolmogorov dynamics and uncertainty, Journal of the American Water Resources Association, 47 (3), 481–495, 2011.
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Thanks for your clarifications, Demetris. I do note however that you didn’t answer the questions posed by Rasmus and myself:
Can we agree that radiative forcing (be it natural or anthropogenic) introduces LTP?
Can we agree that radiative forcing (be it natural or anthropogenic) is omnipresent for the real world climate?
If you deem a substantial portion of recent (past ~150 years) global warming to be due to internal (unforced) variability, where does this increase in energy come from?
Could you please briefly comment on these specific questions?
Regarding the different dichotomies that you mention: I certainly agree that physics and statistics are not a dichotomy: both are needed to make sense of the climate system (and of the world around us in general). That is also the point that Rasmus is making, if I understood him correctly. In your reply to Armin, you questioned a statistical result (a strong difference in LTP of land and sea data) based on physical reasoning. That is also what I am doing when I question the dominance of unforced internal variability in explaining the current warming (which is how I interpret your position), since I find it inconsistent with basic energy balance considerations (hence the third question above).
Regarding my weight gain analogy, you wrote “If he kept detailed records of inputs (how many brownies, chocolate fudge cakes, etc. he ate) and outputs, he might be able to make a detailed deterministic model to describe the evolution of his weight.” However, based purely on statistical analyses, the evolution of the timeseries was interpreted to be due to stochastic variations. This would presumably still be the case even if I had detailed records of all inputs and outputs (since the argument was made purely on statistical grounds). Now with the climate system, we do have some (imperfect) idea about energy inputs and outputs (see e.g. Trenberth et al or Hansen et al). These indicate that the energy content of the whole climate system has increased (i.e. radiative forcing acting on the system).
How could this not have influenced the global average temperature of the planet?
I do not avoid answering your questions, or anyone else’s. I am just subject to energy and time constraints. There is a lot of interesting stuff to read and comment on. As you see, I am working to contribute, but I have limitations (and additional duties). Thus I must set priorities. My first priority is that we should understand each other on general principles before we can discuss more specific issues. This is not easy. For example, when, after my extensive essay on dichotomies, in your last comment you say “the evolution of the timeseries was interpreted to be due to stochastic variations”, I feel that I have again to stress my general view: there is no such thing as “stochastic variation”. The variation is real, it was caused by a real cause, e.g. because you ate too much chocolate. In this respect I would never call it noise. It is signal.
Stochastic can be just the model that we use to describe the real variation. And as I said, such type of model is useful only if the deterministic description fails. Whenever one is obliged to use terms like average, variance, autocorrelation, significance testing etc., he must be aware that, most probably, he has already departed from a deterministic description and resorted to stochastic description.
A stochastic model should fully respect all laws and be consistent with all available information. If everything is known, then the stochastic model should reduce to the deterministic one. If there is uncertainty, then the stochastic model can describe it in terms of probability.
I am a little surprised that people are claiming that LTP is somehow “statistics only”, given that Demetris has published a paper in the peer-reviewed literature demonstrating how LTP arises from the principle of entropy maximisation, and, more importantly, both Armin and Demetris have demonstrated that LTP is a far better match to observations than STP. It is also rather unhelpful simply to assert that LTP needs a physical justification, given that one exists. If people wish to question the physical basis, it would be helpful if they explained which aspect of it they disagree with; I can see three possible areas: the principle of entropy maximisation, the analysis Demetris conducted, or whether the necessary conditions (constraints) outlined by Demetris in his paper apply. See ref. 1 below.
I will add some more detail now, based on Rob van Dorland’s quote below, as it is at least specific enough to address the points raised.
I do not see how Demetris’ view can be classed as “option 1”. Let me compare the two approaches.
Rasmus is using GCMs as the basis. These are deterministic models built on a dynamical core that numerically solves the fluid mechanics problem of the earth’s climate, overlaid with other relationships (insolation, ice and clouds, aerosols, etc.), some of which are based on analytical theory, others on approximations or parameterisations, and others again missing. Rasmus has run these models and demonstrated that STP internal variability is seen, as shown in his plots above.
Demetris has used a different approach. He has developed a stochastic model, based on the principle of entropy maximisation, combined with some simple, testable constraints, and identified under which constraints we expect to see white noise variability, STP variability, and LTP variability. We can see from the principles and constraints that we expect climate to exhibit LTP internal variability.
So both Demetris and Rasmus have applied physical laws and constraints of the system. Both have expectations of internal variability from these constraints. But Rasmus has concluded STP internal variability, Demetris has concluded LTP internal variability.
We can turn to the data to determine which of these theories is consistent with the real, observed climate. Which is something Armin (and many others, such as Cohn, Lins, Montanari, Halley, Rybski, von Storch, etc. etc.) has done for us. And the conclusion is that we see LTP present in the real climate system. This shows us Rasmus’ model is wrong, and falsified. It does not mean Demetris’ model is correct; but it seems the best model we have today.
The second point Rob makes regards energy balance. Any time series with a finite, defined population mean has a point of “energy balance”. So, for example, we can reject the idea that the climate is a random walk, since a random walk does not have a finite defined population mean, so is inconsistent with the principle of conservation of energy.
But LTP can be shown, analytically, to have a finite, defined population mean. So there is an “energy balance” present in a system with LTP internal variability. STP also has a finite, defined population mean, so this also passes the energy balance test. The difference being that the sample mean is a poor estimator of the population mean for an LTP series (for much the same reasons as my discussion of standard deviation above).
So I strongly disagree that adoption of LTP represents option 1 above. The adoption of LTP has a physical basis; it is a stochastic model that is consistent with the constraints of the climate system; the expectation operator yields a meaningful energy balance term. Clearly and unambiguously, LTP falls into option “2” above.
1. Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.
I’m trying to focus the discussion, and do not in any way mean to imply that you’re “avoiding to answer the questions”.
The dichotomy that is at the centre of this discussion is that between internal (unforced) variability and forced changes (where the latter could be natural or anthropogenic). And what LTP could tell us about this distinction (if anything).
Internal variability usually involves a redistribution of energy within the climate system (where the ensuing changes in surface temperature cause an energy imbalance which works to restore the system back to its equilibrium value where outgoing energy equals incoming energy; Rob’s second point in an earlier comment). Climate forcings involve a change in the energy balance (e.g. from changes in the sun, the albedo or the greenhouse gas loading), which causes the earth’s temperature to change.
One apparent disagreement about this distinction is whether the presence of LTP is related to the degree of internal variability. Rasmus argued that this is not necessarily the case, since forcings also contribute to LTP and because forcings are omnipresent in the temperature data.
I would be curious as to your and Armin’s view on this, time permitting of course.
Bart, you say:
And I have already explained above why the assumption that internal variability can only redistribute energy is flawed. Clouds, as just one example (there are many more), which exhibit LTP and sensitivity to initial conditions and so can only meaningfully be described as internal variability, can change the albedo of the planet. There is then nothing to “restore the system back to its equilibrium”; there is just the future trajectory of the climate, which will continue to exhibit LTP and will continue to be highly nonlinear and sensitive to initial conditions. And of course the climate will continue to have a finite, defined population mean (the “energy balance”), which will continue to be difficult to estimate from the sample mean.
I can understand that you would prefer Demetris’ answer to mine, as he is the expert on this topic and I am not, but it does become frustrating when the same fundamental misconceptions have to be addressed over and over again, especially when we are all limited by the time and effort we can put into this discussion.
Spencer Stevens writes “Changing cloud cover is a great example of LTP arising from internal variability in the climate system.”
Spencer takes no notice of Svensmark’s hypothesis, and the recent work done at CERN, on the potential connection between GCRs and cloud cover. He states absolutely that “changing cloud cover” is a “great example” of “internal variability”. Ignoring alternative hypotheses, which have the support of some empirical data, is not my idea of what the scientific method is all about.
Koutsoyiannis appears to agree he has little to contribute to substantive discussion here:
I’m sorry to hear this; dialogue is pretty difficult when one party completely refuses to respond to critical issues like these. Science is about explaining things we observe, not just sitting back and watching the world do whatever it does. The existence of long-term persistence in an observable implies some underlying cause that has slow variation: either an internal physical variable of the system under study that changes slowly, or an external parameter with similarly slow change. It is critical for our understanding to know which of these is the cause, and deeper analysis of the system has to be done to ferret that out.
Just to illustrate with the Nile river level example: let’s say we have two contrasting possible fundamental explanations for the long-term persistence seen there:
(1) changes in human use of the water and land around the river that vary over centuries
(2) variations in ocean currents and behavior, also on a century scale, with an impact on climate and rainfall over the Nile basin.
If (1) can be eliminated as a cause, leaving us only (2), or if there is other evidence supporting (2) (tree rings or other rainfall evidence in the area that matches the river level changes, for example), then that is a strong piece of evidence for real internal long-term persistence in the climate system, something which models evidently don’t capture.
But if (1) is the explanation, which can be examined for example by looking at patterns of settlement and culture during the periods of river level changes, then this example has nothing to do with long-term persistence in the climate as a whole, since the changes are caused by local human activity.
If you haven’t done the work to distinguish between (1) and (2), then observing LTP in the Nile river example tells us nothing useful about the climate, it is just as much garbage as a scientific experiment in the lab under conditions where important variables are not being controlled.
Jos Hagelaars writes “the amount of sea ice in the world is dropping”. This statement does not seem to accord with the most recent data we have. See http://arctic.atmos.uiuc.edu/cryosphere/. I think it would be a good idea for those who comment on this blog to ensure that their statements are in accordance with the most recent empirical data.
My point of showing a “trend” and a “fluctuation” over a limited time span is that by applying your method of constructing a climacogram you will lose information on the signature (time evolution) of the investigated time series.
What you are actually doing is cutting the time series into N pieces (averages) of length k years and determining the standard deviation of their distribution. For the same distribution, you may reconstruct time series with a different signature by putting the pieces back in a different order. This implies that you lose information on the signature.
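Rob's construction can be sketched in a few lines (my own toy example, not from the dialogue): the scale-k standard deviation is computed from the block averages alone, so reordering the pieces, which changes the series' time evolution completely, leaves it untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 10, 50
x = np.cumsum(rng.normal(size=k * m))  # a series with a clear time evolution

block_means = x.reshape(m, k).mean(axis=1)  # N pieces of length k
reordered = rng.permutation(block_means)    # same pieces, new "signature"

# The scale-k standard deviation cannot tell the two orderings apart.
print(block_means.std(), reordered.std())
```

Note, though, that a climacogram taken over a continuum of scales would generally distinguish the two orderings at scales larger than k.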
Time series (of global mean temperature) with different signatures may have different effects on the energy budgets of the climate system. The energy considerations are then, mathematically speaking, constraining the degrees of freedom you find with your statistical method.
So, can you confirm that by using your statistical method (MK2012)
1) you lose information on the signature of the time series
2) by adding energy considerations to your method, you would lose less information about the signature of the time series of global mean temperature?
Yes, of course, by using the standard deviation you lose information in comparison to the complete catalogue of values in the time series. However, using standard deviations at a continuum of time scales (the climacogram) you lose less information, or, if you allow me to put it differently, you lose the information that you want to lose.
I think it is general practice in the modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the real-world system. We try to find which the essential elements of the system are and we try to represent those, ignoring the less important. For example, in describing a mole of a gas we usually do not care about the positions and momenta of the 6 x 10^23 molecules it contains and prefer to describe the system in terms of macroscopic quantities like pressure and temperature, which by the way are statistical quantities. Certainly, the use of temperature and pressure is associated with a loss of information, but this is intentional, isn’t it?
Stochastic models are macroscopic descriptions. Now, the climacogram is one-to-one related to the autocovariance function and the power spectrum of a stochastic process. Thus, if you assume a model for the power spectrum of a process, you have a model for the climacogram; no more and no less information with respect to the modelled power spectrum.
The loss of information in stochastic modelling can be controlled. I believe if you can quantify the “signatures”, the “energy budgets”, the “energy considerations” and the “degrees of freedom”, you refer to, they can be represented in stochastic modelling. Of course this will need some work, but I think in principle it is possible.
If we accept that these quantities (the quantified “signatures”, “budgets” etc.) are somewhat reflected in the data, they are already there in the existing analysis. But if you can provide me with additional, quantified constraints, I will think how these could be taken into account.
I don’t think you’re understanding Rasmus’ point. As far as I can tell he’s not at all arguing that the real world exhibits only “STP internal variability”. He’s saying that the statistical behaviour being labelled “LTP” is an emergent property of forced GCM realisations, and that the effect is much weaker in unforced GCM realisations. That is, LTP is an expected consequence of forcing. As you’ve mentioned them, this was the primary finding of Rybski, Bunde and von Storch (2008) when looking at forced simulations (though apparently not including orbital forcing) of the past 1000 years versus unforced control runs.
Given that there is always forcing going on, “LTP” is always something which should be noticeable in real records of global or large-scale temperature. However, this is behaviour which is anticipated by GCMs, so I can’t see in what sense they would be falsified by that notion. I think this is why Rasmus talks about the potential for circular reasoning (though I’m not sure it’s the correct phrase in this circumstance): GHG forcing induces strong LTP behaviour, a strong LTP signal is identified in climate records, and GHG forcing is then dismissed as a climate driver due to the presence of strong LTP behaviour.
From another of your comments:
If each of these peaks had an associated forcing – for example, an orbital explanation, or some other mechanism – I would expect each forcing to be different, and for the peaks not to line up.
What are you using to inform your expectations for the consequences of forcing?
Arthur, you say:
I am sorry that you feel like this. Perhaps Rob, Marcel and Bart made a bad choice by inviting a person who has little to contribute. On the other hand, as I see from his comment, Bart agreed that physics and statistics are not a dichotomy and he did not express disagreements on what I wrote for the other dichotomies, so what is your point?
Please note that I am an engineer by profession and I would be unprofessional if I sat back and watched. Do you feel that I am sitting back and watching right now, or am I trying to interact with you and the other people involved in the climate dialogue?
Of course it has. But a slow variation at a single time scale does not suffice to generate LTP. A single time scale of variation results in Markovian dependence. You need many scales of change to get LTP. I refer you to my post above; I devoted effort and time to writing it, and I hope reading it would not be a waste of your time. In addition, I can refer you to some of my publications already mentioned above, investigating the relation of LTP with extremal entropy production and with change over a continuum of scales.
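The claim that one time scale of change gives Markovian (STP) dependence, while many time scales of change give LTP-like behaviour, can be illustrated numerically. This is only a sketch (Python with numpy; all names, parameter values and the choice of superposed time scales are mine): superposing AR(1) fluctuations with a spread of characteristic time scales is a classic construction that mimics LTP over a wide range of scales.

```python
import numpy as np

def ar1(rho, n, rng):
    """AR(1) noise (one characteristic time scale; Markovian/STP),
    normalized to unit stationary variance."""
    x = np.zeros(n)
    e = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

def blockvar_ratio(x, k):
    """Variance of k-term block means, relative to the scale-1 variance."""
    n = len(x) // k
    return x[:n * k].reshape(n, k).mean(axis=1).var() / x.var()

rng = np.random.default_rng(1)
n = 20_000

# One time scale of change: short-term persistence only.
single = ar1(0.7, n, rng)

# Many time scales of change (rho closer to 1 means a slower mode):
# the superposition retains variability out to much longer scales.
many = sum(ar1(rho, n, rng) for rho in (0.5, 0.9, 0.99, 0.999))
# blockvar_ratio(many, 100) stays far larger than blockvar_ratio(single, 100):
# the single-scale series loses its variability under aggregation.
```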
It is not difficult to infer that the dominant mechanism is the second, because we are speaking about the 7th–15th centuries, during which exploitation of the Nile was minimal in comparison to today’s. The dams were built only in the 20th century. Note that we also have a flow record of the Nile for the modern period (starting in 1870). The record is naturalized, meaning that hydrologists were able, from water-budget data, to reconstruct the natural flow discharge, taking human withdrawals etc. into account. The modern naturalized record verifies LTP. We also have records at other sites within the Nile basin, which again verify LTP.
If this is garbage to you, I am sorry; the only thing I can do is to invite you to make a better analysis and shed more light.
Demetris Koutsoyiannis – thanks for responding to my comment. However, you attack a straw man, not me, when you state:
– all I stated was that LTP implies an “underlying cause that has slow variation”, I said nothing about time-scale other than the word “slow”, and certainly had no intention of implying a single time-scale, or even a single cause – there has to be at least one causative element though. It is unnecessary diversions like this one that derail getting to the actual heart of matters; you would do better to focus on things that are actually important and respond to what people actually say.
That said, I would like to express appreciation for your concrete response to the questions I had regarding Nile river flow. Although there was no quantitative component to your response, and no citations on the topic, it sounds like, qualitatively, the issue of human land- and water-use change over the time period in question can be dismissed. If that’s true, the results are not garbage, so good.
So, then, what are possible underlying explanations for LTP in the Nile river flow case, or if there’s a better example, that? I have read enough about “maximum entropy” and the like to know such claims provide little to no physical insight to me. Buzz words are not explanations.
Is there any evidence for at least one thing in the physical system of river structure, precipitation, etc. that shows the sort of long-term-persistent slow-change behavior that could explain these observations? If precipitation changes are causative, do we have precipitation records that can be compared and show similar (and preferably matching) persistence over the century-level and longer time scales? And what could explain the long-term memory regarding precipitation?
Of course over hundreds of thousands of years we have the orbitally-induced ice age changes which certainly explain slow change at that level – and we know from longer-term temperature records there has been a general cooling trend for the past 11,000 or so years (until recently), presumably orbital-related as well. Those are the sort of forcing changes Rasmus discusses. We have volcanic and solar forcing changes on shorter time-scales. It’s very unclear from what you’ve done that you have any evidence of LTP originating from anything outside of these external factors – it would be very nice if you could make a clear statement on whether or not that’s true.
Armin Bunde – I’m interested that you state (in a comment):
– do I understand this to mean that historical runs from climate models DO show LTP “on all time scales” with these sort of Hurst exponents? Demetris Koutsoyiannis seems to think that’s not possible. What’s the reason for this discrepancy? Is it the issue of whether or not external forcings are being varied (a historical run presumably uses historical values of the forcings)?
I have no words to thank you for your comments. I could not reach the clarity (not to mention the level of English) of your comments. In particular, I warmly endorse your last comment assessing that my opinion may not be classed as “option 1” and clarifying that random walk cannot stand as a physically realistic model, while LTP can. I trust Bart, who seems to dislike unit-root processes, will appreciate the latter feature.
Not only does your comment make my life easier, in terms of effort to reply, but it also gives me a warm feeling of sharing similar ideas with a knowledgeable colleague, who, notably, has a signal processing background that I do not have.
Demetris, in your reply to Spencer you state that LTP can stand as a physically realistic model. What do you mean by that? What is your physical interpretation of the presence of LTP? Is a physical interpretation based solely on the presence of LTP possible at all? I do note that Spencer equates LTP with internal variability (rather than being the combined result of internal variability and radiative forcing). What is your view on that?
You state that I dislike unit root processes. That would be nonsensical, as LTP and unit roots are just statistical features of a time series. What I alluded to in my April 1st post is that certain interpretations of statistical features may be unphysical.
First, I am sorry if my failed attempt to inject a little bit of humour into the discussion annoyed you and if you found it nonsensical. Therefore, I’ll try to be serious in this comment.
Second, I would formulate your statement in a different way: strictly speaking, LTP and unit roots are not features of a time series but of a stochastic model. A time series can be different things, e.g. a series of observations of a natural process, a series of outputs of a deterministic or a stochastic model, etc. The time series alone cannot tell us that it exhibits LTP, STP, unit-root behaviour, or whatever. If it could, we would not be having this discussion.
If we do not know the exact dynamics that gave rise to a certain time series, different (infinitely many) stochastic models could be used to represent it. Statistical analysis of the time series will then tell us which of the models, e.g. an LTP model, an STP model, a unit-root model, etc., is more consistent with the statistical properties of the time series. Such comparisons are unavoidably inductive procedures, not strict mathematical proofs.
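The inductive comparison described here can be made concrete. One standard (if crude) diagnostic is the log-log slope of the aggregated variance, which yields a rough Hurst-coefficient estimate; the sketch below (Python with numpy; the estimator choice and all names are mine, offered only as an illustration) applies it to white noise, the H = 0.5 reference case against which LTP candidates are judged.

```python
import numpy as np

def hurst_aggvar(x, scales):
    """Rough Hurst-coefficient estimate from the aggregated-variance slope:
    Var[k-term mean] ~ k^(2H - 2), so H = 1 + slope/2 in log-log space."""
    v = []
    for k in scales:
        n = len(x) // k
        v.append(x[:n * k].reshape(n, k).mean(axis=1).var(ddof=1))
    slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(2)
H_white = hurst_aggvar(rng.standard_normal(50_000), [1, 2, 4, 8, 16, 32, 64])
# White noise gives an estimate near H = 0.5; an LTP-consistent series
# gives an estimate well above 0.5, an antipersistent one below it.
```

As the comment notes, such an estimate only tells us which stochastic model is more consistent with the series; it is an inductive procedure, not a proof about the underlying dynamics.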
Third, I fully agree with you that certain interpretations (and models) may be unphysical. This implies an additional, very powerful means to reject some of the models, the deductive logic.
As a first example, let us examine a unit root process. I must admit that I dislike even the term “unit root”, which over-stresses a minor mathematical feature of a process. The important characteristic of such a process is that it is nonstationary. A nonstationary process in which the mean and/or the standard deviation tends to infinity with increasing time can be rejected on the basis of deduction; there is no need to compare the statistical properties of the time series with the theoretical properties of the model.
The simplest case of such a nonstationary process is the random walk process, closely related to the Brownian motion. Spencer has explained why we should reject it:
I also quote another explanation by Lubos Motl from his blog:
Lubos adds some “damping” to the random walk to make it more physically realistic. He thus gets a Markovian process. This is indeed more realistic, but it classifies as an STP process. An STP process is not easy to reject on a deductive basis. However, as I explained in my main post, a Markovian process has some theoretical problems; quoting my main post:
For these reasons, explained in more detail in my main post, among nonstationary, stationary STP, and stationary LTP processes, the less unphysical are the stationary LTP.
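The deductive ground for rejecting the random walk, namely its unbounded variance, is easy to exhibit numerically. The following is only an illustrative sketch (Python with numpy; the ensemble size and length are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
steps = rng.standard_normal((500, 2000))   # 500 independent realizations
walks = steps.cumsum(axis=1)               # random walk: x_t = x_{t-1} + e_t

# The ensemble variance grows roughly linearly with time, without bound,
# unlike any stationary (STP or LTP) process:
var_t = walks.var(axis=0)
```

A physical quantity such as global temperature cannot wander off to infinity in this way, which is why the random walk can be dismissed by deduction alone, before any statistical comparison with data.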
So, dear Bart, do my main post and the additional explanations in my comments above give answers to your following questions:
If not, could you also see some of the papers I give in my references, particularly those I mention in my reply to Arthur (also cited in Spencer’s comments)?
I left one sentence of your comment unanswered. I will answer this soon (I have something else to do in the meantime).
Bart, in your latest comment you ask some interesting questions that may help to get to the root of the differences in our understanding of this topic. I’ll try to give some reasonably clear answers, although again these are my thoughts and you may be more interested in what Demetris has to say.
LTP is, in my view, a consequence of the dynamics (i.e., the physics) of the climate system. But LTP is pervasive. That is, once LTP is imprinted on a system, as we move to longer and longer timescales (e.g. climate), LTP will affect everything.
So, I don’t think I understand quite what you mean by “is a physical interpretation based solely on the presence of LTP possible”, it is a slightly strange way to phrase the problem. I think once LTP is present, it becomes necessary to interpret climate through the presence of LTP, since LTP is so pervasive.
I have used a number of terms in this discussion (internal variability, radiative forcing, energy balance), often with scare quotes (!). In doing so, I am trying to link concepts between conventional climate science and how these things manifest themselves in a non-linear system with Hurst-Kolmogorov (LTP) dynamics.
Internal variability I think can be a useful definition, but I perhaps need to be clear on what I mean. When I refer to internal variability I mean anything which is part of the non-linear, interacting climate system. If something can be both affected by climate, and affect the climate, and be sensitive to initial conditions, then I consider it to be a part of internal variability. Because of the interaction and sensitivity to initial conditions, the horizon of predictability of internal variability is very limited.
Any factor that can affect climate, but that climate does not affect, I think could be reasonably considered as an external factor. The orbit of the earth is a good example. We have a good understanding of the earth’s orbit; we have good evidence that it will be quite predictable out to perhaps a hundred million years from now, and that prediction does not require knowledge of climate. So I think this can be considered an external factor. Note that a change in an external factor will influence the climate, and send the climate on a different trajectory from the one it would have followed had the external factor not changed.
I have talked about radiative forcing but it is not, I think, a terribly useful concept for climate diagnostics. Radiative forcing as I understand it is simply a change in irradiance. In this sense, it is a simple and measurable value: I can place a pyranometer outside my door on a sunny day, and if it clouds over I can measure a change in irradiance of the order of hundreds of watts per square metre.
In this sense I need to clarify my comment. Of course, there will be a change in irradiance, because there are changes in irradiance all of the time. What I refer to when I talk about radiative forcing here is really what we might consider the external controls that climate scientists believe drive changes in equilibrium, such as solar output or GHGs. If we make no “artificial” changes to these, and just allow the climate to run, I think we would still see LTP (but with changes of irradiance happening all of the time).
I think the confusion is increased because although radiative forcing is in a sense measurable, it is a very poor climate diagnostic and it is often used (misused?) with the assumption that climate is linearly related to a suite of radiative forcings at the climatic scale. I think this assumption is problematic and can easily mislead. It becomes even worse when some changes of irradiance are declared as “feedbacks” rather than forcings when, in practice, essentially all internal variables are “feedbacks” in the context of a complex, non-linear system exhibiting LTP.
Demetris, you are too kind; despite English being your second language, your English is often better than mine (I see many grammatical errors on re-reading my own posts). But your insight into physical processes is far better than mine; I am still very much learning from your work, and it is a privilege to do so.
Arthur, this is unfair, Demetris has not stated it is “not possible” to get LTP from models. Indeed, Demetris has published a “toy” model of climate that exhibits LTP.
What we have talked about are specific cases, e.g. the example Rasmus kindly provided, which is an example of a model which clearly does not exhibit LTP.
Bart, you say:
I am not sure if Spencer equates LTP with internal variability. In my reading he says that internal variability can lead to LTP and he uses for that a nice example, the cloud cover.
I have tried to investigate the question of whether a system can exhibit LTP if its inputs are constant. You may read two papers about this question (2006 and 2010). As you can see, even a strictly constant input can give rise to LTP.
This brings us to Rasmus’s questions, which you also quoted in one of your other comments:
My answer to these would be: I agree that (changing) forcing can introduce LTP and that it is omnipresent. But LTP can also emerge from the internal dynamics alone, as the above examples show. Actually, I believe it is the internal dynamics that determine whether or not LTP will emerge. As a counterexample, I can imagine a hypothetical system with strong damping/stabilizing mechanisms, in which even an incoming signal exhibiting LTP would not be manifested in the system state. For the Earth system, the radiative forcing itself, being a balance of incoming and outgoing radiation, depends on the internal variability.
Thanks for answering the questions as posed by Rasmus (and repeated by me): “I agree that (changing) forcing can introduce LTP and that it is omnipresent.”
From this I conclude that you agree that the presence of LTP is not necessarily indicative of internal variability being dominant (since forcings also introduce LTP).
Is this a correct interpretation of your views?
Regarding physics vs statistics: They are both useful and needed to make sense of the world around us, hence there is no dichotomy. Statistics in the context of climate science is a tool to understand the physics. My repeated questions were an attempt to have you clarify what your statistical analyses mean in physical terms. Your answers re forcings and LTP (as quoted above) and about conservation of energy (earlier) are steps in that direction which I warmly welcome.
Dear Bart, as I wrote above
Please see also the publications I mentioned which demonstrate that the internal variability is sufficient to introduce LTP, even without external variability.
Otherwise, I welcome your recognition of statistics as a tool to understand physics.
Also, see Armin’s excellent comment about it.
Thanks for clarifying the nature of the different stochastic models; very useful to read. I appreciate that you accept that conservation of energy prevents the climate from behaving as a random walk. That’s a strong physical constraint and it’s important to be clear about its implications; thanks for adding this clarity from your part too.
You further state that stationary LTP is the least unphysical process by which to interpret the evolution of global average temperature. However, that doesn’t tell me anything about the physical processes which you deem responsible for the observed changes in temperature. Are these changes predominantly due to a redistribution of energy within the climate system, due to (natural and anthropogenic) climate forcings, or due to something else (e.g. fast feedbacks wandering off)? Does LTP imply anything about the warming being governed by unforced vs forced changes, or by anthropogenic vs natural factors? I’m hoping for an answer in physical, not statistical terms.
Thanks for joining in again. I appreciate you answering the questions posed earlier by Rasmus and myself: “Natural Forcing plays an important role for the LTP and is omnipresent in climate”. Is it correct to deduce from this that the presence of LTP cannot by itself distinguish between unforced and forced changes in climate? (i.e. LTP is not necessarily indicative of internal (unforced) variability)
You claim that natural forcings contribute to LTP while GHG forcing does not. That seems in contradiction to what Rasmus claimed (his fig 2). Hopefully Rasmus can respond with his view on this.
I am somewhat surprised that a trend (e.g. from GHG forcing or from whatever other cause) would not contribute to LTP; how could it not? Since the analysis method is purely based on the statistical behavior, I would guess that it’s purely coincidental that LTP is not increased by one (GHG) forcing whereas it is increased by other (natural) forcings. After all, if the GHG were from natural origin, it would have the same (according to you negligible) impact on LTP. So from that perspective the presence of LTP would not be a strong indication of anthropogenic versus natural causation, right? Moreover, human forcing is not limited to GHG, but also includes aerosol forcing and land use (albedo) forcing (both of which are thought to be net negative, i.e. moderating the warming from GHG). I’m unclear as to how that factors into your analysis and interpretation (but that’s perhaps a detail).
Could you expand on your view of LTP in light of Armin’s comments? Where does your definition differ from or agree with his? I am also curious to hear some more details re your fig 2: you mention only the in- and exclusion of GHG forcing (black and grey line, resp.). However, does the black line include all known (natural and anthropogenic) forcings and the grey line none of these (i.e. only showing the modeled internal variability)?
I’d like to ask Armin Bunde a couple of questions concerning his trend significance statements for sea surface temperature:
1) When you calculate significance from your statistical LTP model, are you effectively asking the question: “Could the observed trend have occurred simply as a result of LTP, even in the absence of contemporaneous forcing?”
2) If that is the case, presumably there is a trend magnitude which would be significant? Can you say what that would be?
3) Can you say whether it is the rate of change or the absolute amount of change which is more important to gaining significance in your model? For example, would a 1.2ºC SST warming over 200 years (a continuation of the past 100-year trend) remain insignificant? Would a 1.2ºC SST trend over the next 100 years be significant?
4) Do you accept Demetris Koutsoyiannis’ point that separating land and sea trends in the way you have is not physically realistic? For reference see Compo and Sardeshmukh (2009) and a blog post by Isaac Held for a concise summary.
5) Last, but definitely not least: The graph at the top of Rasmus Benestad’s opening blog shows a pretty standard idea of expected climate evolution in response to known forcings. The scale of the plotted response is somewhat arbitrary but it is clear that the proportional shape and timing of the observed temperature evolution is a very good match for our expectations based on prior physical understanding of historical forcings.
Unless you want to say differently (that would be the first sub-question), I can’t see a reason why a trend induced by LTP would favour the path seen in observed global average temperature. So under an LTP statistical model the observed temperature evolution would simply be one possibility out of hundreds of very different paths, with the same probability of occurrence as any other.
You mention that GCMs reproduce the LTP behaviour described, but they also unanimously expect increasing SSTs over the historical period in which we have observed increasing SSTs. If GCMs do reproduce LTP behaviour and you’re using LTP to inform your significance calculations, surely the unanimity of trend direction in GCM historical runs should play a role in our understanding of trend significance.
Regarding the nature of internal variability, I wrote that “in all likelihood” or “usually” it involves a redistribution of energy within the earth system. I included these caveats to not 100% exclude the possibility of spontaneous changes to e.g. planetary albedo or other factors that directly impact the energy balance (rather than via the surface temperature). The reason that I think such processes are very unlikely is that cloudiness and humidity are fast feedbacks in the climate system: They respond quickly and strongly to changes in temperature. I do not know of a plausible mechanism to spontaneously increase or decrease the earth’s cloudiness over multi-decadal timescales. Moreover, nights having warmed more than days precludes the main mechanism being due to a change in albedo (or solar irradiation for that matter).
From your recent comment I gather that what in climate science is known as a feedback, you classify as internal variability.
You write: ” I think once LTP is present, it becomes necessary to interpret climate through the presence of LTP, since LTP is so pervasive.” For trend significance, yes, if possible. Though the difficulty there is that we do not know the LTP characteristics of the unforced climate (since forcings are omnipresent in the data), so circular reasoning is an important pitfall.
You also write: “If we make no “artificial” changes to these, and just allow the climate to run, I think we would still see LTP (but with changes of irradiance happening all of the time).”
I think the omnipresence of forcings makes this impossible to know for sure. Rasmus showed that a GCM without forcings does not exhibit LTP. To what extent this is indicative of the real world, we don’t know. Comparing it to real-world data would require taking into account that the latter is impacted by forcings. The introductory text of this blog refers to an attempt to do so by comparing the power spectra of models and observations (fig 9.7 in AR4). It cannot, imho, be done by a straight comparison of the amount of LTP in real-world data (impacted by forcings) and model data (without forcings): apples and oranges.
Thank you very much for your answers to my questions. These may lead us to the essence of our differences in insight. But let’s start where I agree with you.
You stated: “I think it is a general practice in modelling of physical phenomena not to care about the details of the system components. In other words, in modelling we always lose information in comparison to the real-world system. We try to find which the essential elements of the system are and we try to represent those, ignoring the less important.”
I agree with this statement, although I find it a little surprising given that you say in your comment of 1 May 2013 8:07 pm that “Models vs. reality is a true dichotomy”. In my view the two statements contradict each other.
But let’s focus on two kinds of models: Atmosphere-Ocean General Circulation Models (AOGCMs) and your statistical model (described in MK2012). AOGCMs are the most comprehensive models climate scientists use, in the sense that most (relevant) physical processes, following physical laws, are incorporated. Every gridbox of the model obeys the first law of thermodynamics, i.e. conservation of energy. This is in my view an essential element.
It implies on the macroscopic scale that if there is a positive radiative forcing acting for a long time (e.g. the 30% increase in total solar irradiance in the course of the history of the earth), the climate system will gain energy until the outgoing energy flux of the system is in equilibrium with the influx. This will be reflected in global mean temperature changes.
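This macroscopic energy-balance argument can be sketched with a toy zero-dimensional model (an illustrative sketch only, in Python with numpy; the parameter values are merely plausible assumptions of mine, not fitted to anything):

```python
import numpy as np

# C dT/dt = F - lam * T, with T the global mean temperature anomaly.
C   = 8.0    # effective heat capacity (W yr m^-2 K^-1), assumed value
lam = 1.2    # feedback parameter (W m^-2 K^-1), assumed value
F   = 3.7    # sustained step forcing (W m^-2), roughly a CO2 doubling

dt, nsteps = 0.1, 2000                     # 0.1-year steps, 200 years
T = np.zeros(nsteps)
for i in range(1, nsteps):
    T[i] = T[i - 1] + dt * (F - lam * T[i - 1]) / C

# T relaxes toward F / lam: the system gains energy until the extra
# outgoing flux lam * T balances the imposed forcing.
```

The point is just the one made above: under a sustained positive forcing, conservation of energy forces the temperature to rise until the outgoing flux again balances the influx.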
How about your model? You process time series (of global mean temperatures) as if they consist of multiple fluctuations at all time scales. By ignoring the possibility of long-lasting trends (clearly the case for the brightness increase of the sun), energy is not conserved, for reasons I explained in my last comment.
In my view physics should be an essential part of any (climate) model. Statistics can be a tool for analyzing observations or model data, but it is not the same as physical laws (contrary to what you stated in your comment of 1 May 2013 8:07 pm: “In other words, statistics is physics.”).
So, I am very interested to hear whether it is possible that under a myriad of external forcings (natural and in the last centuries also anthropogenic) conservation of energy is not violated in a model which includes fluctuations only.
Rasmus, Arthur and others rightly point out that LTP must be derived from the physics of the earth’s climate system. However, the earth’s climate system is known to be chaotic. What is the difference between LTP and a chaotic system that exhibits variation on many different time scales?
We do know that various locations on the planet exhibit long-term variation in climate, such as the MWP in the North Atlantic region. Regional variation on the century time scale (probably of greater magnitude than recent global changes) certainly has taken place – driven by the laws of physics. There is disagreement about the global magnitude of this variation, but physics certainly causes phenomena that exhibit LTP on the century time scale in regions. If restoration of normal conditions simply required transporting less or more heat up to the characteristic emission altitude, such deviations wouldn’t persist for long.
Before 2000, most climate models required flux adjustment (holding the modeled climate near present-day climate) during spin-up, or the system (driven by physics) would settle into a quasi-equilibrium state far from today’s. Doesn’t the whole spin-up process (with and without flux adjustment) suggest that variation on decade-to-century time scales can be driven by physics? For that matter, “committed warming” taking place over decades is another example of physics.
The first paragraph of your last comment gave me the impression that we understand each other better now, but then the second part made me worry:
I believe that if something tells neither you nor me anything about the observed changes, etc., it is irrelevant to our discussion. What you or I understand as physical processes depends on your or my understanding (n.b., understanding is subjective). Physical processes are not just those explained by Newton’s laws. Some people are able to understand systems governed by the Second Law of thermodynamics (which, by the way, is a statistical-mechanical law) and classify them as physical systems. Some are even able to understand, and classify as physical, quantum systems governed by the Schrödinger equation, in which the concept of probability is central. Some even dare speak about flows of probability, and imply a law of conservation of probability. Most of these concepts I have difficulty understanding, but I would not characterize them as unphysical. I would avoid applying the Procrustean idea that only what fits in my own mind is physical. I think the climate system is quite difficult to study and I find that naïve attempts to describe (and understand) it in deterministic terms can only lead to deadlocks. Therefore I welcome any ideas from the stochastic toolbox, such as probability and its fluxes, the principle of maximum entropy, Bayesian statistics, and the like. All these become physical as long as they are applied to physical systems.
Furthermore, I cannot understand how these two extracts from your own comments can be consistent to each other:
Quote 1 ( earlier comment)
Quote 2 ( last comment)
But I hope the explanations given in the last excellent comment by Spencer answer your questions “in physical terms”.
I certainly do not attack you or your straw man, whose existence I was not aware of. My impression is that we are doing dialogue, as we are prompted by the title of this forum, right? As “dialogue” happens to be a Greek word, I believe I have a good understanding of its meaning. Well, climate is also a Greek word, but certainly this is more difficult to understand 🙂
You are right that I did not provide citations covering all my assertions about the Nile. So here is a paper which examines the modern record of the Nile: Medium-range flow prediction for the Nile: a comparison of stochastic and deterministic methods. You may see in Table 2 that the Hurst coefficient of the annual series is 0.85, which indicates LTP. I hope that after the scheduled event on Harold Hurst additional citations will be available (in particular, about other sites in the Nile basin). Note that I have published about the Nilometer record several times, but Figure 2 in my main post above is for a longer period and is an original figure (as are all the other figures), not copied from previous publications.
About the question of what “physical insight” is, please refer to my answer to Bart above. But I appreciate that you “have read enough about maximum entropy”, and I respect your wish to see another example. So here it is: “Uncertainty assessment of future hydroclimatic predictions: A comparison of probabilistic and scenario-based approaches”.
This is about a catchment which is important for the Athens water supply system. As you will see, LTP is present in both rainfall and temperature and is further amplified in river flow because of the interaction of the river with groundwater processes as well as because of the changes in the human withdrawals of water (the latter is of minor importance).
With reference to modelling principles, which I referred to in my first comment to Rasmus, please notice the following extract from that paper:
You may also notice in the last panel of Fig. 12 how poor the performance of climate models is, even after downscaling, and how stable the future projections, obtained by using climate model projections, are. Of course in the system management we have ignored GCM results as detailed in another paper, Hurst-Kolmogorov dynamics and uncertainty.
@Jim Cripwell 2013-05-02 13:44:16
From the website you link to:
The slope of the red line, the daily global sea ice anomaly, is -40000 sq km/year.
The global sea ice extent is dropping.
Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question 🙂
In other words, my reply on whether statistics is physics is already there, as well as in many of my other comments and in many of my publications, and there is little to add. Of course you may feel free to banish statistical thermophysics and quantum physics from physics, but I will not follow you. Assuming that you banish them, can you prove the ideal gas law (PV = nRT = NkT) without using statistics?
As for myself, given that I regard statistics as physics, I can use the law of large numbers, the central limit theorem and the principle of maximum entropy as powerful tools to make inferences in physics. These say that you do not need to study the details in order to know the macroscopic behaviour of a system comprising myriads of variables. This is the case in the example I gave before: in a monatomic gas you have 6 (variables/molecule) x 6 x 10^23 (molecules/mol) = a great many myriads of variables, and you do not care to know exactly even a single one of them. For you can infer the behaviour of the system using the three probability laws above. The conservation of momentum and energy are put as constraints for the entire system, neglecting the details of energy exchange between the individual molecules.
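The statistical reasoning above can be sketched numerically. This is a hypothetical illustration, not from the dialogue: the gas, temperature and sample size are arbitrary choices, and it only demonstrates that the macroscopic mean kinetic energy emerges from the law of large numbers without knowledge of any single molecule.

```python
import numpy as np

# Hypothetical sketch: each velocity component of a monatomic ideal gas
# is Maxwell-Boltzmann (i.e. Gaussian) distributed.  By the law of large
# numbers, the sample mean kinetic energy converges to the equipartition
# value (3/2)kT, even though we never inspect any individual molecule.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # temperature, K (illustrative)
m = 6.63e-26              # mass of one argon atom, kg (illustrative)
n_molecules = 1_000_000   # far fewer than a mole, but enough for the LLN

rng = np.random.default_rng(42)
sigma = np.sqrt(k_B * T / m)                       # std of each velocity component
v = rng.normal(0.0, sigma, size=(n_molecules, 3))  # 3 components per molecule
mean_ke = 0.5 * m * (v ** 2).sum(axis=1).mean()

theory = 1.5 * k_B * T                             # equipartition result
rel_err = abs(mean_ke - theory) / theory
```

With a million molecules the sample mean agrees with (3/2)kT to well under one percent, which is the macroscopic determinism that emerges from purely statistical reasoning.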
The situation is quite similar in climate. Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?
Rasmus said at the end of his guest blog:
“There may be some irony here: The warming ‘hiatus’ during the last decade is due to LTP-noise [15,16]. However, when the undulations from such natural processes mask the GHG-driven trend, it may in fact suggest a high climate sensitivity – because such natural variations would not be so pronounced without a strong amplification from feedback mechanisms. Figure 3 shows that such natural variations in the climate models are more pronounced for the models with stronger transient climate response (TCR, a rough indicator for climate sensitivity).”
I’m wondering what Armin and Demetris think of this statement. Do you agree, and if not, why not?
A much improved dialog.
That being said, statistical analysis of a time series is fraught with traps for the unphysical and dependent on the nature of the data itself. A nice example of this is the comment by Hendry and Pretis on Beenstock, Reingewertz, and Paldor’s “Polynomial cointegration tests of the anthropogenic impact on global warming”. For example, entropy maximization is subject to energetic constraints, which, in turn, are subject to energy flows into and out of the system and between components of the system, things which climate models at least attempt to follow. A model which simply maximizes entropy in the system as a whole will underestimate entropy for the system as a whole, because the sum of the entropy gains/losses (the earth is an open system after all) in the several components, oceans, atmosphere, biosphere etc., has to be larger.
It is true that 1 + 1 = 2, but 1.0 kg + 1.0 kg = 2.0 kg is a very different beast. For the former you need to know about integers; for the latter, about scales, standards, physics, real numbers and other stuff.
Actually, although there has been a small gain in sea ice extent in the Antarctic, the loss in the Arctic has more than counterbalanced that, and it is worse if you look at sea ice volume.
All three invited participants agree that radiative forcing can introduce LTP and that it is omnipresent. It follows that the presence of LTP cannot be used to distinguish forced from unforced changes in global average temperature. The omnipresence of both unforced and forced changes means that it’s very difficult (if not impossible) to know the LTP signature of each. Therefore, LTP by itself doesn’t seem to provide insight into the causal relationships of change. It is, however, relevant for trend significance, though fraught with challenges since the unforced LTP signature is not known.
I would also like to put forward the issue that was voiced by Lennart van der Linde in the public comments, echoing Rasmus: the stronger the natural variability, the more strongly spontaneous changes in the earth system dynamics are amplified, hence the higher the climate sensitivity. This limits the fraction of warming that can be attributed to natural variability, since strong variability would simultaneously enhance the fraction attributable to a given amount of radiative forcing.
Firstly many thanks for responding to me down here in the cheap seats :). The structure of the debate does sometimes leave me feeling a little disconnected from the main debate (although I am grateful for Demetris’ references to my comments to introduce them).
I find your response here a little confusing. Both you and Rob put forward arguments against LTP on the basis of an equilibrium and a restoration to that equilibrium. In fact, if the assumptions of a static equilibrium and restoration are correct, this would be a strong case for STP fluctuations in climate. So it is an interesting point of discussion.
However, the fact is that the equilibrium is not static, but is imprinted with the dynamics of the system, since albedo is an internal variable of the climate system. If the system dynamics are LTP, then LTP will be imprinted on the equilibrium point. If the dynamics are STP, then the equilibrium would be STP.
So the question of “usually” or “likely” becomes irrelevant. Either the equilibrium is static, or it is a function of the internal dynamics. Since we appear to agree that the equilibrium depends on the internal dynamics, the discussion has to move on to those internal dynamics. And this, of course, is why I chose clouds, because clouds are popular examples of fractals; but you seem to argue clouds are not good examples of fractals, because they have a characteristic time constant.
I was very careful with my definitions of radiative forcing, internal variability and external factors. They are all, I believe, objective definitions, and therefore scientifically meaningful. I am aware of the distinction that climate scientists make between “radiative forcings” and “feedbacks”. Unfortunately the basis for this distinction is based on a metric which does not exist for LTP systems.
The distinction between a “forcing” and a “feedback” is typically made on the basis of a time constant. For example, you refer to a “fast feedback”, with reference to a speed (and therefore a time). In this example from RealClimate, they state:
The reason this is a problem is that for a system with LTP dynamics, there is no defined time constant. In fact, by introducing a time constant, we immediately insert an assumption of STP into our analysis. Under these circumstances, it is no surprise that we conclude STP dynamics in the climate, because we have assumed it to begin with; the argument is tautological.
Even worse, the RealClimate article supports this assumption with a run from a GCM – which, as Rasmus has already shown, does not produce the LTP behaviour we see in observations, as discussed by Demetris. This is compounded by a second error. As discussed above, the equilibrium is not static. Yet in their GCM test, they perturb the water vapour content far from the equilibrium and assess how quickly it returns on a short timescale. The LTP is imprinted on the movement of the equilibrium point (amongst other things), and if you move far from that equilibrium point, you would hide that behaviour.
In summary: in assuming that water vapour has a time constant and is a “fast feedback”, we introduce STP into our assumptions, and as such finding STP in our conclusions is uninteresting and tells us nothing about whether STP or LTP is a better model for climate. However, I will add a second post detailing why I strongly disagree with your specific physical claims regarding humidity.
Bart, you say,
Bart, it is not difficult to find that humidity and cloudiness are a function not just of temperature, and not a linear function of temperature, but a non-linear function of the complete climate state. Although, to be fair, I am not a hydrologist; but we do have an expert hydrologist on the panel, and I am sure he will correct any errors that I make!
Although humidity and cloudiness are related to temperature, it is immediately obvious from observations that there is more than just this that influences these parameters. Very cold places must be dry – the driest locations on Earth are the deserts of Antarctica. However, if we look at a list of the driest places in the world, they are far from uniformly cold – Death Valley in the US and Aoulef in Algeria are both extremely dry, and extremely hot. Yet tropical rainforests are hot and humid.
What we see is that the simplistic view that humidity responds quickly to temperature and therefore does not fluctuate on a multi-decadal scale is quite wrong. Humidity and cloudiness are a function of many things – soil moisture content, evaporation, evapotranspiration, precipitation, etc. Land use, and therefore things like soil moisture content, are far from constant on the multi-decadal timescale, and therefore humidity and cloudiness are far from constant on a multi-decadal timescale.
Now you could argue that things like land use changes are forcings. But they are unpredictable forcings, sensitive to initial conditions, that exhibit LTP. And we know that even without human influence, desert regions etc. are far from static. They change at all timescales, naturally.
Sadly, the assumptions that humidity and cloudiness have a characteristic time constant, and respond narrowly to temperature on a short time frame in a manner that is linearly separable from the rest of the climate, are not consistent with what we know about the Earth’s climate system.
Bart, you say:
I confess I am astonished at this claim. It makes no sense to me.
There is no “forced LTP” and “unforced LTP”. There is the LTP that is present in the climate system. And that LTP has defined characteristics, parameters which define what we might term “natural” climate variability.
The uncertainty in estimating these parameters is driven not by the difficulty of separating two things which are not different, but by the difficulty of estimating the parameters of the LTP from historical data which is either of limited length (e.g. instrumental records) or has confounding factors or artefacts (e.g. paleoclimate reconstructions).
It is quite possible to detect changes in climate that exceed the range of typical behaviour, and even assign a probability to it – as both Demetris and Armin have done here.
I think I have said enough for now, but as a parting note I would like to echo part of Armin’s very nice comment here, which expresses the heart of the issue more succinctly than my efforts:
“Believe it or not, when I was writing my reply to Bart above I had not refreshed my browser and I had not seen your comment. So, my interpretation of your comment is that you guessed what I was writing as an answer to Bart and wrote a comment with a relevant question”
Yes, that’s quite a coincidence, I tend to believe you. Your answer contains interesting information about physics and statistics.
In my view there is – to speak with your words – a dichotomy between statistics, derived from fundamental physics, and statistics, used as an analyzing tool. Let’s call it statistics1 and statistics2, respectively.
Examples of statistics1: molecular velocities following the Maxwell-Boltzmann distribution, photon energies following the Bose-Einstein distribution, the gas law linking temperature, pressure and volume (as you mentioned), and Heisenberg’s uncertainty principle in quantum mechanics (also mentioned in your earlier comment). All of these can be derived from fundamental physics. These distributions and/or probabilities are fundamental properties of nature.
Examples of statistics2: decomposing time series into harmonics (Fourier transform) or into other functions (e.g. Laplace transform), wavelet analysis and regression. These are not based on fundamental physics, but are mathematical tools for analyzing experimental results and/or observations. Your climacogram is also an analyzing tool, as you have characterized it yourself in the abstract of your paper with Markonis (MK2012):
“In our analysis, we use a simple tool, the climacogram, which is the logarithmic plot of standard deviation versus time scale, and its slope can be used to identify the presence of HK dynamics”
All of these analyzing techniques are based on assumptions and have their limitations. They can be useful, but are by no means fundamental properties of nature. Therefore statistics1 is fundamentally different from statistics2.
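To make the “analyzing tool” point concrete, here is a minimal sketch of a climacogram computation following only the description in the quoted abstract: the standard deviation of the time-averaged series at each aggregation scale, whose log-log slope identifies the scaling. The function name, scales and test signal are my own illustrative choices, not from the paper.

```python
import numpy as np

def climacogram(x, scales):
    """Standard deviation of the time-averaged series at each scale k:
    average the series over non-overlapping windows of length k and
    take the standard deviation of those window means."""
    x = np.asarray(x, dtype=float)
    sd = []
    for k in scales:
        n = len(x) // k
        means = x[: n * k].reshape(n, k).mean(axis=1)
        sd.append(means.std(ddof=1))
    return np.array(sd)

# For white noise, classical statistics gives a log-log slope of -0.5;
# an HK/LTP process would decay more slowly (slope H - 1 > -0.5).
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
scales = np.array([1, 2, 4, 8, 16, 32, 64])
sd = climacogram(x, scales)
slope = np.polyfit(np.log(scales), np.log(sd), 1)[0]
```

Applied to a record with LTP, the same plot would show a flatter slope, which is exactly the diagnostic the quoted abstract describes.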
Can you agree with the distinction I make between statistics1 and statistics2?
To respond to your questions:
“Why do you think that my approach should violate the conservation of energy? Can you prove that? Or can you give an indication that it should violate it?”
As explained in my earlier comment, your approach doesn’t say anything about conservation of energy. Considering the Earth system, this is an important constraint. Including it, you might be able to distinguish between forced and unforced climate (global mean temperature) change. As you already mentioned, the climacogram cannot make that distinction. My guess is that taking energy constraints into account will affect the conclusions on trend significance. So it’s not a matter of violation, but of omission instead.
Eli, you give me a reference. What a way to distort the data!!! They show graphs for the Arctic and Antarctic BOTH at Aug 2012. In other words, the height of summer in the Arctic and the height of winter in the Antarctic. The rest of the article is similar scientific garbage.
“If you torture data long enough, it will confess.” Ronald Coase.
I think we’ve reached an agreement on the issue whether forcings affect LTP, if I understand you right. And forcing is omnipresent.
If we say that trends too have a LTP nature in addition to long-term variability, then we still haven’t managed to distinguish the two.
Can we agree that any use of LTP in hypothesis testing has to distinguish between intrinsic (“noise”) and externally forced LTP before we can say whether a trend is part of the natural internal variability?
You may not think it “is nearly impossible to have a meaningful discussion”, but I think you’re wrong.
Firstly, climate analysis and trend testing is not just a question of LTP. Furthermore, one always needs to make a set of assumptions before making sense out of the mathematics, even when applying LTP models.
It is easy to produce a bunch of meaningless numbers even with the most elegant mathematical model, for instance the mathematical structure of LTP.
Can you really demonstrate this? I’m sorry, I haven’t read your paper, although the papers I’ve read on LTP so far have not really convinced me. Too idealistic and not sufficiently practical. These papers also neglect a large scope of additional and relevant information.
In order to really understand LTP, you need to unveil the physical mechanisms which are at play.
All correlations can give you misleading numbers when based on a finite sample, and if there is persistence, the danger of getting an accidentally good fit is greater. With low-level chaos and different (unpredictable) weather regimes, I wonder if LTP assumptions and models may be tricked.
One way to test your methods is to carry out a large set of double-blind tests with data samples for which the answers are already known, for instance from climate/weather models (“surrogate” data or “pseudo-reality”).
Let me respond to some of your points:
1. Models vs. reality – I think we agree that models are not the real world, but they are nevertheless useful concepts. They are very useful for providing a ‘pseudo-reality’ against which statistical models can be tested, mainly because we can make decisions about the simulated world, e.g. forcing or no forcing.
2. Physics vs. statistics – I agree that physics and statistics both are important ways of understanding the universe. I also think that physical laws constrain possible outcomes and this must be reflected in the statistics.
4. Trends vs. fluctuations – I agree that the concept ’trend’ is not very definite, and it’s important to state the scientific question more clearly. The question that I think we are dealing with is whether the current global warming (“trend”) could arise from no forcing – or just natural forcings.
5. Linearity vs. nonlinearity – The difference between linearity and non-linearity is not controversial. You could add other more relevant phenomena, such as tropical cyclones. On the other hand, there are aspects which are more linear, and we know that there are some types of forcings which cause a somewhat linear response. Take the seasonal cycle, for instance. It is fairly clear what effect changes in the incoming solar energy have on the temperature statistics. We also see the effect from volcanoes, although it may be more complicated. Slow changes in the Earth’s orbit around the sun also appear to produce a response which is fairly clear. And we know that the greenhouse gases have an effect – we can look to the other planets in our solar system. There may be non-linear ocean-atmosphere effects causing decadal variations, and the question is whether these are sufficient for explaining the exceptional warming that we see now. Data from the past suggest this warming is highly unusual, and if it was a spontaneous freak event, it would still be quite unlikely.
6. Stochastic vs. deterministic models – on the quantum physics level, everything is stochastic, but the ‘laws of statistics’ make classical-scale physics considerably more deterministic. Still, the presence of chaotic dynamics makes some processes impossible to predict on a longer time scale. I also agree that not all stochastic models violate the laws of physics, but there are many stochastic models which do. Furthermore, there may be some stochastic models which provide useful predictions for certain scientific questions and still misleading results for others.
Models describing the mean surface temperature on a planet must account for the energy budget, thermodynamics, and dynamics. There are some non-stationary models, such as “random walk” models, which may be hard to reconcile with the planet’s energy balance and hydrostatic stability. In such circumstances, there is a true dichotomy – but in many cases there is no dichotomy.
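The contrast between a non-stationary random walk and a stationary model can be sketched numerically. This is a hypothetical illustration (the AR(1) coefficient and ensemble sizes are arbitrary choices): the random walk’s variance grows without bound, which is what makes it hard to reconcile with an energy-balance constraint, while a stationary process settles to a fixed variance.

```python
import numpy as np

# Hypothetical sketch: ensemble variance of a random walk grows linearly
# with time (unbounded excursions), while a stationary AR(1) process
# settles to a finite variance 1/(1 - phi^2).
rng = np.random.default_rng(1)
n_paths, n_steps = 2000, 1000
eps = rng.normal(size=(n_paths, n_steps))

rw = eps.cumsum(axis=1)                       # random walk: non-stationary

phi = 0.9
ar1 = np.zeros_like(eps)                      # AR(1): stationary for |phi| < 1
for t in range(1, n_steps):
    ar1[:, t] = phi * ar1[:, t - 1] + eps[:, t]

var_rw_end = rw[:, -1].var()                  # grows like n_steps (~1000 here)
var_ar1_end = ar1[:, -1].var()                # settles near 1/(1-phi^2) ~ 5.26
```

The random-walk variance at the final step is roughly the number of steps, while the AR(1) variance is bounded, which is the sense in which only the latter is easy to reconcile with a restoring physical constraint.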
The way I view this is that any trend will affect the auto-correlations. In addition, there is a question about how the climate system reacts to even a linear trend forcing. We agree that there is a presence of internal dynamics and there are feedbacks which result in inter-annual and decadal variations. In a chaotic system, the changes in external conditions may imply changes in the system’s evolution at points of bifurcation.
I’ve earlier stated that I believe that the weather is ‘chaotic’ whereas the climate is not, just as the individual days of the next year are unpredictable whereas the seasonal statistics are fairly well known (seasonal forecasts try to say something about how much the future will diverge from the ‘normal’ state). This, however, makes an assumption about time scales. We know that there are fluctuations in the global mean temperature, but these are distinct from the slow changes.
In other words, I’m not convinced that GHG forcing does not contribute to LTP.
Just to comment on ‘unphysical models’ to clarify my position.
In my view, a model is ‘unphysical’ if it violates one or more laws of physics. A model may have some aspects of physics (e.g. maximising entropy), but if it violates other physical constraints such as the conservation of mass, energy, or charge on the classical scale, then it’s ‘unphysical’.
If the global mean temperature undergoes large excursions, then this will clearly have an impact on other parts of the climate system, as it implies that heat is shifted around. Conservation of energy implies that.
When it comes to LTP, trends, and internal variations, it may be useful to look at sea level pressure (SLP) or SLP indices. We know a priori that the SLP is not likely to have any long-term trend (or perhaps just a tiny bit due to changes in the atmosphere’s composition), as the barometric pressure is a measure of the mass of air above the point of measurement. If we can rule out the possibility that the atmosphere increases in mass, then SLP is only expected to exhibit LTP but no trend.
It’s a bit speculative – agreed – that the SLP should have similar LTP as the global mean temperature.
Another diagnostic could be the global mean sea level change or the total ocean heat content; see the recent post on RealClimate.org. The question is whether this measure shows more of a trend and less decadal variability. How does the LTP detection differ in all these three cases, and what can we learn from that? As I stated in my opening piece, I think it’s a great mistake to look at only one indicator – and in a sense, this is where one ‘unphysical’ property resides.
I’d like to second Rob van Dorland’s well-stated question about “statistics1” vs “statistics2”. When Demetris Koutsoyiannis says there is “no dichotomy” between physics and statistics, I have no idea what he is talking about. The two words have different meanings. Just because there are statistical approaches within physics (I have taught a Statistical Mechanics class), that’s no more significant than noticing there is such a thing as ‘chemical physics’, and yet chemistry and physics are two different subjects. Mathematics and physics mean different things, and mathematical physics is a particular domain of its own. Doing statistics doesn’t mean you’re doing physics, and an explanation via statistical model is not necessarily at all a physical explanation. The ancient world’s epicycles allowed precise modeling of planetary motions, but those mathematical tools did not provide the physical explanatory (and predictive) power that came with Newton’s inverse square law of gravitation.
So what is the physical explanation for LTP in the Earth system? Since it is not seen in unforced climate models, it must be some other physical element. One possibility, discussed by Bart, Rob, Rasmus, etc above, is that the observed LTP comes from the forcings, and a good piece of evidence for that is that forced climate models evidently DO show LTP (as Armin Bunde notes, as I queried above). What other option is there? Spencer’s clouds and humidity don’t help, because climate models *do* include clouds and humidity changes. Perhaps there is something wrong in the way those models handle it – what specifically would need to be changed to fix that problem? Or perhaps it is something about ocean dynamics that climate models get wrong? Or perhaps it is a whole range of things? But the reluctance of Demetris and Spencer to distinguish between forced and unforced changes suggests that no plausible such physical model for unforced LTP has ever been found.
I agree with the Statistics_1 vs Statistics_2 statement. For example, the Ornstein-Uhlenbeck (O-U) process is the physics model behind the statistical AutoRegressive model. The O-U process will show persistence and provide a physical explanation for what may be happening, and, yet, that is but a simplified explanation of a more elaborate climate model.
So I suggest that before one starts suggesting H-K dynamics, they go through the first steps of looking at the basic statistical physics, which could include MaxEnt, etc, if applicable. This will lay the foundation for departures from first-order physics models and statistical mechanics.
sorry for answering late. To answer your question, we have to note that we distinguish between natural fluctuations and trends. When looking at an LTP curve, we cannot say a priori what is trend and what is LTP. When analysing LTP in the IMPROPER way, which still is one of the best-liked ways among many climate scientists, namely by the ACF, then we also cannot distinguish them, and this is the reason why Rasmus comes to the wrong conclusion that external trends contribute to the LTP.
The real crux is that many of our colleagues just have problems in reading and understanding literature on LTP that requires some mathematical understanding, and so they cannot appreciate the enormous progress made in this field in the past 15 years. By definition, an external trend does not contribute to the LTP of a record. The LTP is natural; the trend is external and deterministic. When using the PROPER methods like DFA and WT, you can indeed quantify the natural LTP in a record even in the presence of an external monotonic trend!
Of course, if you use the improper ACF as Rasmus did, you cannot!!
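To illustrate the DFA-versus-ACF point numerically, here is a generic textbook-style DFA sketch (this is not Bunde’s own code; the test signal, scales and polynomial order are my illustrative choices). DFA-2 fits a quadratic in each window of the integrated series, which removes a linear trend in the data exactly, so the scaling exponent of the underlying noise is recovered, while the raw lag-1 autocorrelation of the same record is dominated by the trend.

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis (minimal sketch): integrate the
    series, fit a polynomial of the given order in non-overlapping
    windows of each scale, and return the RMS residual F(s).  The
    log-log slope of F(s) vs s is the DFA exponent alpha
    (alpha = 0.5 for white noise, alpha > 0.5 for LTP)."""
    y = np.cumsum(x - np.mean(x))             # the 'profile'
    F = []
    for s in scales:
        t = np.arange(s)
        res2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            coef = np.polyfit(t, seg, order)
            res2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res2)))
    return np.array(F)

# White noise plus a strong deterministic linear trend:
rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n) + 5e-4 * np.arange(n)

scales = np.array([32, 64, 128, 256, 512])
F = dfa(x, scales, order=2)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 despite the trend

acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # close to 1: trend masquerades as persistence
```

The DFA exponent stays near the white-noise value 0.5 even though the naive autocorrelation is nearly 1, which is the quantitative content of the “proper versus improper tools” argument above.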
When testing to what extent GHG is responsible for LTP, we found it is not; please have a look at our 2004 GRL paper, where we also specified the methods.
It is very unfortunate that Rasmus does not seem to be able to read this and our other articles on LTP. I am sure he would appreciate them and see that LTP is not a beautiful and needless kind of theory made by strange theoreticians to make life harder for real climate scientists (forgive me when I am joking a bit). If he read them, he would certainly understand (1) from our 2009 PRE that the ACF is a poor tool for analysing LTP in a short record (we have specified the ACF analytically as a function of the record length and the correlation exponent; he only needs to read the formula), and (2) from our 1998 PRL, 2001 Physica A, and several other papers including our 2013 Nature Climate Change that there are much better tools (DFA and WT) than the ACF, and he would also see the consequences of LTP, e.g. the clustering of extremes.
Finally, when reading our 2009 GRL and the more extensive 2011 PRE, he would highly appreciate that it is easy to determine the significance of a trend in a long-term correlated record, because we have specified analytic formulas for the significance as well as for the confidence intervals. This way, he would recognize that natural LTP is not some strange idea of theoreticians, but is of real use.
The WRONG alternative – and when reading the nonspecific comments of Rasmus I think he may favour that one – is to assume that natural climate variability is an AR(1) process. Indeed, climate scientists liked it in the past, because it is easy to handle, easy to generate, and a significance analysis was not too difficult to do; mathematicians in the 1930s already developed tools for this. What these climate scientists do not know is that an LTP process can also be generated very easily (one only needs to know Fourier transforms) and that performing a significance analysis is even easier than for STP. But I am sure this will change in the coming years.
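The Fourier-transform generation of an LTP process alluded to here can be sketched as follows. This is a generic spectral-synthesis method, not necessarily the exact procedure Bunde has in mind: shape white-noise Fourier amplitudes by f^(-β/2) so the power spectrum goes as f^(-β), then invert; for 0 < β < 1 the result is stationary LTP noise with Hurst exponent H = (β + 1)/2.

```python
import numpy as np

def powerlaw_noise(n, beta, rng):
    """Generate power-law ('LTP') noise via spectral synthesis:
    random phases, amplitudes proportional to f**(-beta/2), inverse FFT.
    beta = 0 gives white noise; 0 < beta < 1 gives stationary LTP."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # leave the zero frequency at 0
    phases = rng.uniform(0.0, 2 * np.pi, size=len(freqs))
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()                            # normalise to unit variance

rng = np.random.default_rng(5)
x = powerlaw_noise(2 ** 16, beta=0.6, rng=rng)    # H = 0.8: long-term persistent
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]           # strongly positive, slow decay
```

A couple of FFT calls suffice, which supports the claim that generating LTP surrogates is no harder than generating AR(1) ones.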
Finally, Rasmus will recognize that ENSO is not an example of LTP; like other quasi-oscillatory phenomena, it cannot be described as LTP.
When reading this again, I cannot exclude the possibility that for Rasmus, LTP is STP plus trend, but this would be a serious mistake.
thank you. I apologize that I can only answer today. I have written a quite lengthy answer to Bart’s post from May 3, where I comment quite a lot on your ideas. After reading this, I am convinced that reading and trying to understand our papers really would help you much in this issue. Please do this!
You will see that in some of them, highly recognized climate scientists like HJ Schellnhuber and H v Storch contributed. You will learn new methods, you will see quite practically, by real formulas, how poor the ACF really is, and you will see how easily in LTP records the significance of external trends can be estimated, in the same way as for STP records, but easier. Everything very practical! The only thing you have to do, is to read these articles!
We scientists are, of course, open to new ideas, and so it must be a great pleasure for you to get acquainted with all this new stuff. Just take the chance!
I just read a few of Armin Bunde’s papers, and I’m quite confused…
For example, this 2005 PRL – http://prl.aps.org/abstract/PRL/v94/i4/e048701 – uses the MBH’99 Northern Hemisphere temperature reconstruction as one of the examples of long-term persistence, among several other long-term climate-related records (including the Nile river level one Demetris Koutsoyiannis cited). The LTP power-law exponent, gamma, in the case of MBH’99 was very low – just 0.1, indicating very slow decay of the correlations. This was with their “DFA2” approach, which detrends by removing a quadratic polynomial fit from the data. However, removing a quadratic polynomial (or any other polynomial trend) is quite different from a removal of external forcings from the picture. We know that the long-term temperature record has been subject to a series of changes in forcings associated with Earth’s orbit and the sun (sunspot records provide at least some long-term record of that change), as well as greenhouse gas changes. Those have not followed any simple polynomial pattern – in fact they have fluctuations on a wide range of timescales in themselves.
So while DFA certainly removes the effect of any steady trend from the data, I don’t see how it can be claimed to isolate “natural” behavior as distinct from externally forced behavior. Yes, you could call variations in orbit and sun “natural” if you want – but you could call variations in greenhouse gases just as “natural” (humans are after all part of this world). The interesting question that the LTP analysis appears to address, but which I don’t see how it could, is how the Earth system behaves in isolation from changes in external or human influences. Because that isolation has not been done in this sort of analysis – at least I don’t see it.
What I think we’re after is something you might call the Green’s function of the Earth – how does the Earth in itself respond over time to an initial delta-function perturbation? Does that response, after initial transients, decay exponentially (AR-1) or does it decay more slowly with a long tail (LTP)? Climate models evidently show exponential decay. What are they missing if the real answer is LTP? And how can we actually come up with any conclusive evidence from observations that LTP is the right answer? I don’t see it in the discussion here.
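The exponential-versus-long-tail distinction posed here can be made concrete with a small deterministic comparison (a hypothetical illustration; the AR(1) coefficient and Hurst exponent are arbitrary choices, and the power-law curve is the standard asymptotic fGn autocorrelation): at short lags the two decay laws are of comparable size, while at long lags the power law dominates by orders of magnitude, which is why short records struggle to discriminate them.

```python
import numpy as np

# Compare an exponentially decaying (AR-1-like) autocorrelation with a
# power-law (LTP) tail.  acf_ltp uses the asymptotic fGn form
# H*(2H-1)*k**(2H-2); both are illustrative parameter choices.
lags = np.arange(1, 1001)

phi = 0.9
acf_exp = phi ** lags                                # exponential decay

H = 0.8
acf_ltp = H * (2 * H - 1) * lags ** (2 * H - 2.0)    # power-law decay

ratio_10 = acf_ltp[9] / acf_exp[9]     # lag 10: same order of magnitude
ratio_100 = acf_ltp[99] / acf_exp[99]  # lag 100: power law larger by orders of magnitude
```

At lag 10 the two autocorrelations differ by less than a factor of a few, but by lag 100 the exponential has collapsed to ~1e-5 while the power law is still near 0.08, so discriminating them observationally requires long, clean records at exactly the lags where data are scarcest.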
The undulations from natural variability (or LTP) do not necessarily indicate high climate sensitivity. While high climate sensitivity attributed to GHGs does require strong amplification from feedback mechanisms, natural causes do not. Read the section about circular reasoning and models. Just because a model with high climate sensitivity shows more pronounced natural variability, that does not necessarily equate it with reality. Also, separating natural from manmade causes may not be possible, as described earlier, as they may be influencing each other.
Thank you for your response. I have already had some discussions on LTP, but if you can recommend one particular paper of yours, I’ll of course read it.
In the meanwhile, I urge you to use all the information available, and not just rely on one time series and its LTP characteristics alone. That is in my view ‘unphysical’, because we know that the various geophysical data are interlinked in the climate system. I would like to stress that we need to take a comprehensive view when we want to address questions such as whether the global warming we now witness is due to natural fluctuations (LTP) or is indeed forced by GHGs.
I realise that highly recognized climate scientists like HJ Schellnhuber and H v Storch have contributed to LTP work, and that this topic is very interesting. So far, I think these findings suggest possible properties, rather than the exclusion of causalities. The practical question which we originally addressed here was whether we can decide if the current trends are ‘unnatural’.
Yes, you can use some ‘noise models’ (e.g. FARIMA, fGn, or whatever), fit them to the data, and say that, given such properties, the trends could easily have happened by chance. My doubts about this strategy concern the design of the analytical approach.
My point regarding the simple ACF is that the forcing will also induce LTP properties. You can of course try to remove the trend, but then you will have to make assumptions. You will need to provide convincing demonstrations of an objective method that is not biased towards one answer.
Maybe it’s easy to test the strategy against pseudo-data: results from climate models subject to different forcings. I’d be curious to know what results you’d get then.
I also appreciate that LTP is present in many different physical processes, but we know that there are many types of physical processes with different properties. We know that the Earth’s temperature is not self-similar or power-law; both the diurnal cycle and the annual cycle have well-defined temporal scales. However, we expect a convoluted response to these.
If you can provide some indication as to what mechanisms would be responsible for LTP for the global temperature, that could shed some light on our differences. Again, I’d suggest subjecting e.g. the mean sea-level pressure to similar tests for LTP, as we expect there to be no trend as the atmospheric mass is constant (more or less). We could also subject different ocean data to the same type of test.
You wrote “Natural Forcing plays an important role for the LTP and is omnipresent in climate.” and later on you wrote “By definition, an external trend does not contribute to the LTP of a record. The LTP is natural, the trend is external and deterministic.”
I have difficulties reconciling these two statements. Does radiative forcing (e.g. from a change in solar irradiance) contribute to LTP? From the former quote above I gather your answer would be yes; from the latter I gather you would answer no.
If yes, why would a natural forcing (like a change in solar irradiance, or like Milankovitch forcing) contribute to LTP and anthropogenic forcing (like a change in GHG or aerosols, which could also change -though typically over longer time scales- due to natural processes) not?
I’ve tracked down arXiv versions of papers to which Armin has referred (I think):
– Volcanic forcing improves Atmosphere-Ocean Coupled General Circulation Model scaling performance
– Power-Law Persistence in the Atmosphere: Analysis and Applications
The basis for Armin’s statement about GHG and LTP can, I think, be best traced to Figures 3 and 4 of the volcanic forcing paper, though the other paper is also relevant. The figures show results of Armin’s fluctuation analysis applied to outputs from a GCM (NCAR PCM) from a variety of scenario runs involving different forcing types, including no forcing (i.e. the control run).
Looking at the GHG-only run in Figure 3, we can see the derived LTP exponent is not “improved” – not closer to the typical 0.65 value from observations – compared to the control run. I would suggest this is the source of Armin’s contention that GHG forcing does not contribute to LTP. By contrast volcanic forcing, as the paper title suggests, tends to shift the exponent upwards and closer to observed values. However, it isn’t a natural vs. anthropogenic thing: your example of solar irradiance is even less LTP-potent than GHGs, according to this analysis.
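For readers who want to see what the fluctuation analysis behind these exponents does in practice, below is a simplified second-order DFA sketch (my own minimal reimplementation of the generic method, not the authors' code; assuming Python with NumPy, with illustrative scales and a white-noise test series whose exponent should come out near 0.5):

```python
import numpy as np

def dfa(x, scales, order=2):
    # detrended fluctuation analysis: fluctuation function F(s) at each scale s
    y = np.cumsum(x - np.mean(x))          # integrated "profile" of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq = 0.0
        for seg in segments:
            coef = np.polyfit(t, seg, order)           # remove a polynomial trend per window
            sq += np.mean((seg - np.polyval(coef, t)) ** 2)
        F.append(np.sqrt(sq / n_seg))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)              # uncorrelated test data
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
# the DFA exponent is the log-log slope of F(s); ~0.5 means no LTP,
# whereas observed temperatures are reported above to give ~0.65
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Running such a routine on control-run versus forced-run output is, in essence, what the figures cited above compare.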
I think there is some physical basis for this state of affairs. Stouffer et al. 2004 showed that halving CO2 induced a longer response timescale than doubling CO2 in a GCM, so there could be some timescale asymmetry for cooling versus warming. The characteristics of volcanic-induced forcing could also play a significant role – there is an abrupt large negative forcing followed by an abrupt positive forcing. The transient effect at the surface appears to be done within a decade but that doesn’t change the fact that a cold pulse has effectively been injected into the oceans, which we can reasonably expect to exert an influence over a longer timescale.
Lennart, you said:
I thought it was clear from what I wrote earlier that I fully disagree with such statements. There is no “LTP-noise”. LTP is a property of the real-world climate, which emerges from its dynamics. Well, these dynamics may be difficult to infer and express deterministically in their details, but they have some macroscopic characteristics. The macroscopic characteristics are consistent with the Hurst-Kolmogorov (stochastic) dynamics.
All remaining parts of the statement are hypotheses stemming from a belief that climate models tell us the truth. For example, how do we know about “GHG-driven trend”? Because the climate models tell us so. But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” so, let’s invent some “noise” to rectify this.
But as I clearly described above and clarified later, the “noise” properties of the climate models are inconsistent with LTP.
Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.
If they need to add some “noise” to their deterministic models to match reality, they should first have taken care that their “noise” is compatible with their own models. To this end, they could run their models in “unforced” conditions, infer the “noise” properties from there, and then use this “noise”. Can a Markovian noise with a characteristic time scale of 1.25 years explain a climatic ‘hiatus’? I do not think so.
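This rhetorical question can be checked numerically. The sketch below (assuming Python with NumPy) simulates an AR(1) “noise” with a 1.25-year characteristic time scale, taking the lag-1 autocorrelation as exp(-1/1.25), superposed on a forced warming trend; the trend rate and noise amplitude are hypothetical round numbers of my own, not values from the discussion:

```python
import numpy as np

tau = 1.25                         # characteristic time scale of the Markovian noise, years
phi = np.exp(-1.0 / tau)           # implied lag-1 autocorrelation
sigma = 0.1                        # hypothetical interannual noise std dev, deg C
trend = 0.02                       # hypothetical forced warming rate, deg C per year
years, n_sim = 15, 5000

rng = np.random.default_rng(7)
innov_sd = sigma * np.sqrt(1.0 - phi**2)
t = np.arange(years)

flat = 0
for _ in range(n_sim):
    noise = np.empty(years)
    noise[0] = rng.normal(0.0, sigma)
    for i in range(1, years):      # AR(1) recursion
        noise[i] = phi * noise[i - 1] + rng.normal(0.0, innov_sd)
    if np.polyfit(t, trend * t + noise, 1)[0] <= 0.0:
        flat += 1                  # a 15-year "hiatus": zero or negative trend

p_hiatus = flat / n_sim            # small under these assumptions
```

Under these hypothetical numbers the probability of a 15-year flat trend comes out small, consistent with the argument that such short-memory noise struggles to explain a hiatus; LTP noise would make it considerably more probable.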
To me climate is by definition a stochastic concept (see my main post); its physics is describable only in stochastic terms, and the foundation and tools for decent climate modelling are offered by stochastics. A first step in such modelling is to explore, based on instrumental data, proxies, and other quantified information, the stochastic behaviour of the real climate. I hope my publications, some of which I referred to in the above comments and my initial post, have contributed to the latter step.
I have been looking at several climate blogs to see if this Climate Dialogue discussion has attracted the interest of bloggers and climate discussers. I would say the coverage is thin: only Bishop Hill devoted an entry to this discussion, while, perhaps coincidentally, William Briggs discussed the Mexican Hat Fallacy, also discussed in my main post above. But I saw a few comments in other independent posts, among which I think one is interesting. Before I provide the link to this comment, I will tell two stories which I recalled after seeing some of the comments in blogs (including in this forum).
Once I was explaining to a colleague that in the Navier-Stokes equations (which, by the way, are used to model water flows, the weather, the ocean currents, etc.) the turbulent stresses are stochastic quantities (covariances of velocities). The colleague told me something like: Look, you may be right but I do not care. As a student in the university I learned some of this stuff, differential equations stuff and stochastic stuff. Now I have repelled them from my mind as I do not need anything more than the high school physics background. Actually, whenever I am not able to downgrade my explanations to elementary school level, people do not believe me.
Another colleague, in a discussion about climate which started as scientific and ended up as political, told me: I do not care whether climate change is real or not. Even if it was not, we should have to invent it, in order to save the planet from the various threats. (By the way, my own view is that, of course, climate change is real—climate had been changing all the time—and that we should beware saviors).
Now the comment I wish to refer to is by Dr. Robert G. Brown, posted at Watts Up With That?. He offers some interesting (advanced-level) physical insights and then (after the phrase in which he, very rightly, puts two words in bold) also discusses political aspects of the climate agenda.
“I have been looking at several climate blogs to see if this Climate Dialogue discussion has attracted the interest of bloggers and climate discussers. I would say the coverage is thin:”
Given that the first discussion was a complete train wreck, this is not astounding. Even here the two-on-one nature of the “dialog” has made for difficulties, but this is a great improvement.
The Orthodox Easter break gave me the chance to try to see this discussion from a more macroscopic point of view; some statistics about word usage (shown in the graph below) helped me.
I looked again at the title of the forum entry and I verified that it is “Long-term persistence and trend significance”. Both constituents of the title are statistical terms, yet from the beginning of the dialogue I felt that some of the discussers view statistics as disjointed from physics and identify climate with physics.
No doubt that climate fully obeys physical laws, but, as I wrote in my introductory post, it happens that climate is based on statistics even in its very definition, so by depreciating statistics we also depreciate the scientific basis of climatology. More generally, in complex systems, to express/derive physical laws we need statistics. I am happy that, finally, my persistence on this thesis resulted in recognition, by most of the discussers, of statistics as an essential part of physics. I set aside the neologisms of Statistics_1 and Statistics_2 and the implied new dichotomy—I take it as a joke. Of course in every scientific problem there is good and bad use of statistics, mathematics, logic. But eventually the correct use will prevail.
Another interesting point I noted in the discussion is a tactic like this: if something is not consistent with our ideas, let us call it unphysical, that is, violating physical laws. Of course, if a theory or analysis violates physical laws, then it should be rejected. But we have to prove which law it violates and how. A stochastic model whose fitting is based on data cannot be pronounced unphysical just because it did not consider explicitly a specific physical law, e.g. conservation of energy. Inasmuch as conservation of energy is reflected in the data, it is indirectly respected by the model as well. Unless we convict also the data (e.g. those used in my calculations or other) and call them unphysical because they are not consistent with what we trust as being physical (in this case what the climate models are telling us).
Reading this discussion one would perhaps develop an impression that conservation of energy is the only important physical law and that it can explain everything. Of course— I repeated it several times—it is an important law, it is never violated, but on the other hand it cannot explain everything. It does not explain for example, why my room has roughly uniform temperature. Infinitely many non-uniform options would not violate the conservation of energy—they would have the same total energy content (for example if half of the room was below freezing point and the other half much warmer). The most powerful laws in physics are the variational principles rather than the conservation laws: Fermat’s principle (for determining the path of light), the principle of extremal action (for determining the trajectories of simple physical systems), the principle of maximum entropy (for more complex systems). It is the latter principle which explains the uniformity of temperature in my room, not merely the conservation of energy. Conservation of energy offers just one equation for the interacting parts. In contrast, the variational principle offers as many equations as there are unknowns. That is why I believe we should employ variational principles in climate. In particular, the one I believe is most relevant in a changing climate is the principle of extremal entropy production. This gives rise to the LTP, as I explained in my paper I already mentioned several times.
This brings me to another point of this macroscopic overview. Of course, the stuff contained in this forum is not any formal peer-reviewed publication. But it is better if the arguments put forward can be supported by peer-reviewed publications and if the discussers read these publications and refer to them. Each of the discussers has his own publications. I have tried to refer to some of my own several times, but perhaps I was not convincing. I am afraid that Armin had the same feeling when he wrote to Rasmus:
Looking at the word cloud above, I think one of the most popular issues remains that I did not cover in this brief overview: that of forcing. But I think there is nothing to add to what I wrote earlier in other comments, except that I fully endorse Spencer’s comment, from which I wish to quote this part:
“[H]ow do we know about “GHG-driven trend”? Because the climate models tell us so. But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” […] Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.”
Maybe you’ve seen Guemas et al 2013? They claim to make a “retrospective prediction” of the ‘hiatus’:
“Our results hence point at the key role of the ocean heat uptake in the recent warming slowdown. The ability to predict retrospectively this slowdown not only strengthens our confidence in the robustness of our climate models, but also enhances the socio-economic relevance of operational decadal climate predictions.”
Is this the kind of prediction you’re asking for above, or do you mean something else? How convincing do you find this ‘retrospective prediction’?
Also Meehl et al 2011 may be relevant, who claim to find “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods”:
“The model provides a plausible depiction of processes in the climate system causing the hiatus periods, and indicates that a hiatus period is a relatively common climate phenomenon and may be linked to La Niña-like conditions.”
How would you comment on this conclusion?
Dear Rasmus (and Bart),
sorry for answering late, I just did not have time before. Rasmus, from your comments I can really see that you must get more familiar with LTP and with the distinction between “external trends” and “natural fluctuations”, as well as with the classical techniques based on exceedance probabilities and confidence intervals to evaluate whether some temperature increase is significant or not. Significant means it cannot be explained by the natural fluctuations in the system. You know, these things are right at the heart of this Climate Dialogue, and in order to have a meaningful discussion we must make sure that we share the same basic knowledge.
One paper to read is certainly not enough here; this is like “a drop on a hot stone”. Maybe you could start with our introductory review from 2012 in Acta Geophysica and with the SI in our recent Nature Climate Change paper, coauthored by Hans von Storch. The references are in my blog. Then, for a deeper insight into the methods which are essential in this field (it is the same here as in physics: the appropriate techniques and methods are essential!!), you should read the 2001 Physica A article on Detrended Fluctuation Analysis. If you want to go further, you can even read our 2002 article in Physica A, again by Kantelhardt et al., on multifractality. This article will soon exceed the limit of 1000 citations and maybe you will like it, but it contains a large mathematics part. But you know, mathematics is the heart of science, so you should not worry.
After that, I suggest you read four of our articles with HJ Schellnhuber, i.e. our joint PRLs (PRL is the most prestigious physics journal, as you certainly know) from 1998, 2002, and 2004 (Comment), as well as the paper by Eichner et al. in PRE. In addition (and this article is very important for you; I mentioned it already several times), you should look at our 2004 GRL, where we discuss, for the 2nd time after our 2002 PRL, to which extent the different forcings contribute to the (quasi-universal) persistence law for continental temperatures. We show that with GHG forcing alone, as in our 2002 PRL, the persistence law cannot be reproduced by the AOGCM, but the natural forcings are essential to get it right; in particular, volcanic forcings seemed to play an important role. You see, these are quite PRACTICAL PAPERS, but involve some kind of mathematics.
Having read these papers, you can pass to our 2005 PRL, where we could show that LTP implies a clustering of extreme events, and that the clustering of extremes that we PREDICTED on the basis of LTP can indeed be seen in climate records. Again, this paper involves mathematics, but is again VERY PRACTICAL, isn’t it? After that, you will be in the right mood to look at our 2009 PRE on the ACF. This paper is more demanding in mathematics, but you do not need to go through it in detail; just look at the formulas for the ACF and how it depends on the system size. After reading this article, I am sure you will not continue to analyse data by the ACF, because the finite-size effects are drastic and hide the proper behavior. So you see, this is also a VERY PRACTICAL paper, which will help you tremendously.
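The finite-size effects in the ACF referred to here can be illustrated with a minimal experiment (a sketch of my own, assuming Python with NumPy; the Hurst exponent, lag, and record length are illustrative). The sample ACF of a short LTP record is biased well below the true value, because the subtracted sample mean itself wanders under LTP:

```python
import numpy as np

H, n, lag, n_sim = 0.8, 100, 10, 500

def rho(k, H):
    # exact autocorrelation of fractional Gaussian noise at lag k
    return 0.5 * ((k + 1)**(2*H) - 2 * k**(2*H) + abs(k - 1)**(2*H))

# exact simulation via Cholesky decomposition of the covariance matrix
idx = np.arange(n)
cov = np.array([[rho(abs(i - j), H) for j in idx] for i in idx])
L = np.linalg.cholesky(cov)

rng = np.random.default_rng(3)
estimates = []
for _ in range(n_sim):
    x = L @ rng.standard_normal(n)
    xc = x - x.mean()                 # mean removal is the source of the bias
    estimates.append(np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc))

true_acf = rho(lag, H)                # about 0.19 for H = 0.8 at lag 10
mean_est = float(np.mean(estimates))  # comes out far lower on records of length 100
```

The downward bias means a naive ACF reading of a short record understates the persistence that is actually there.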
Finally, I suggest you look at our 2009 GRL, or better our 2011 PRE, which is at the heart of this Climate Dialogue. The paper is also mathematically demanding, but not too much; and since we are scientists this does not create problems for us, since mathematics is the language of science. I hope you agree. (Otherwise, we would be philosophers.) Reading all these papers will take you some time, but this will be a very good investment!!! Just do it!!! If you have specific questions, not philosophical ones, I will be more than happy to help you. Also, if you can’t get all the papers from the Internet, just tell me and I will email them to you. Getting familiar with LTP and the significance of anthropogenic trends is enormously important, at the heart of this CD, and the PREREQUISITE for a meaningful discussion. So take the chance!!
Now let me come to the confusion that arises when you and Bart talk about LTP. First of all, LTP is specific and can be described by mathematics in a well-defined manner, unlike your El Niño example, which for sure is not LTP. If I want to find LTP there, I have to analyse the time intervals between ENSO events (you know they are well defined) and then check if these intervals are LTP. This cannot be done satisfactorily because we have far fewer than 100 intervals. I said in my first response to you already that, since we are scientists and not philosophers, we have to be specific; otherwise there will be no progress in this complex field.
Now, Bart and Rasmus, what do we understand as natural fluctuations of the climate? For sure not climate without natural forcings. Natural forcings belong to the climate and make the persistence law right (see our 2004 GRL). I wrote already before that climate is highly complex, linear and nonlinear, and everything is interwoven. The natural forcings are not responsible for external deterministic trends, just for the persistence, and one does not need to ask what the single contributions of the various forcings to temperature changes are, since everything is interwoven and probably impossible to separate. It is very naive to think that mathematical techniques could or should be able to distinguish between the various forcings. So, the natural forcings together with the unforced climate system (which is certainly not white noise) make the ups and downs of temperatures which you can easily see when you look at the data with a sliding average, and we do not actually need to specify where they come from. These ups and downs that look like mountains and valleys on all time scales are synonymous with LTP. I find it intriguing that we can classify these fluctuations, in a very satisfying manner, by a single number, the Hurst exponent or correlation exponent. Some climatologists, due to a lack of mathematical understanding, think that these fluctuations can be described by an AR1 process, but this is simply nonsense and in disagreement with all the facts.
Now, in addition to the natural forcings, there are anthropogenic forcings, mainly by GHG but also urban effects must be considered. The question is now, what is the effect of these forcings on temperature. This is a highly relevant question and regards the climate sensitivity. I am not expert in this, but from my colleagues who are experts I learnt that this is a difficult and not fully settled question, in particular when the temperature evolution of the past 15y are considered.
A PRAGMATIC way to approach this problem is what we are doing. We assume there are natural fluctuations (among others driven by the natural forcings) and a probably anthropogenic monotonously increasing part which is kind of deterministic and which we call external trend. You see, instead of vague speculations and philosophical discussions that lead to nowhere we put this simple assumption. This pragmatic procedure is not unusual in climate science; it is actually being used in most articles that are concerned with temperature increases and significance estimations, and you should be aware of it.
Now, the big mistake made by our colleagues is that they used IMPERFECT methods like you do, namely the ACF or the power spectrum, to conclude that the natural fluctuations, defined in the way above, can be described by an AR1 process. Then they used known mathematical techniques to estimate the significance of a trend. This crucial mistake also appeared in the IPCC report, since the authors were, unlike you after reading our papers, not aware of the LTP of the climate. They assumed STP and thus got the trend estimations wrong by overestimating the significance. Our 2009 GRL and 2011 PRE show how to do these estimations for LTP records right.
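The claimed overestimation of significance under an STP assumption can be quantified with the effective sample size. For fGn the variance of the n-year mean scales as n^(2H-2), so n correlated values carry the information of only n^(2-2H) independent ones (a standard long-memory result; the sketch below is plain Python with illustrative numbers, not the method of the papers cited):

```python
def eff_sample_size(n, H):
    # number of independent observations carrying the same information
    # as n values of fractional Gaussian noise with Hurst exponent H
    return n ** (2.0 - 2.0 * H)

n = 100  # e.g. a century of annual values
for H in (0.5, 0.65, 0.8):
    print(H, round(eff_sample_size(n, H), 1))
# H = 0.5 (white noise): 100.0 effective values
# H = 0.65 (typical observed): about 25
# H = 0.8: about 6, so naive confidence intervals are roughly 4x too narrow
```

This is the arithmetic behind the statement that assuming STP overstates trend significance.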
Now I am getting tired. I hope you will enjoy our articles,
Thank you Paul, I like your idea about the role of the volcanoes.
Arthur, thank you for your comment. I think I answered it in my quite lengthy reply to Rasmus. Please have a look at it. DFA and all the other methods can only eliminate monotonous trends. We now prefer DFA2 because it eliminates linear trends in the original data, together with WT2. If you are more interested, have a look at our 2001 Physica A paper or the SI in our 2013 Nature Climate Change.
“[I]n addition to the natural forcings, there are anthropogenic forcings, mainly by GHG but also urban effects must be considered. The question is now, what is the effect of these forcings on temperature. This is a highly relevant question and regards the climate sensitivity. I am not expert in this, but from my colleagues who are experts I learnt that this is a difficult and not fully settled question, in particular when the temperature evolution of the past 15y are considered.
A PRAGMATIC way to approach this problem is what we are doing. We assume there are natural fluctuations (among others driven by the natural forcings) and a probably anthropogenic monotonously increasing part which is kind of deterministic and which we call external trend.”
Demetris said earlier:
“[H]ow do we know about “GHG-driven trend”? Because the climate models tell us so. But there is a problem here as they did not predict the “warming ‘hiatus’ during last decade” […] Those who believe that climate evolution can be described in deterministic terms, should have provided us with models that predict the ‘hiatus’ in deterministic terms.”
Do you agree with this statement? If not, why not? If you do agree, could you also comment on the two studies I cited in my above comment to Demetris (Guemas et al 2013; Meehl et al 2011)?
I note that the editors initially laid out a series of questions which have mostly been indirectly answered in the discussion here, but I haven’t seen direct answers. Let me repeat the questions and give my view on where we are at this point:
– which is two questions, and the phrase “climate change” isn’t defined (“climate always changes”!). I think all are agreed on at least roughly the meaning of LTP – power-law decay of (detrended or otherwise filtered) correlations in observed quantities. It is relevant for detection of climate change simply because a number of different climate-relevant datasets show some form of LTP, although the power laws (e.g. Hurst exponents) differ between differing observables for reasons that are not at all clear. The existence of LTP has to be factored into analysis of the significance of any deviations in observables from historical patterns, as it changes the underlying statistics, at least in principle. However, as Rasmus points out, trying to detect climate change from observations properly should include ALL the observations – there are many things besides air surface temperatures, all with trends pointing to recent warming. If LTP decreases the significance of one or two of these, it doesn’t make much difference to the overall picture – a proper Bayesian analysis is probably the best approach taking into account all the evidence, if we really want to decide whether or not we have enough evidence to “detect” climate change.
Again two questions. On the first, Rasmus Benestad answered no, I’m not sure on the others. To me “detection” should be based on Bayesian arguments that should include any other explanatory factors we have (including physics), so no, it can’t be just statistics. For example, global temperature trends are much clearer if you factor out the phase of the ENSO cycle from annual temperature values; just looking at the statistics of temperatures without including our knowledge of ENSO (or volcanos, or other recent forcing factors we can measure) is throwing out useful information. So, if you can “detect” purely with statistics, great, but “detection” ought to include all that we know, if there’s any uncertainty from the pure statistics side.
The second question goes right to the heart of this debate we seem to have been having in the comments on “forced vs unforced” change. I note the editors specifically mentioned this regarding attribution at the start:
But there’s been essentially no further discussion of attribution in the comments. It seems very clear from what Bunde and Koutsoyiannis have stated that the observed LTP is *NOT* a sign of unforced (internal) variability, but convolutes both unforced and historical forced changes. There is no way to get around that. In fact, from the discussion it looks like volcanic activity is a major component of the observed LTP in temperatures. So an LTP statistical model bears *NO* relation to our understanding of internal variability.
LTP has been argued for here, but I’m afraid this is very unconvincing because present-day forcings and conditions like ENSO are not being included in the analysis. See above discussion on “detection”. If you insist on using a purely statistical model for detection you’re tying your hands behind your back – but I suppose some people like to do this.
See above discussion – I don’t think any meaningful inference can be made about LTP regarding internal variability for Earth as a whole because the external influences largely drive changes in the variables and are simply not being controlled or accounted for in the statistical analysis. For any time series that is properly controlled, for a Hurst exponent of 0.65 or so to be distinguished from a random walk 0.5 value, you would likely need a range of at least 3 orders of magnitude in time scales – that is, 1000 years for annual data (though that depends on how uncertain/variable the data is too). I’m doubtful of any claims regarding shorter time series than that unless the observed exponent is a lot higher.
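The point about the record length needed to pin down a Hurst exponent can be illustrated with a simple aggregated-variance estimator (a sketch of my own, assuming Python with NumPy; this is not DFA, and the block sizes and record lengths are illustrative). The spread of the estimate shrinks markedly as the record lengthens:

```python
import numpy as np

def hurst_aggvar(x, block_sizes):
    # aggregated-variance estimator: Var(block mean) ~ m^(2H-2)
    v = []
    for m in block_sizes:
        nb = len(x) // m
        means = x[:nb * m].reshape(nb, m).mean(axis=1)
        v.append(means.var())
    slope = np.polyfit(np.log(block_sizes), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(11)
blocks = [4, 8, 16, 32]
# white noise (true H = 0.5) at two record lengths
short = [hurst_aggvar(rng.standard_normal(256), blocks) for _ in range(200)]
long_ = [hurst_aggvar(rng.standard_normal(16384), blocks) for _ in range(200)]
spread_short, spread_long = np.std(short), np.std(long_)
# on short records the estimator scatters widely; long records pin H down
```

This is consistent with the argument that distinguishing 0.65 from 0.5 on short, uncontrolled series is precarious.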
This question seems to assume positive answers to previous questions, which I don’t think are justified. Nevertheless it’s been answered several times here that some time series do show significant warming even when including LTP.
Benestad I believe answered this qualitatively:
Koutsoyiannis showed a number of graphs that appear to agree:
Bunde doesn’t seem to have addressed this question, only looking at trends (and not at combined land-ocean, but separately at ocean and land temperatures); however he stated a high degree of significance in the warming trend over land.
This is an odd question which I don’t think anybody addressed – maybe because all agree that climate change IS “detected” by these statistical methods after all. Maybe the editors want to rephrase it, given the answers provided for the others…
Thanks for your comment, for pointing out these publications and for your questions. I may not be the right person to judge these publications; for example my knowledge about deep-ocean heat uptake is zero. In addition, my university does not subscribe to Nature.Com journals and I do not have access to these publications. So, my reply will be general.
In brief, based on the information you give about these studies, my answer to your question “Is this the kind of prediction you’re asking for …?” is negative. I believe that retrospect studies are useful to explore possible explanations of observed phenomena. But it may be dangerous to believe that a skill in retrospect explanation (I would not call it “prediction”) should imply a prediction skill. I will give a very relevant example from a report by Philip J. Klotzbach and William M. Gray, entitled “Extended Range Forecast of Atlantic Seasonal Hurricane Activity and Landfall Strike Probability for 2012” (7 December 2011). The abstract reads (emphasis added):
Also, a retrospect explanation that seems to have some skill in numerical terms does not imply that the explanation given is correct. Infinitely many models could be fitted, with good performance, to a limited data set in retrospect. Even in blogs and in news we often (perhaps on a monthly or weekly basis) see diverse model fits that explain the global temperature evolution based on various explanatory variables (of solar, atmospheric or ocean origin), or explain various other phenomena based on global warming. In some cases we also see future predictions based on these models. I will not criticize any specific of them; rather I will give a funny counterexample: if you google “proof of global warming”, you will find images implying global temperature being an “explanatory variable” of a hilarious “dependent variable”. I believe there may indeed be significant correlation between the two variables this counterexample refers to, but of course this is just a joke and is put as such, I guess. On the other hand, there is no shortage of studies of similar type but pretending to be serious and claiming causative relationships between global warming and numerous aspects of nature and life.
I think that to take an analysis of this type seriously, a minimum prerequisite is to contain validation of the hypothesis made. I have explained this in my initial comment to Rasmus, with respect to his model presented in his Figure 1. I am copying here what I wrote to Rasmus, also adding hyperlinks to the references I used for your convenience.
So, since you have seen the studies you point out, may I ask you two questions? Have these studies followed a split-sample technique, with a separate validation period? Do they also provide a future prediction to enable validation/falsification in a few years from now? If the replies to both questions are positive, then the papers are worthy of respect. Whether or not they tell the truth is another issue; we’ll know it later.
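The split-sample technique asked about here can be sketched in a few lines (assuming Python with NumPy; the synthetic series, trend rate, and noise level are hypothetical). The model is calibrated on the first part of the record only and then judged on the held-out remainder against a naive no-change baseline:

```python
import numpy as np

def split_sample_check(series, split=0.5):
    # calibrate on the first part of the record, validate on the rest
    n = len(series)
    k = int(n * split)
    t = np.arange(n)
    coef = np.polyfit(t[:k], series[:k], 1)      # model fitted on calibration data only
    pred = np.polyval(coef, t[k:])               # out-of-sample prediction
    rmse_model = np.sqrt(np.mean((series[k:] - pred) ** 2))
    rmse_naive = np.sqrt(np.mean((series[k:] - series[:k].mean()) ** 2))
    return rmse_model, rmse_naive

rng = np.random.default_rng(5)
t = np.arange(120)
series = 0.01 * t + 0.2 * rng.standard_normal(120)   # hypothetical trending record
rmse_model, rmse_naive = split_sample_check(series)
# a model that captures something real should beat the naive baseline out of sample
print(rmse_model < rmse_naive)
```

A retrospective fit that has never been scored on held-out data in this way gives no evidence of predictive skill, which is the distinction drawn above.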
Thanks for your reply and your question. Since I also only have access to the abstracts, I passed your question on to Virginie Guemas herself.
I hope she’ll reply. I did find the first page of her paper on the net, where she talks a bit about her methods:
Maybe this can already (partly) answer your question?
The full paper of Meehl et al 2011 is here:
But I suppose this one is less relevant to your question, since they only studied projections for the rest of this century?
For your information: I’m just an interested layman, not a professional scientist, so the technical details of this discussion are beyond my comprehension. I’m only trying to understand the implications of the differing contributions as well as I can, so that’s why I ask my questions.
Apart from your really off-topic comment of May 8, 2013 at 7:29 pm, which gives, however, some nice insight in your motivation, I would like to respond to your on-topic comment of May 8, 2013 at 7:57 pm. (which I consider as a reply on my comment of May 5, 2013 at 12:59 pm.)
It really surprised me that you didn’t get my point. You have a statistical method to analyze time series and applied it to the earth’s climate. I don’t think anyone is disputing that this is one way of looking at the behavior of the climate system. There are, however, more ways to investigate climate, e.g. considering physical laws (and yes, this also includes fundamental physical properties which can only be expressed in terms of distributions or probabilities – but again, this is fundamentally different from the statistical analysis tools you are talking about).
So, please, consider your method as one of the pieces of the (climate) puzzle. The more pieces you gather, the clearer view you get of the complete picture of the puzzle. In other words, if you have more independent information you can exclude (some) options you had with just one piece/method. I think in the public comments Arthur Smith gave an excellent example to illustrate how more information can lead to more constraints (https://mwenb.nl/long-term-persistence-and-trend-significance/#comment-454). He stated:
In my view (and relevant for this discussion) this means that if you describe the movements of planets and sun in terms of mathematical formulae, you can do that with any assumption on the center of rotation. If you add physics (here: Newton’s laws of gravitation), there is only one possibility left: all planets rotate around the centre of gravity (which happens to be inside the sphere of the sun).
Let’s return to the climate system. You say that in order to reveal the behavior of the climate system it is sufficient to consider the climacogram, because all the physics is contained in the investigated time series. In my view this is analogous to saying that all the physics is contained in the movements of the planets and the sun. In other words: the results of the climacogram can be considered the epicycles of climate. We should be looking for physical laws to limit the possibilities.
In order to clarify possible differences in view, I would very much like you to briefly answer the following questions:
1. Do you claim that your statistical tool (climacogram) has a similar status as statistics derived from fundamental physics? Or in other words that the climacogram is a fundamental property of nature?
2. Do you recognize that the climate system can be externally forced? For instance if the sun becomes brighter then it will affect the global energy balance?
3. Do you agree that if there is change in the energy balance then physical processes in the climate system will be influenced? And more specifically, these might affect global mean temperatures?
4. Do you agree that time series of global mean temperature have deterministic as well as chaotic elements?
5. Do you agree that by including physical constraints you get a better picture of the behavior of the climate system than by considering the climacogram alone?
6. As you confirmed that information is lost in the climacogram as an analysis tool (as I showed, this concerns information on the signature of the investigated time series, e.g. no distinction between fluctuations and increasing signals), do you agree that adding physical insights may compensate for the lost information?
Arthur’s example of epicycles is an excellent one, so thanks for drawing attention to it again. First, if I remember correctly, the epicycle model is not only one of the ancient world; even Copernicus, who revived Aristarchus’s (3rd century BC) heliocentric model, used epicycles in his own.
It is useful to consider why this model was introduced and prevailed for so many years. It is usually asserted that the reason was that the ancient Greeks regarded the circle as a perfect shape, so that Nature could not follow anything else. This may be part of the truth but not the whole truth. Now, from the information we have about the Antikythera Mechanism, the ancient Greek analog computer used to model the planetary motions and eclipses, we can infer that this very model may have affected the physical insight. For it is relatively easier to materialize epicycles using gears (the constituent elements of the Mechanism) than to materialize ellipses.
Whatever the reason for the prevalence of epicycles, a metaphysical view about Nature or an effect of the then available computer model, the example teaches us not to develop fixations about Nature, nor to adhere to available models.
Now my replies to your questions:
No, the climacogram is not a fundamental property of nature. Variability and change are. The climacogram is a stochastic means to describe them.
Furthermore, what you call “statistics derived from fundamental physics” may be the other way round, i.e. physics derived from fundamental statistics. But it is even better to say it in a more symmetric manner: The combined use of fundamental physics and fundamental probability enables description of complex physical systems. For example, if you take the principle of maximum entropy in its pure probabilistic formulation and the laws of conservation of momentum and energy, then you get a convenient and incredibly simple description of the pressure and temperature in your room.
Of course it can. But the global energy balance depends also on the internal dynamics.
Of course it will. But the internal dynamics are able to cause change as well.
No, I do not agree. Deterministic and chaotic elements are not a dichotomy; better to say deterministic chaotic. Deterministic chaotic elements do not exclude a probabilistic description thereof; rather they necessitate it. Once again, see my Random Walk on Water.
If the solution violates a physical constraint, yes, you should include it in a second step. But if your solution respects the constraint, then by explicitly adding it you won’t gain anything. You will find the same solution. For example, if you use the principle of least action to derive the equations of motion of a body, it is not necessary to include the conservation of mechanical energy; rather the latter will be derived as a result of the least action.
Physical insights are always welcome, but it depends on what you call physical insights. As a counterexample, in hydrology there used to be a view that by cutting a catchment into numerous pieces and applying first principles on each piece you would be able to make a model that needs no data or calibration. This reductionist thinking, which was named “physically-based modelling”, is now receding, as it was gradually understood that first principles alone cannot provide a decent model and that the smaller the pieces, the bigger the requirements for data and calibration.
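As a sketch of the kind of result alluded to earlier about maximum entropy and conservation laws describing the pressure and temperature in a room (my own reconstruction, not necessarily the exact formulation Demetris has in mind): maximizing entropy subject only to normalization and a fixed mean kinetic energy yields the Maxwell–Boltzmann velocity distribution.

```latex
\max_{p}\; S[p] = -\int p(\mathbf{v})\,\ln p(\mathbf{v})\,\mathrm{d}\mathbf{v}
\quad\text{s.t.}\quad
\int p\,\mathrm{d}\mathbf{v} = 1,\qquad
\int \tfrac12 m\lVert\mathbf{v}\rVert^{2}\,p\,\mathrm{d}\mathbf{v} = \tfrac32 kT
\;\;\Longrightarrow\;\;
p(\mathbf{v}) = \left(\frac{m}{2\pi kT}\right)^{3/2}
\exp\!\left(-\frac{m\lVert\mathbf{v}\rVert^{2}}{2kT}\right)
```

Momentum conservation at the container wall then gives the ideal-gas relation p = nkT between pressure and temperature.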
Just a short comment on the interesting review by Arthur Smith.
Regarding the length of a record that you need to distinguish between white noise and LTP with H=0.65: for this simple distinction, you certainly need much less than 500 data points, which is about 40 years of monthly data. It is much more difficult to distinguish between LTP and an STP process. We have found recently that 600 data points (50 years of monthly data) are sufficient when using DFA2. The larger the Hurst exponent, the easier the distinction.
Regarding ENSO and La Niña and other cycles: these are not LTP processes but also contribute to LTP. When analysing temperature data you only eliminate the seasonal cycle, not the others.
Regarding the warmest years: The estimation of the probability has been given by Zorita et al. quantitatively, as I wrote earlier.
Regarding deterministic trends:
In the global temperature, for example, the trend is highly significant on both 50y and 100y scales. The Hurst exponent here is close to 1.
You can find these values in our 2011 PRE and in our 2012 review.
Armin Bunde said May 10, 2013 at 5:24 pm
“Regarding the length of a record that you need to distinguish between white noise and LTP with H=0.65: for this simple distinction, you certainly need much less than 500 data points, which is about 40 years of monthly data. It is much more difficult to distinguish between LTP and an STP process. We have found recently that 600 data points (50 years of monthly data) are sufficient when using DFA2. The larger the Hurst exponent, the easier the distinction.”
This, of course, is a strong argument against placing much value on any single one-dimensional time series; such a series, while it is the basis of many statistically based arguments which have been published, is merely one averaged output from a physical model. Serious attribution studies require judgements about the entirety of the outputs. Of course, no GCM is perfect, but one gains confidence in the averaged results, such as global temperature time series, by looking at the many well-known results which emerge from the GCMs, such as circulation patterns. No such thing can be said about one-dimensional statistical models. Certainly one can find multiple statistical explanations for just about any one-dimensional time series of finite length. (Also, it is well known that the global temperature records do not display white noise, so that is a bit of a red herring.)
Thank you for pointing out these references to me. I am not an expert in this topic, but I find the arguments in the Nature paper convincing.
Regarding GHG I may not fully agree with Demetris: we cannot show in our analysis of instrumental temperature data that GHGs are responsible for the anomalously strong temperature increase that we see and find to be significant, but it is my working hypothesis.
Armin, thanks for the clarification.
In reply to Demetris’ question Virginie Guemas writes:
“We use a general circulation model which comprises millions of lines of code and thousands of parameters. Each parameter is tuned on a restricted observation campaign and then we run the model and see how well it performs. In this sense, we apply some technique similar to the split-sample but we actually underdetermine contrary to the split-sample technique.”
She’s working on predictions for the future, but expects them to not be very good yet.
Maybe Demetris has further comments after reading the paper that Guemas was kind enough to send us a copy of?
Today I stumbled upon a paper published this week in Digital Signal Processing, which I found on-topic: Navarro et al. (2013). Some may find this paper too technical, statistical, or even off-topic. Till now, I generally avoided being too technical in my comments, but, since Armin has raised several technical issues, this gives me the opportunity to speak about a few of them.
The above paper supports what Armin had said about the appropriateness of the DFA method for identifying LTP properties. More specifically, the paper studied lognormally distributed data and concluded that three methods had best performance, namely DFA (detrended fluctuation analysis), DWT (discrete wavelet transform) and LSSD (least squares based on standard deviation). Quoting from the abstract:
Coming to what Armin has said about appropriate methods for identifying LTP and estimating parameters, I fully agree with him that the empirical autocorrelation function distorts the LTP properties and should be totally avoided. The reason is that empirical autocorrelation is highly biased as shown in my 2003 paper and graphically illustrated in slide 15 of a 2010 presentation.
I also agree with Armin that the periodogram/empirical power spectrum is not an appropriate method. The reason is that it is too rough (spiked), whereas common smoothing techniques distort the information. For those who are familiar with spectra and prefer to view phenomena in the frequency domain, I have recently (2013) proposed a pseudospectrum based on the climacogram, which has similar (or the same) asymptotic slopes as the spectrum while avoiding its caveats.
My colleague Hristos Tyralis and I have also tested roughly all of the related statistical methods and reported our results in a recent (2011) paper. Indeed DFA did not perform badly, as shown in our Table 1 (we use the name “Var. of residuals”). However, it is not one of those we recommend (sorry about that, Armin). As we show in this paper, the Hurst parameter and the standard deviation are correlated with each other and therefore cannot be estimated separately. Thus, the method of preference should respect this correlation. DFA does not have this property and treats the estimate of the standard deviation as if it were unbiased, when in fact it is highly biased (I mentioned this problem above in my first comment to Armin, asserting that it has also affected their results in Rybski et al.).
The three methods we recommend in Tyralis and Koutsoyiannis (2011; Table 2) fully account for the interdependency of the parameters and also have the best performance. Not surprisingly, the maximum likelihood method (as streamlined in the paper in a fully analytical manner, without approximations) provides the best estimates. Yet it has three caveats: (a) as a fully parametric method, it depends on the marginal distribution function of the process, (b) it is computationally demanding, and (c) it does not provide graphical diagnostic means to assess the model’s suitability. The first problem can be tackled easily by normalizing the data (by an appropriate nonlinear transformation) before application. This will deal with the spiked patterns mentioned by Navarro et al. (2013); for example, for their lognormal data one should first apply a logarithmic transformation.
Such normalization is also advisable (albeit not necessary) for the next two methods, both of which are based on the climacogram: the LSSD (already mentioned) and the LSV (least squares based on variance). These two are almost equivalent; they are simple, economical and fast: they use no concept beyond the standard deviation or variance, respectively, whose statistical behaviours are well known. They are also transparent. Thus, they provide a diagnostic tool, the comparison of empirical and theoretical climacograms, which is very easily constructed.
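Since DFA is discussed repeatedly in this thread, a minimal sketch of the method may help readers follow the debate (my own illustration, not the exact implementation used by any discussant; the series length and scales are arbitrary choices). Applied to white noise, the log-log slope of the fluctuation function should come out near 0.5:

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis (DFA of the given order).

    Integrate the series, split it into non-overlapping windows of
    length s, detrend each window with a polynomial of the given
    order, and return the rms fluctuation F(s) for each scale s.
    """
    y = np.cumsum(x - x.mean())          # the "profile"
    F = []
    for s in scales:
        nwin = len(y) // s
        t = np.arange(s)
        sq = []
        for i in range(nwin):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # white noise: exponent 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
# The log-log slope of F(s) estimates the fluctuation exponent
# (equal to H for a stationary LTP process).
H_est = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For an LTP series the same slope would exceed 0.5, which is the distinction Armin's record-length remarks refer to.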
Lennard, since you quoted Virginie’s reply, for completeness I am posting my reply to her (also copied to you).
You make an assertion about climate models with which I disagree:
Often, people compare the observed global mean temperature with the average of the results from many different climate model simulations, which are not corresponding quantities. That would be like comparing this year’s spring temperatures with climatology – of course you’d expect the day-to-day values to fluctuate about the normal values. And likewise you’d expect the year-to-year variations in the real world to fluctuate about the slow trend due to ‘noise’ – or internal variations driven by the system’s non-linear dynamics.
The paper by Easterling and Wehner (2009; GRL; DOI:10.1029/2009GL037810) provides a good discussion on this topic. We can also examine the 10-year interval from 92 CMIP5 simulations (RCP4.5), and we see that there are indeed some models which indicate decades over which the global mean temperature does not increase. This is explained further on Realclimate.org.
The figure above provides an example, where the black line is HadCRUT4. The red lines are the model simulations for which the temperature increases over 2002-2012, whereas the blue ones show those which decrease.
I must state that it’s fairly meaningless to use such short intervals to say anything about long-term trends – just as Easterling and Wehner state.
Perhaps, and this may be because all the LTP is due to external forcing. But you have not demonstrated this, Demetris, and I’m not so sure that you’re right. The models do after all embody a non-linear dynamical system which simulates slow variations due to ocean-atmosphere coupling. We also know that they simulate chaotic weather.
I do not need to rely on climate models, but they are handy for testing the different hypotheses. We know that these models do have some merit in e.g. predicting the ENSO phenomenon, and they reproduce most of the phenomena that are observed in the real world.
Another way to shed more light on our differences is for you to explain what mechanisms are involved in setting LTP in nature. If you cannot pinpoint the exact physics, I will regard the statement as speculations rather than facts.
I think the following quote reveals that we are on different wavelength:
Please read the paper by Easterling and Wehner which I cited above. I do not think you have grasped the tacit understanding that the climate evolution is due both to chaotic variations and to forced long-term trends.
I agree there is a degree of stochasticity in our climate, but there is also a substantial degree of determinism, depending on your scientific question and the scales you look at. For instance, I can confidently say that the mean December-February temperature in Oslo will be substantially lower than the June-August mean in 2015. But I cannot yet say what weather we will get on July 14 this year. These two statements represent two different scientific questions, and both are quite trivial. Nevertheless, they can illustrate the fact that our climate is not just stochastic, and that this observation is supported by real measurements.
Please allow me to comment on this quote:
The idea that anything that violates the laws of physics is ‘unphysical’ is fairly straight-forward. Measured data are not unphysical, but the assumptions you make when analysing them may be inconsistent with physics. You can always find a mathematical framework for fitting a set of data, but that does not mean that this mathematical framework represents meaningful physics. One example is Fourier expansion and the Dirichlet condition.
In our situation, the global mean temperature does not exist in isolation, but is one aspect of a more comprehensive climate system whose processes are interconnected. Moreover, the temperature is a measure of heat and plays a role in energy fluxes and evaporation. There is much more relevant information about the global mean available, and I think you reach misguided conclusions just by looking at the LTP behaviour of the time series and neglecting all the other knowledge. You cannot just look at the statistics, but need to consider both statistics and physics. You also need to consider other independent measurements.
LTP… long-term persistence… a highly important topic! We can agree on the following:
(1) The longer the time frame over which the model is persistent, the better the model quality. What does “long” mean? A millennium is insufficient, because centennial cycles may be excluded; identifying these requires at least 2, or better 3, millennia. Models based on a 150-year time span (1750-2000) are pure hogwash when claiming LTP for millennia…
(2) The idea of applying statistics in order to filter out a trend WITHOUT knowing the underlying driving forces of the trend is a dream of statisticians… for example the recent Marcott et al. paper, trying to filter a hockey stick out of spaghetti graphs:
http://i49.tinypic.com/lbogh.jpg or http://47.tinypic.com/2uylgh3.png
This is not even pseudoscience, but the lowest of the low…
The real climate drivers are the five macrodrivers, as given in the paper in (3).
(3) The notion of a linear trend line… more hogwash. There cannot be a linear trend in climate; see 27-37 ka BP or the Holocene temperature evolution.
The temperature evolution is not linear but curvilinear.
(4) A non-linear, CURVILINEAR trend is THE real temperature trend. For this reason, the present temperature plateau of the last 15 years is in force. Not in force is the linear trend of the 20 years 1980-2000, which does NOT continue with a 0.2 C increase per decade, as claimed by the IPCC in AR4, WG1, chapter 2. The IPCC claim of a 0.2 C increase per decade is an outright LIE by climate villains.
(5) The real trend is sinusoidal, as shown in the above-quoted paper.
(6) Long-term persistence analysis over the ENTIRE Holocene shows the detailed natural 60-year Nicola Scafetta cycle for over 10,000 years. Fabricated hockey-stick trends exclude this most important 60-year cycle (described today as a quasi-cycle of the PDO and AMO cycles).
Therefore, linear temperature trend lines are pure climate confusion.
Best regards, JSei.
Thank you for your reply. I’d very much like to read your paper ‘Long-term correlations in earth sciences’, but it’s behind a pay-wall (even though Norway is a rich country, that does not mean that science is awash with money). Could you please make it available for us to read?
I see that you have a great number of papers, but I also expect that you should be able to explain your points without me and others having to read all of them – please remember that we have many other things to do, and as long as you have not convinced us that LTP is ‘the magic bullet’, you cannot expect others to spend all their time following your example. Also, I think you underestimate my competence – just because we come from different angles. I do appreciate the mathematics, and I do have an understanding of the meaning of the Hurst exponent.
I notice that your position is:
My take on this is that it depends on your scientific question/hypothesis. Also, we do have climate models and can carry out numerical experiments to explore the different effects. And if there is a deterministic response, you can use regression techniques to look for ‘fingerprints’.
One of your comments caught my interest:
I think it’s well appreciated that extreme events often come in clusters; you can for instance see the flood marks and the years of flooding in old English towns. An alternative explanation is that the weather tends to follow a strange attractor (low-level chaos), which too leads to such clustering. Hence, implying the possibility that LTP lies behind the clustering does not exclude other explanations.
I presently think one major weakness in your reasoning is
This cannot be true if the weather evolution is chaotic, where the weather system loses the memory of the initial state after some bifurcation point. You also need to examine the Lyapunov exponents to compare with alternative theories.
Another weakness may arise in running Monte-Carlo simulations for LTP processes, as you assume that the random number generators are perfect. They have improved substantially over the recent years, but I’m not sure if they are free from generating their own patterns.
You do not always need sophisticated mathematics (I do like the math) to spot profound differences in the auto-correlation C(s). And we can look at other data than the global mean temperature, for which we expect a forced trend to be present. For instance, we do not expect there to be a trend in the sea level pressure (SLP), but we do expect that it should exhibit similar internal variability to the temperature (at least on regional scales).
Another experiment can be to look at the different components of the global mean temperature. If we use standardised values and look at the hemispheric differences, we expect to see variations caused by geographical shifts and ocean over-turning. We can also examine the difference between the tropics and the higher latitudes, as in the lower panels below. We see that C(s) changes profoundly when we look at these geographical differences, rather than the global mean for which we expect a trend. Also note the strong fluctuations in the early part of the record, which are due to smaller data coverage (a higher degree of statistical fluctuation). Thus, this geophysical record is probably not homogeneous.
Whatever. Also this and this
I think there are still a number of fundamental misunderstandings at the moment of the position of those who favour LTP type natural variability. I suspect I will now add to the confusion, but I’ll try and explain as best I can my view of it.
Firstly, with regard to determinism, stochastics and chaos. Systems exhibiting chaotic behaviour are intrinsically deterministic – they follow a trajectory, and the deterministic trajectory can be measured after the event (and even predicted a very short way out). But the extreme sensitivity to initial conditions and the exponential divergence of trajectories arbitrarily close to one another in phase space mean that beyond a time horizon we cannot deterministically predict the trajectory of the system. This uncertainty of the future is the essence of why randomness arises from deterministic systems.
We can characterise this uncertainty using stochastic, rather than deterministic, modelling. But the stochastic modelling is not “just statistics” – it is imprinted on the very physics of the system, through the chaotic solution to the equations governing the system in question. And again, my signal processing background means I tend to prefer to picture this as a power spectral density function, although others prefer autocorrelation functions, they amount to the same thing – a picture of the uncertainty.
This uncertainty is what we represent by the stochastic function. Rob asks where we see deterministic and chaotic behaviour; of course the answer is that climate exhibits both of these simultaneously, as the two are related. Rasmus asks if it is possible LTP proponents are mistaking LTP for chaotic behaviour; but the point of LTP is to characterise the unpredictable component of the chaotic solution to the equations governing the system.
The discussion has been interesting, but I think the discussion is being severely held back by these misunderstandings. It is only natural that such misunderstanding/miscommunications happen, but I think it is necessary to resolve them before the discussion can move forward; as Demetris notes, his lecture and associated paper “A Random Walk on Water” may help to bridge some of these gaps.
I have an additional comment on the discussion earlier regarding ENSO, but it touches on a point made by Rasmus in a recent comment. I once again will selfishly discuss stochastic models from a power spectral density (PSD) perspective, as this is more intuitive to me due to my signal processing background.
A brief recap of the PSD functions of different stochastic models: white noise has a PSD independent of frequency. STP or autoregressive systems have a flat PSD up to a characteristic frequency, then a decaying spectrum beyond it. LTP processes have a PSD inversely proportional to frequency, i.e. 1/f^a with a up to 1. A random walk has a PSD of 1/f^2.
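These spectral signatures are easy to check numerically. Here is a sketch of my own (the segment count and series length are arbitrary choices): white noise should give a log-log periodogram slope near 0, and a random walk a steeply negative slope, near -2 at low frequencies.

```python
import numpy as np

def bartlett_psd(x, nseg=32):
    # Average the periodograms of nseg non-overlapping segments
    # (Bartlett's method) to reduce the scatter of a raw periodogram.
    m = len(x) // nseg
    acc = np.zeros(m // 2)
    for i in range(nseg):
        s = x[i * m:(i + 1) * m]
        s = s - s.mean()
        p = np.abs(np.fft.rfft(s)) ** 2 / m
        acc += p[1:m // 2 + 1]            # drop the zero frequency
    return acc / nseg

def loglog_slope(psd):
    # Slope of log(PSD) vs log(frequency index); the index is
    # proportional to frequency, so the slope is unaffected.
    f = np.arange(1, len(psd) + 1, dtype=float)
    return np.polyfit(np.log(f), np.log(psd), 1)[0]

rng = np.random.default_rng(42)
n = 2 ** 16
white = rng.standard_normal(n)            # PSD ~ f**0
walk = np.cumsum(rng.standard_normal(n))  # PSD ~ f**-2 at low f

slope_white = loglog_slope(bartlett_psd(white))
slope_walk = loglog_slope(bartlett_psd(walk))
```

An LTP series would sit between the two, with a slope between 0 and -1.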
People have noted that ENSO does not appear to exhibit LTP. I would agree with this, but only because of the way it is analysed: the analysis itself forces LTP data to become STP. Let me use the classic ENSO index, the SOI, as an example. It is the difference between the sea level pressure at Darwin and Tahiti.
An important feature of ENSO is that even though models can create ENSO-like behaviour, ENSO is not deterministically predictable. A valiant effort was made a few years ago to make predictions of ENSO in the 3-6 month time window; and I am most grateful to the scientists involved for standing up and admitting that their predictions were without skill beyond a naive baseline. So, a stochastic model is the only meaningful model; but what are we modelling, and what should we use?
As discussed, LTP proponents argue the internal dynamics of the climate system exhibit LTP behaviour, and sea level pressure is no different. And it is also important to understand that the LTP exists both spatially and temporally. So, we expect a PSD of (1/f^a) where a is greater than 0 and up to 1. But what happens when we take two samples from a system and difference them?
From a signal processing perspective, a difference like this is the simplest form of high-pass filter. Such a filter will have a spatial bandwidth governed by the separation of the points. Fluctuations with low frequency will be rejected by the filter, and fluctuations above the filter break point will pass through unchanged. The filter gain characteristics are proportional to the frequency below the break point, and unity above.
An advantage of working in frequency space is that we can determine the resultant power spectral density from the filter frequency response and the PSD of the initial system prior to the filter. Before the filter break point, we multiply the LTP PSD (~1/f) by the filter gain (~f) and we find a constant PSD output from the filter. Above the filter breakpoint, the LTP PSD (~1/f) is multiplied by unity (~1), and we retain the 1/f relationship beyond this point. (NB. I could really do with adding some graphics here to aid this explanation. I may try later.)
The PSD I have described post filter is exactly the PSD of an STP system. But it is the action of the filter – the method of analysis – that has created the STP behaviour, not something that is an intrinsic part of the system. The exact same is true for the NH-SH example Rasmus provides.
So from a first-principles analysis, the result from ENSO and the differencing Rasmus carries out is exactly what we would expect from a system exhibiting LTP. STP requires a single characteristic frequency; this does not occur in nature, but it often arises – perhaps in ways not always intuitive to the user – from the analysis we apply to data from nature. In this case, the single characteristic frequency of ENSO and the NH-SH difference comes from the effective filter we are applying to the data, not from the data itself.
The first question we asked was “What exactly is long-term persistence?”
After two weeks of interesting discussions even this basic question doesn’t seem to be answered satisfactorily.
In our introduction we wrote:
I searched the different blog posts and comments for remarks about what is LTP. Here are some relevant fragments.
Later in two different comments (here and here) Armin wrote:
I agree that it is important to first agree on the definition of LTP. As we can see above a lot of different things have been said about LTP. Armin (and I suppose Demetris agrees) said that LTP is well defined, already by Mandelbrot. Armin and/or Demetris, what is the formal definition of LTP?
Thanks again for the great comments. I fully agree with the first one and, from first glance, with the second too, although I need some more time to assimilate the latter.
To your former comment I wish to add two references, which I think could be very useful for those who wish to go deeper into the stochastic aspects of physics, are not too attached to stereotypes, and can devote some time to reading.
These are two books that provide the mathematical basis for real physics of complex systems. I must notify that they are very dense and need to be read several times for a good result:
Michael C. Mackey, Time’s Arrow: The Origins of Thermodynamic Behavior, Dover, 1992 (a small one: 158 pages).
Andrzej Lasota and Michael C. Mackey, Chaos, Fractals and Noise: Stochastic Aspects of Dynamics, Springer-Verlag, 1994 (a big one: 459 pages).
I would like to elaborate a little more about what you show in your figure 5 and the conclusions you draw from it.
In our introduction we mentioned the IPCC AR4 definition of “detection”: “Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”
Would you say that the method you followed in Koutsoyiannis/Montanari (2007) is your preferred answer to the phrase “in some defined statistical sense”?
Is “detection” purely a matter of statistics for you?
Do your results mean that in your view “detection” has not yet taken place, although it comes close?
If you indeed conclude that detection has not taken place yet, does this mean for you that the effect of GHGs on the climate (temperature) is relatively weak? I ask this with the following in mind: I suppose an external forcing on the climate could be so big or fast that a significant change in e.g. the global temperature is quickly achieved even when you take LTP into account. The impact of a big meteorite, for example, causing major cooling on the earth. So in theory the increase in greenhouse gases could also have this effect, do you agree? Does the fact that, despite the relatively rapid increase in GHGs, there is no significant change yet in the global temperature mean that the effect of GHGs is relatively weak?
As an aside: did you ever analyse the CO2 record? I can imagine that the rise from 280 to now 400 ppm is very significant even when LTP is taken into account (I suppose this time series also has a Hurst parameter of at least 0.6).
In his first comment on your guest blog Demetris wrote:
As far as I know/remember I haven’t seen a response from you on this statement. So I am interested to hear your reaction.
I read in one of your papers that the fact that you find a significant change in the land temperatures does not necessarily mean that greenhouse gases are the cause. It could also be the Urban Heat Island effect for example.
This topic also reminded me of an interesting paper by Compo and Sardeshmukh a couple of years ago. This paper shows that the land warming is following the ocean warming (if I remember correctly, the ocean temperatures were prescribed in their model). This comes close to what Koutsoyiannis is saying here as well: that there is, or should be, a close connection (especially on the longer term) between ocean and land temperatures.
Marcel, you ask:
Actually, I have given a formal definition already. Quoting from my initial post:
So, whenever 0.5 < H < 1 we have LTP. There are other equivalent definitions: replace in the above “variance” with “autocovariance” and “time scale” with “time lag” and you will have one. Another equivalent definition has been offered by Spencer, in terms of the power spectral density (PSD); he said:
where f is frequency and a = 2H – 1.
All these are equivalent. One should have in mind that they refer to asymptotic properties, i.e. they should be valid for arbitrarily large time scale or time lag, and for arbitrarily small frequency. For this reason in more formal writing we use the concept of limit (“lim”) for the definition.
My preference is the definition based on variance (or standard deviation), because it is the most economical and simplest, and because it provides the best interpretation, the one related to variability/change (rather than "memory", etc.).
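As a short numerical sketch of this variance-based definition (an illustration added here, not part of the original dialogue; the scale range and helper name are arbitrary): since StD[x^(k)] ∝ k^(H−1), the Hurst exponent can be estimated from the slope of the climacogram, the log-log plot of the standard deviation of the time-averaged series against the averaging scale.

```python
import numpy as np

def climacogram_hurst(x, scales):
    """Estimate H from the climacogram: the standard deviation of the
    time-averaged series scales as StD[x^(k)] ~ k^(H-1), so on log-log
    axes H = 1 + slope."""
    stds = []
    for k in scales:
        n = len(x) // k
        agg = x[: n * k].reshape(n, k).mean(axis=1)  # averages at scale k
        stds.append(agg.std(ddof=1))
    slope = np.polyfit(np.log(scales), np.log(stds), 1)[0]
    return 1.0 + slope

# White noise has no persistence, so the estimate should be near H = 0.5;
# an LTP series would instead give an estimate between 0.5 and 1.
rng = np.random.default_rng(42)
print(round(climacogram_hurst(rng.standard_normal(10_000), [1, 2, 4, 8, 16, 32]), 2))
```

Note this naive slope estimate is biased for finite records; more careful estimators (as mentioned elsewhere in this dialogue) account for the estimation bias of the standard deviation at large scales.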
More explanations about LTP and change, hopefully in simple words, can be found in my paper Hydrology and Change, just published (today, in preprint format). Anybody who is interested and does not have access to the journal (linked above) can email me for a copy of the preprint.
PS. Here is the abstract of the paper:
Do you accept the formal definition given by Demetris in his latest comment and in his original post?
In your post you wrote:
If the “signal” refers to “manmade climate change”, this suggests that time series before let’s say 1900 only have noise. Is that how you see it?
In his first comments to you Demetris wrote:
At least before there was “manmade climate change”, LTP seemed to be the norm. Do you agree with that?
Also, the examples of Demetris show there hasn't been a huge change in LTP since GHGs started to rise. Do you accept this?
Thanks for asking questions most relevant to the topic of the dialogue. You say:
Definitely yes. I explained the reasons in my introductory post:
As I noted, the method needs some further elaboration to include the uncertainty in the estimation of H.
I would say it is primarily a statistical problem, but I would not use the adverb "purely". Besides, as we wrote in Koutsoyiannis and Montanari (2007), even the very presence of LTP should not be discussed using merely statistical arguments.
Yes, I believe it has not taken place. Whether it comes close: it is likely. I believe the present dialogue should have taken place a decade ago. If LTP had been studied more, we would perhaps know more now. However, as we write in Koutsoyiannis, Montanari, Lins and Cohn, Climate, hydrology and freshwater: towards an interactive incorporation of hydrological experience into climate research (2009):
May I add that detection of a change through statistical significance is not the only thing that matters. The magnitude of the change is even more relevant. The observed climate warming is 0.6°C in 134 years. Assuming that this is statistically significant while a 0.5°C warming is not, does significance make a big difference? Thus, it is important to compare the observed change to what would be a normally expected change.
[Note: It is interesting to see how the observed 0.6°C climate warming in 134 years corresponds to what people have in mind about current warming. My students are about 20 years old. In each of my new classes I ask them the question: how much do you believe the global temperature has increased in the last 15 years (since you started school)? A typical answer is 5°C, with the minimum usually being 2°C.]
Yes, I believe it is relatively weak, so weak that we cannot conclude with certainty about the quantification of causative relationships between GHG and temperature changes. In a perpetually varying climate system, GHG and temperature are not connected by a linear, one-way and one-to-one relationship. I believe climate models and the thinking behind them have resulted in oversimplified views and misleading results. As long as climate models are not able to reproduce a climate that (a) is chaotic and (b) exhibits LTP, we should avoid basing conclusions on them.
No I have not analysed CO2 data. From paleoclimatic graphs I can guess that CO2 concentration exhibits LTP, too, with a high H—as well as that it is correlated to temperature, but not with a one-way and one-to-one relationship. Unfortunately, the time scales of the paleo time series are too broad and the instrumental observations of CO2 are too short; thus coupling the two sources of information is too difficult. But I believe the change from 280 to 400 ppm is significant.
Thank you for your comments. Regarding the review we published last year, it is very unfortunate that it is not available free of charge. I wrote a long article in this book, and am quite unhappy that I also have to pay for the other articles in it. Could you be so kind as to send me your email address? I will bundle a collection of our papers and mail them to you. It would be nice to have a discussion on this fascinating and interesting topic also later, when this Climate Dialogue finishes.
Best wishes, Armin
Dear Marcel and Demetris,
I apologize that I could not answer earlier. As you may have seen, I share many thoughts with Demetris regarding the definition and detection of LTP and so on.
But we do not agree in all points.
First of all, from our trend significance calculations we can see, without any doubt, that there is an external temperature trend which cannot be explained by the natural fluctuations of the temperature anomalies. We cannot distinguish between urban warming and GHGs here, but there are places on the globe where we do not expect urban warming and where we still see evidence for an external trend, so we may conclude that it is GHGs.
Second, as you certainly know, there is a long discussion on atmospheric temperatures versus SST. It has been argued by Fraedrich, and also by us, that the inertia of the oceans is an important factor for the LTP, and thus we expected and confirmed that the Hurst exponent is larger for SST than for SAT. Based on this, Fraedrich even concluded that H should decrease continuously when departing from the coast, i.e. stations very far away from the coastline, like Ürümqi in China, should have H = 1/2. I found this hypothesis interesting, but in the end we could not support it from our own analysis. So there is a discussion on the point you make, but I think it is settled (and the models also show this) that SST has a higher persistence than SAT.
Many thanks for your comments. I agree with most of what you have said, including the presence and definitions of LTP, and your comments have been thoughtful and interesting. The topic of whether the recent changes are significantly different from earlier periods in the presence of LTP should be the most interesting part of this debate, but it inevitably gets overshadowed by the debate between STP and LTP, which is less interesting in my view, since every paper I have read on the topic that has looked at this question in any depth has concluded in favour of LTP.
I have a small observation to make on your latest comment though:
I do not think this is the best approach to understanding LTP. LTP is a phenomenon that spans many scales. Demetris' climacogram shows LTP over 9 scales. The oceans' inertia exists only at a subset of those scales. Therefore, to single out the oceans' inertia as an important factor for LTP makes little sense to me. At the 1-day scale, the atmosphere has inertia. At the decadal scale, the atmosphere may be considered fast and the oceans have inertia. At the 10-million-year scale, the oceans may be considered fast and the land mass now has inertia. At the billion-year scale, perhaps we may even consider the land masses as fast-responding.
In this context, it makes no more sense to describe the oceans as a factor than to describe the atmosphere, or the continental drift of the land masses, as a factor. Likewise, it makes little sense to describe volcanoes as a "cause" of LTP; all of these perspectives are single-scale perspectives, and to explain LTP we need a multi-scale perspective.
This is why, at this stage, I do not agree with the approach of attributing specific factors or causes of LTP in this way. The cause of LTP has to be the dynamic interaction between all of these things, because that interaction is the only thing that operates at all scales.
I also disagree with this reasoning for two reasons. The first I am clear about, the second is based on some thoughts I have which are less clear at this time, so you may choose to ignore my second observation 🙂
My first point: as Demetris has noted, as scale increases, LTP from the oceans must necessarily influence SATs over land at some point. I'd also like to echo my earlier comment: LTP is pervasive at increasing scale. And as I note above, if you explain this difference by the inertia of the oceans, then you have a bigger problem, as the inertia of the land mass (through continental drift) is even greater again; surely the persistence over land should then be even greater by this explanation. I think we must find other ways to explain LTP than single-scale perspectives.
My second point, which I accept is a little sketchy at this time: I note the location you describe, Ürümqi, is a mid-latitude location with a continental climate. As such it exhibits a very large seasonal temperature variation. Such a large temperature variation would of course upset the estimate of H and does not reflect the LTP variability, so we remove it through the anomaly method prior to statistical estimation of parameters.
But this leaves a problem: the seasonal variation does not just change the first moment, which we correct for in the anomaly calculation, it changes the second moment also, as a function of the first moment. This seasonal variation of the second moment also causes large errors in the estimation of H. I do not believe the estimates of H by Fraedrich are correct, and it does not surprise me that you find slightly inconsistent results. I suspect your results are also unreliable unless you have somehow managed this change in the second moment.
I expect SATs over land to be strongly influenced by LTP. I know our current estimates of LTP in these locations are highly unreliable, and this is to some degree confirmed to me from the disparity between your results and those of Fraedrich, but I am unsure at this point how to advance this issue. I suspect we either need to remove the seasonal effects on the second moment or we need a method of estimating H in the presence of both the first and second moments of the seasonal variations. But my thinking is still immature on this point.
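The correction hinted at above can be sketched as follows (an illustration added here under the assumption of a simple monthly climatology; the helper name is hypothetical, not a published method): subtract each calendar month's mean and divide by that month's standard deviation, so that the seasonal cycle is removed from both the first and the second moment before estimating H.

```python
import numpy as np

def seasonal_standardize(x, period=12):
    """Deseasonalize BOTH moments of a monthly series: subtract each
    calendar month's mean and divide by that month's standard deviation,
    so the seasonal variation of the variance does not contaminate a
    subsequent estimate of the Hurst exponent H."""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    for m in range(period):
        vals = x[m::period]  # all Januaries, all Februaries, ...
        z[m::period] = (vals - vals.mean()) / vals.std(ddof=1)
    return z

# Synthetic monthly series with both a seasonal mean and a seasonal variance
rng = np.random.default_rng(1)
months = np.arange(600) % 12
x = 10 * np.sin(2 * np.pi * months / 12) + (1 + months / 4) * rng.standard_normal(600)
z = seasonal_standardize(x)
print(round(z[0::12].mean(), 6), round(z[0::12].std(ddof=1), 6))
```

By construction every calendar month of the standardized series has zero mean and unit variance; the ordinary anomaly method would fix only the mean.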
Rasmus, you say:
As I explained here and there, your own grey curve in your Figure 2 is fully consistent with a Markov model with a characteristic time scale of a = 1.25 years. See the graph below if you do not believe that: I fitted the red curve which is an exponential decrease with a = 1.25 years and plotted it on your own graph.
A climate with a = 1.25 years is a static climate (at scale, say, 30 years or more) and any deviation from mean is too small and purely random, without correlation with previous periods.
From what you write, I guess you can accept that, during Earth’s history, there were periods in which your “signal” as you define it (you say “here it refers to manmade climate change”) or your “external forcing”, was not present. At those periods, the climate models should behave as in your grey line, which is identical to my red line, which in turn signifies a Markov process, which finally produces a static climate (sorry for repeating trivial things). The truth is, however, that climate on Earth has never been static.
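The contrast between a Markov climate and an LTP climate can be made concrete with a small numerical sketch (added here for illustration; a = 1.25 years and H = 0.94 are the values quoted in this dialogue): a Markov (AR(1)) process has an exponentially decaying autocorrelation, exp(−τ/a), which is essentially zero beyond a few years, whereas fractional Gaussian noise with high H retains substantial correlation even at a 30-year lag.

```python
import math

a = 1.25   # Markov characteristic time scale in years (the fitted red curve)
H = 0.94   # Hurst exponent quoted for the global temperature record

def rho_markov(lag):
    """Autocorrelation of an AR(1)/Markov process: exponential decay."""
    return math.exp(-lag / a)

def rho_fgn(lag):
    """Autocorrelation of fractional Gaussian noise at integer lag k:
    0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H), a power-law (slow) decay."""
    return 0.5 * (abs(lag + 1) ** (2 * H)
                  - 2 * abs(lag) ** (2 * H)
                  + abs(lag - 1) ** (2 * H))

for lag in (1, 5, 30):
    print(lag, round(rho_markov(lag), 6), round(rho_fgn(lag), 3))
```

At a 30-year lag the Markov correlation is below 10^-9, i.e. the 30-year climate is "static", while the fGn correlation is still around 0.5; this is the quantitative content of the argument above.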
Otherwise, I agree with you that models can simulate chaotic weather. The problem is whether or not they can simulate (a) a chaotic climate and (b) a climate consistent with LTP. From what I know, they fail at both.
I fully agree; actually, I think I had said already that in my phrase you quoted:
Furthermore, you say:
Yes, they may be inconsistent, but then again they may not be. Therefore, to claim that something is inconsistent with something else requires a proof. I may have said this several times in this blog, but I have not seen any proof of inconsistency. Sorry for having to repeat it once again. To save time, I will not comment on the rest of your comment by repeating things that I have already said.
Thanks for the interesting graph which shows that among numerous (hundreds?) climate model runs there were a few (six?) that did not suggest a warming climate in the last decade. This is a nice demonstration that Earth’s climate does not feel obliged to do what the majority of climate models dictate. It also demonstrates the vanity of deterministic modelling and, in my view, suggests the need to develop stochastic approaches to climate.
I think the following conclusion from Koutsoyiannis et al. (2011) is relevant:
Actually, I think Rasmus has been consistent and clear that the opposite is the case. See a quote from him here on this very topic:
I guess it’s possible there could have been a substantial period of Earth’s history with zero forcing, though I would suggest it’s an extremely unlikely occurrence. Let’s say there was a period with zero forcing. Given that we don’t know when that was or, presumably, have any sort of reliable climatic records for this hypothetical period I don’t know how you’ve arrived at the strong conclusion ‘The truth is, however, that climate on Earth has never been static.’ Do you have an example of a period which experienced zero forcing to support this apparent certainty?
Indeed, I have a few comments.
First, I noticed the following phrase from the first paragraph (typeset in bold) of the paper:
However, I have difficulties to read this in connection to what Virginie emailed to us:
Second, I noticed in this paper the phrase:
This suggests that the method followed was to reset the conditions every year in order to match reality. Of course, such a method is not feasible if we speak about future predictions, and I do not think it is useful even in hindcasts. This is because any model, even the most absurd one, if regularly reset to the current conditions, will exhibit a good performance in reproducing a process characterized by persistence.
Third, as a colleague who saw the paper noticed, the way the results are graphically presented raises questions. You may see, for example, that in Fig. 1c of the paper, which refers to non-initialized forecasts, the points corresponding to consecutive years are connected to each other by lines, while in Fig. 1a, referring to initialized forecasts, the points are not connected by lines. (Perhaps by leaving the points disconnected, you get a feeling of better agreement). Furthermore, Fig. 1c, which is for 3-5 years ahead of initialization, does not indicate any impressive agreement with reality.
For these reasons, I do not think the paper has explained the pause of warming.
Please read my comment again and, in particular, please notice that I have used quotation marks for the terms I quoted from Rasmus, which are “signal” and “external forcing”. Your reply is about forcing in general. Of course there is forcing all the time, but it can be internal, produced by the climate system per se, not by external factors, like Rasmus’s “signal”, which, as defined by himself, “here it refers to manmade climate change”.
I think the proposition
is somewhat artificial in the case of temperatures, as we know that there is a great deal of variance that is usually removed before the analysis: the seasonal variations and the diurnal cycle. Most of the variance is tied up in these well-known cycles, forced by regional changes in incoming sunlight. Furthermore, ENSO has a time scale of ~3-8 years and is associated with most of the variance once the seasonal and diurnal scales are neglected.
For precipitation, the picture may be different.
Yes and no. The response to natural forcing is still not 'internal'. We know there have been natural forcings before, and we know that they have caused some variations on Earth. Now we have the best instruments ever, and we can measure natural forcings and infer their effects. I still think that external forcings do influence the analysis of LTP, and I have not seen any demonstration to the contrary. I believe that we need to work through the numbers, and I would like to see numerical demonstrations of whether LTP exists without forcings, e.g. in sea-level pressure and other variables.
One of the major weaknesses, I think, with the arguments presented by Armin and Demetris is that they only look at one climate indicator, when we know in fact that climate involves many related aspects. It is important to draw on all available information, rather than neglecting related physics and observations and focusing only on the statistical aspects of just one index.
Although the HadCRUT4 record spans 163 years, it does not represent the same locations over the entire record. In fact, the early part is calculated from a smaller sample of thermometers, and one may even discern by eye the change in the sampling fluctuations associated with the changes in data coverage. Again, the temperature is affected by external forcings. I suggest using sea-level pressure. Another approach is to subtract the northern hemisphere from the southern, assuming that the forcings and the trends affect the whole planet and that the two hemispheres are affected somewhat similarly – as I've done and is shown in #470.
I have not read Koutsoyiannis and Montanari (2007) nor Markonis and Koutsoyiannis (2013) – please provide the details here. The Nile is a completely different case from the global mean temperature. The physics is entirely different. LTP may be true for the Nile, but not for other situations. For local temperature measurements, I would not be surprised to see some long-term-like persistence, but I would ascribe most of it to low-level chaos and natural forcings.
This is a general comment (i.e. not addressed to a particular discusser) — and a rather pessimistic one.
In one of my earlier comments to Rasmus I wrote:
Earlier, in another comment I wrote:
Now Rasmus makes a statement which for me was shocking:
Of course, I did not expect that Rasmus would have read the papers by my colleagues and me. However, I would expect that each of the discussers reads the comments in this blog, particularly those addressed to them — or at least uses the “Find” utility of the browser.
So using the “Find” utility I was able to see that full details of Koutsoyiannis and Montanari (2007) are given several times in this blog:
In Ref.  in the Introductory post by Rob, Marcel and Bart;
In Ref  in my own main post;
In my First comments on Armin Bunde’s post.
A link to the paper is also contained in my First comments on Rasmus Benestad's post.
Furthermore, for Markonis and Koutsoyiannis (2013) full details are given:
In Ref  in my main post;
In my First comments on Rasmus Benestad’s post;
In Spencer Stevens' comment to Bart.
Both details and a link are also contained in my reply to Rob.
All the above makes me wonder if climate dialogue is possible.
In a recent comment (https://mwenb.nl/long-term-persistence-and-trend-significance/#comment-490 ) you wrote in response to a question from Marcel about whether detection (of global warming) had taken place:
“I believe it has not taken place.”
This I found surprising in light of what you wrote earlier:
https://mwenb.nl/long-term-persistence-and-trend-significance/#comment-351 : “The global land air temperature, in the past 100y, increased by about 0.8 degrees. We find this increase even highly significant.” And in your opening post: “the probability of having 11 warmest years in 12, or 12 warmest years in 15, is 0.1%.” (based on a value of the Hurst coefficient, H = 0.94, higher than others have found) and “Whether this change is statistically significant or not depends on assumptions. If we assume a 90-year lag and 1% significance, it perhaps is”
I took your earlier responses to mean that, according to you, the recent warming is significant at the 99 or 99.9% level, depending on the exact metric used. You mentioned "highly significant". And now you state "detection has not taken place". Aren't these statements mutually exclusive?
As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” So how small do you think this chance is?
No, my comment was only about “external” forcing and I quoted Rasmus providing a basic list of external forcings: ‘changes in Earth’s orbit around the sun, geological, volcanic, changes in the sun, or in the concentrations of the greenhouse gases’.
So, my question again is: Can you give an example of a period in Earth’s history which has not been influenced by external forcing?
Sorry if you were misled, but it was not my fault. I usually use blockquotes in my comments, but if I remember correctly, the one you quote was from my initial comment. That was sent to the forum editors before the appearance of the dialogue, and the editors posted it for me without using blockquotes.
So what I said, as you may see it above, in my comment to Armin, was this:
So, what you quote as if it were said by me was in fact said by Armin, and is not supported by my calculations.
You can check it if you read my entire comment, rather than part of it which actually is a quotation to which I replied. The meaning is clear—even without blockquotes.
Bart, please also read my pessimistic comment just before yours.
Paul, I regard greenhouse gases as part of the climate system. For example, in my view, changes in the water vapour concentration classify as internal forcing. As I wrote in an earlier comment (Section 5, Linearity vs. nonlinearity):
So I may not be able to calculate the particular contribution of external and internal forcings. For me it suffices to say that the climate was never static, which implies variability — particularly due to internal dynamics. If you can do such separation, please do and let me know if you find that in the entire Earth’s history the ever changing climate has been driven by external forcing only.
Thanks Demetris, that clarifies part of the discrepancy, but the other two quotes remain, which I still cannot reconcile with your later statement. So my question still stands:
What is the chance that the observed changes are due to internal variability? (meaning a redistribution of energy within the climate system – there seems to be quite some confusion about what the different terms forced vs unforced/internal var. mean, which I will come back to in a later comment)
Bart, thanks for understanding. My answer stands too. If you read my main post you will see that I provide quantified answers for the "chance". See in particular my graphs and their explanations. I hope Marcel can verify that what I replied to his comment (actually verifying his own reading of my post and comments) is consistent with what I wrote in my post and my later comments. So, I am afraid I cannot see what looks surprising to you.
Some recent signs (lack of progress, repetitions) may indicate that this discussion approaches its end. I wish to thank the editorial team, Bart, Marcel and Rob, for inviting me, my co-guests Armin and Rasmus, and all contributors for the fascinating discussion during these three weeks.
My best wishes for the continuation and further development of the Climate Dialogue forum. Even with the difficulties encountered, dialogue is the only way forward. Besides, as Heraclitus said, “Tο αντíξουν συμφέρoν και εκ των διαφερόντων καλλíστην αρμoνíαν και πάντα κατ’ έριν γíνεσθαι” (Opposition unites, the finest harmony springs from difference, and all comes about by strife).
If I may offer a simple suggestion for the future dialogues, I would propose to merge the two sections “Expert comments” and “Public comments”. First, these section titles are not very accurate; it would be more accurate to say “Editors and guests” rather than “experts”. My feeling is that everybody who contributes in this dialogue is an expert—both the eponymous and the pseudonymous discussers. Second, the reading of the comments would be more convenient and sensible if the comments were in chronological order rather than separated into two sections.
I’d like to offer the following observation of the discussion so far (more comments remain welcome, but are by no means demanded).
There appear to be different interpretations of natural variability and of detection which may be a frequent cause of misunderstanding in this dialogue and beyond. Below I’ll try to describe these different interpretations in an effort to elucidate where the different opinions may (partly) be coming from.
In general, the following processes involved in climate change can be distinguished:
– natural unforced variability (e.g. internal variability involving a redistribution of energy)
– natural forced variability (e.g. changes in the output of the sun or in volcanism)
– anthropogenic forced variability (e.g. changes in greenhouse gas or aerosol concentrations)
where a forcing refers to a process causing an energy imbalance, which in turn causes a temperature change. Internal variability, on the other hand, causes a temperature change arising from semi-random internal processes. This temperature change can then cause an energy imbalance (since outgoing energy scales as T^4), but the cause-effect chain (linking temperature change and energy imbalance) is the opposite of that for a radiative forcing.
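The T^4 scaling mentioned above can be quantified with a back-of-the-envelope sketch (added here for illustration; the 255 K effective emission temperature is a standard textbook value, not a number from this dialogue): linearizing the Stefan-Boltzmann law gives the extra outgoing flux produced per degree of internally generated warming.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # Earth's effective emission temperature, K (textbook value)

# Outgoing flux F = sigma * T^4; linearizing, a small temperature change dT
# raises the emitted flux by roughly dF = 4 * sigma * T^3 * dT, so an
# internally caused warming is damped by extra radiation to space.
planck_response = 4 * SIGMA * T_EFF**3   # W m^-2 per K of warming
print(round(planck_response, 2))  # → 3.76
```

This restoring flux of roughly 3.8 W m^-2 K^-1 is why unforced internal warming implies increased heat loss, the opposite of the energy accumulation expected under a positive radiative forcing.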
As we wrote in the introductory text, according to AR4 “an identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.”
In other words, detection is based on distinguishing the forced (natural and anthropogenic) from the unforced (natural) component.
Demetris seems to argue that these different processes cannot be distinguished, or at least that internal (unforced) variability and natural forcings cannot be distinguished. Anthropogenic forcings can only be distinguished by virtue of their not having been acting on the system prior to ~1850. Armin seems to take a somewhat similar view, combining natural unforced and forced changes in what he terms natural fluctuations. Rasmus seems to take the view I outlined above (the distinction into three main types of processes).
Demetris argues that the current temperature signal is not outside of the bounds of what could be expected from natural forced and unforced changes, thereby using a higher bar than the standard definition of “detection”. He bases his statement on a higher Hurst coefficient than Armin does, which increases the bar further.
This may clarify how the statements that climate forcings introduce LTP and that climate forcings are omnipresent (which all three agreed on) can still lead to different conclusions regarding whether the presence of LTP says anything about internal variability: different operational definitions of detection and internal variability (and perhaps also of LTP, as has been put forward by Armin) are used, where in one view internal variability is only the unforced component of change, while in another view internal variability also includes natural forcings.
This brings up the question, if (according to Demetris) the recent warming is not outside of the bounds of natural forced and unforced variability, where does all the excess energy come from that is observed to accumulate in the climate system? It doesn’t seem to be due to natural forcings (which show no warming trend over the past 50 years), nor is there any sign of a redistribution of energy within the climate system (everywhere we look it’s warming). Where is the energy hiding, or where is it coming from (if not from excess greenhouse gases inhibiting planetary heat loss)?
This is a very unsatisfying discussion.
To me it lacks a clear definition of Long-Term Persistence. Two of the invited debaters use it, but neither seems able to explain in simple terms what it is and how it comes about. Why is that? The rest of the discussion seems to be a lot of talking past one another on different levels. Telling others to read the papers doesn't help if you can't explain succinctly what you are on about. I do notice the blog operators trying to get something more out of it, but I feel their efforts failed. Pity…