Three statisticians go hunting for rabbits. They see a rabbit. The first statistician fires and misses, her bullet striking the ground below the beast. The second statistician fires and misses, their bullet striking a branch above the lagomorph. The third statistician, a lazy frequentist, says, “On average, we got it!”
OK, that joke was not 1/5th as funny as any of XKCD’s excellent jabs at the frequentist-bayesian debate, but hopefully this will warm you up for a somewhat technical discussion on how to decide if observations about the weather are at all explainable with reference to climate change.
We are having this discussion here and now for two reasons. One is that Hurricane Harvey was (is) a very serious weather event in Texas and Louisiana that may have been made worse by the effects of anthropogenic global warming, and there may be another really nasty hurricane coming (Irma). The other is that Michael Mann, Elisabeth Lloyd and Naomi Oreskes have just published a paper that examines so-called frequentist vs so-called Bayesian statistical approaches to the question of attributing weather observations to climate change.
Mann, Michael E., Elisabeth A. Lloyd, and Naomi Oreskes. 2017. Assessing climate change impacts on extreme weather events: the case for an alternative (Bayesian) approach. Climatic Change 144:131–142.
First, I’ll give you the abstract of the paper then I’ll give you my version of how these approaches are different, and why I’m sure the authors are correct.
The conventional approach to detecting and attributing climate change impacts on extreme weather events is generally based on frequentist statistical inference wherein a null hypothesis of no influence is assumed, and the alternative hypothesis of an influence is accepted only when the null hypothesis can be rejected at a sufficiently high (e.g., 95% or p = 0.05) level of confidence. Using a simple conceptual model for the occurrence of extreme weather events, we show that if the objective is to minimize forecast error, an alternative approach wherein likelihoods of impact are continually updated as data become available is preferable. Using a simple proof-of-concept, we show that such an approach will, under rather general assumptions, yield more accurate forecasts. We also argue that such an approach will better serve society, in providing a more effective means to alert decision-makers to potential and unfolding harms and avoid opportunity costs. In short, a Bayesian approach is preferable, both empirically and ethically.
Frequentist statistics is what you learned in your statistics class, if you are not an actual statistician. I want to know if using Magic Plant Dust on my tomatoes produces more tomatoes. So, I divide my tomato patch in half, and put a certain amount of Magic Plant Dust on one half. I then keep records of how many tomatoes, and of what mass, the plants yield. I can calculate the number of tomatoes and the mass of the tomatoes for each plant, and use the average and variation I observe for each group to get two sets of numbers. My ‘null hypothesis’ is that adding the magic dust has no effect. Therefore, the resulting tomato yield from the treated plants should be statistically the same as from the untreated plants. I can pick any of a small number of statistical tools, all of which are doing about the same thing, to come up with a test statistic and a “p-value” that allows me to make some kind of standard statement like “the treated plants produced more tomatoes” and to claim that the result is statistically significant.
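To make the frequentist recipe concrete, here is a minimal sketch in Python. The yields, the effect size, and the choice of a permutation test are all mine, purely for illustration; any standard two-sample test would play the same role.

```python
import random
import statistics

random.seed(42)

# Hypothetical tomato yields (kg per plant); the numbers are made up
# purely to illustrate the frequentist recipe described above.
control = [random.gauss(2.0, 0.5) for _ in range(20)]
treated = [random.gauss(2.1, 0.5) for _ in range(20)]  # a tiny "effect"

def permutation_p_value(a, b, n_iter=10_000):
    """A common frequentist tool: a two-sided permutation test.

    Null hypothesis: the dust has no effect, so group labels are
    exchangeable. The p-value is the fraction of random relabelings
    whose mean difference is at least as extreme as the observed one.
    """
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            count += 1
    return count / n_iter

p = permutation_p_value(control, treated)
print(f"p = {p:.3f}")  # by convention, reject the null only if p < 0.05
```

With a small effect and only twenty plants per group, a run like this will usually fail to reach significance, which is exactly the situation the next paragraph describes.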
If the difference, though, is very small, I might not get a good statistical result. So, maybe I do the same thing for ten years in a row. Then, I have repeated the experiment ten times, so my statistics will be more powerful and I can be more certain of an inference. Over time, I get sufficient sample sizes. Eventually I conclude that Magic Plant Dust might have a small effect on the plants, but not every year, maybe because other factors are more important, like how much water they get or the effects of tomato moth caterpillars.
In an alternative Bayesian universe, prior to collecting any data on plant growth, I do something very non-statistical. I read the product label. The label says, “This product contains no active ingredients. Will not affect tomato plants. This product is only for use as a party favor and has no purpose.”
Now, I have what a Bayesian statistician would call a “prior.” I have information that could be used, if I am clever, to produce a statistical model of the likely outcome of the planned experiments. In this case, the likely outcome is that there won’t be a change.
Part of the Bayesian approach is to employ a statistical technique based on Bayes Theorem to incorporate a priori assumptions or belief and new observations to reach towards a conclusion.
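As a toy illustration of that updating step (my own construction, not anything from the paper): suppose the label tells me the dust is inert, and I encode that as a Beta distribution tightly centered on a 50/50 chance that a treated plant out-yields its paired control. Bayes' theorem then blends that prior with whatever data arrive.

```python
# Conjugate Beta-Binomial updating: if my belief about the chance that a
# treated plant out-yields its paired control is Beta(a, b), then after
# s "wins" and f "losses" the posterior is Beta(a + s, b + f).
def posterior_mean(wins, losses, a_prior, b_prior):
    a, b = a_prior + wins, b_prior + losses
    return a / (a + b)

# A prior built from the product label ("no active ingredients"):
# tightly centered on 0.5. The pseudo-counts of 50 + 50 are
# illustrative, not canonical.
skeptical = posterior_mean(14, 6, 50, 50)  # hypothetical data: 14 wins of 20
flat = posterior_mean(14, 6, 1, 1)         # same data, uninformative prior
print(f"skeptical prior -> {skeptical:.3f}, flat prior -> {flat:.3f}")
```

The same 14-out-of-20 result barely budges the skeptical prior but looks impressive under a flat one, which is the whole point: what you believed before the experiment is part of the calculation.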
In my view, the Bayesian approach is very useful in situations where we have well understood and hopefully multiple links between one or more systems and the system we are interested in. We may not know all the details that relate observed variation in one system and observed variation in another, but we know that there is a link, that it should be observable, and perhaps we know the directionality or magnitude of the effect.
The relationship between climate change and floods serves as an example. Anthropogenic climate change has resulted in warmer sea surface temperatures and warmer air. It would be very hard to make an argument from the physics of the atmosphere that this does not mean that more water vapor will be carried by the air. If there is more water vapor in the air, there is likely to be more rain. Taken as a Bayesian prior, the heating of the Earth’s surface means more of the conditions that would result in floods, even if the details of when, how much, and where are vague at this level.
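For the curious, the water vapor claim can be checked on the back of an envelope with the Magnus approximation to the Clausius–Clapeyron relation. The coefficients below are one commonly published set; this is an approximation, not an exact law.

```python
from math import exp

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation (hPa) for saturation vapor pressure over water.

    Coefficients are common Magnus-type values; the formula is an
    approximation, good to well under a percent at ordinary temperatures.
    """
    return 6.112 * exp(17.62 * t_celsius / (243.12 + t_celsius))

# Warm the air by one degree C and see how much more water it can hold.
for t in (10.0, 20.0, 30.0):
    gain = saturation_vapor_pressure(t + 1) / saturation_vapor_pressure(t) - 1
    print(f"{t:.0f} C -> about {gain:.1%} more water vapor per extra degree")
```

The answer comes out in the neighborhood of six to seven percent per degree Celsius, which is the figure climate scientists usually quote for this effect.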
A less certain but increasingly appreciated effect of climate change is the way trade winds and the jet stream move around the planet. Without going into details, climate change over the last decade or two has probably made it more likely that large storm systems stall. Storms that may have moved quickly through an area are now observed to slow down. If a storm will normally drop one inch of rain on the landscape over which it passes, but now slows down but rains at the same rate, perhaps 3 inches of rain will be dropped (over a shorter distance). What would have been a good watering of all the lawns is now a localized flood.
That is also potentially a Bayesian prior. Of special importance is that these two Bayesian priors imply change in the same direction. Since in this thought experiment we are thinking about floods, we can see that these two priors together suggest that post-climate-change weather would include more rain falling from the sky in specific areas.
There are other climate change related factors that suggest increased activity of storms. The atmosphere should have more energy, thus more energetic storms. In some places there should be more of the kind of wind patterns that spin up certain kinds of storms. It is possible that the relationship between the temperature of the air at different altitudes, up through the troposphere and into the lower stratosphere, has changed so that large storms are likely to get larger than they otherwise might.
There is very little about climate change that implies the reverse. Though there may be a few subsets of storm-related weather that would be reduced with global warming, most changes are expected to result in more storminess, more storms, more severe storms, or something.
So now we have the question, has climate change caused any kind of increase in storminess?
I’d like to stipulate that there was a kind of turning point in our climate around 1979, before which we had a couple of decades of storminess being at a certain level, and after which, we have a potentially different level. This is also a turning point in measured surface heat. In, say, 1970 plus or minus a decade, it was possible to argue that global warming was likely but, given the observations and data at the time, it was hard to point to much change (though we now know, looking back with better data for the previous centuries, that it was actually observable). But, in 2008, plus or minus a decade, it was possible to point to widespread if anecdotal evidence of changes in storm frequency, patterns, and effects, as well as other climate change effects, not the least of which was simply heat.
I recently watched the documentary, “An Inconvenient Sequel.” This is a fairly misunderstood film. It is not really part two of Al Gore’s original “An Inconvenient Truth.” The latter was really Al Gore’s argument about climate change, essentially presented by him. “An Inconvenient Sequel” was made by independent filmmakers with no direct input by Gore with respect to contents and production, though it is mostly about him, him talking, him making his point, etc. But I digress. Here is the salient fact associated with these two movies. An Inconvenient Truth came out in May 2006, so it is based mainly on information available in 2005 and before. In it, there are examples of major climate change effects, including Katrina, but it seems like the total range of effects is more or less explicated almost completely. When An Inconvenient Sequel came out a few weeks ago, a solid 10+ years had passed and the list of actual climate effects noted in the movie was a sampling, not anything close to a full explication, of the things that had happened over recent years. Dozens of major flooding, storming, drying, and deadly heat events had occurred, of which only a few of each were mentioned, because there was just so much stuff.
My point is that there is a reasonable hypothesis based on anecdotal observation (at least) that many aspects of weather in the current decade, or the last 20 years, or since 1979 as I prefer, are different in frequency and/or severity than before, because of climate change.
A frequentist approach does not care why I think a certain hypothesis is workable. I could say “I hypothesize that flies can spontaneously vanish with a half life of 29 minutes” and I could say “I hypothesize that if a fly lays eggs on a strawberry there will later be an average of 112 maggots.” The same statistical tests will be usable, the same philosophy of statistics will be applied.
A Bayesian approach doesn’t technically care what I think either, but what I think a priori is actually relevant to the analysis. I might for example know that the average fly lays 11 percent of her body mass in one laying of eggs, and that is enough egg mass to produce about 90-130 maggots (I am totally making this up) so that observational results that are really small (like five maggots) or really large (like 1 million maggots) are very unlikely a priori, and, results between 90 and 130 are a priori very likely.
So, technically, a Bayesian approach is different because it includes something that might be called common sense, but which really is an observationally derived statistical parameter that is taken very seriously by the method itself. But, philosophically, it is a little like the pitcher of beer test.
I’ve mentioned this before but I’ll refresh your memory. Consider an observation that makes total sense based on reasonable prior thinking, but where the standard frequentist approach fails to reject the null hypothesis of no change. The hypothesis is that there are more tornadoes from, say, 1970 to the present than there were between 1950 and 1970. This graph suggests this is true…
… but because the techniques of observation and measuring tornado frequency have changed over time, nobody believes the graph to be good data. But, it may not be bad data. In other words, the questions about the graph do not inform us of the hypothesis, but the graph is suggestive.
So, I take a half dozen meteorologists who are over 55 years old (so they’ve seen things, done things) out for a beer. The server is about to take our order, and I interrupt. I ask all the meteorologists to answer the question … using this graph and whatever else you know, are there more tornadoes in the later time interval or not? Write your answer down on this piece of paper, I say, and don’t share your results. But, when we tally them up, if and only if you all have the same exact answer (all “yes” or all “no”) then this pitcher of beer is on me.
Those are quasi-Bayesian conditions (given that these potential beer drinkers have priors in their heads already, and that the graph is suggestive if not conclusive), but more importantly, there is free beer at stake.
They will all say “yes” and there will be free beer.
OK, back to the paper.
Following the basic contrast between frequentist and Bayesian approaches, the authors produce competing models, one based on the former, the other on the latter. “In the conventional, frequentist approach to detection and attribution, we adopt a null hypothesis of an equal probability of active and inactive years … We reject it in favor of the alternative hypothesis of a bias toward more active years … only when we are able to achieve rejection of H0 at a high… level of confidence”
In the Bayesian version, a probability distribution that assumes a positive (one-directional) effect on the weather is incorporated, as noted above, using Bayes theorem.
Both methods eventually show a link between climate change and effect in this modeled scenario, but the frequentist approach is much more conservative and thus, until the process is loaded up with a lot of data, more likely to be wrong, while the Bayesian approach correctly identifies the relationship and does so more efficiently.
The authors argue that the Bayesian method is more likely to accurately detect the link between cause and effect, and this is almost certainly correct.
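Here is a rough, unofficial re-creation of that kind of comparison. This is my own toy simulation, not the authors' code or their actual model; the true bias, the prior, and the thresholds are all assumptions. Simulate a sequence of years that really are biased toward "active," and record when each approach first becomes confident.

```python
import random
from math import comb

random.seed(1)

TRUE_P = 0.75            # assumed bias toward "active" years (illustrative)
PRIOR_A, PRIOR_B = 2, 1  # mildly informed prior favoring a positive effect

def binom_p_value(k, n):
    # One-sided frequentist p-value: P(K >= k) under H0: p = 0.5
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

def posterior_prob_biased(k, n, steps=20_000):
    # P(p > 0.5 | data) for the Beta(PRIOR_A + k, PRIOR_B + n - k)
    # posterior, by midpoint-rule integration of the unnormalized density.
    a, b = PRIOR_A + k, PRIOR_B + n - k
    dens = lambda p: p ** (a - 1) * (1 - p) ** (b - 1)
    xs = [(i + 0.5) / steps for i in range(steps)]
    total = sum(dens(x) for x in xs)
    upper = sum(dens(x) for x in xs if x > 0.5)
    return upper / total

active = 0
freq_year = bayes_year = None
for year in range(1, 41):
    active += random.random() < TRUE_P
    if freq_year is None and binom_p_value(active, year) < 0.05:
        freq_year = year
    if bayes_year is None and posterior_prob_biased(active, year) > 0.95:
        bayes_year = year

print(f"Bayesian confident by year {bayes_year}, frequentist by year {freq_year}")
```

In runs like this the Bayesian updater typically reaches its confidence threshold with fewer years of data than the frequentist test needs, which is the qualitative behavior the paper demonstrates with its own, more careful, model.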
This is what this looks like: Frank Frequency, weather commenter on CNN says, “We can’t attribute Hurricane Harvey, or really, any hurricane, to climate change until we have much more data and that may take 100 years because the average number of Atlantic hurricanes to make landfall is only about two per year.”
Barbara Bayes, weather commenter on MSNBC, says, “What we know about the physics of the atmosphere tells us to expect increased rainfall, and increased energy in storms, because of global warming, so when we see a hurricane like Harvey it is really impossible to separate out this prior knowledge when we are explaining the storm’s heavy rainfall and rapid strengthening. The fact that everywhere we can measure possible climate change effects on storms, the storms seem to be acting as expected under climate change, makes this link very likely.”
I hasten to add that this paper is not about hurricanes, or severe weather per se, but rather, on what statistical philosophy is better for investigating claims linking climate change and weather. I asked the paper’s lead author, Michael Mann (author of The Madhouse Effect: How Climate Change Denial Is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and Dire Predictions, 2nd Edition: Understanding Climate Change), about Hurricane Harvey specifically. He told me, “As I’ve pointed out elsewhere, I’m not particularly fond of the standard detection & attribution approach for an event like Hurricane Harvey for a number of reasons. First of all, the question isn’t whether or not climate change made Harvey happen, but how it modified the impacts of Harvey. For one thing, climate change-related Sea Level Rise was an important factor here, increasing the storm surge by at least half a foot.” Mann recalls the approach taken by climate scientist Kevin Trenberth, who “talks about how warmer sea surface temperatures mean more moisture in the atmosphere (about 7% per degree C) and more rainfall. That’s basic physics and thermodynamics we can be quite certain of.”
The authors go a step farther, in that they argue that there is an ethical consideration at hand. In a sense, an observer or commenter can decide to become a frequentist, and even one with a penchant for very low p-values, with the purpose of writing off the effects of climate change. (They don’t say that but this is a clear implication, to me.) We see this all the time, and it is in fact a common theme in the nefarious politicization of the climate change crisis.
Or, an observer can choose to pay attention to the rather well developed priors, the science that provides several pathways linking climate change and severe weather or other effects, and then, using an appropriate statistical approach … the one you use when you know stuff … be more likely to make a reasonable and intelligent evaluation, and to get on to the business of finding out in more detail how, when, where, and how much each of these effects has taken hold or will take hold.
The authors state that one “… might therefore argue that scientists should err on the side of caution and take steps to ensure that we are not underestimating climate risk and/or underestimating the human component of observed changes. Yet, as several workers have shown …the opposite is the case in prevailing practice. Available evidence shows a tendency among climate scientists to underestimate key parameters of anthropogenic climate change, and thus, implicitly, to understate the risks related to that change”
While I was in contact with Dr. Mann, I asked him another question. His group at Penn State makes an annual prediction of the Atlantic Hurricane Season, and of the several different such annual stabs at this problem, the PSU group tends to do pretty well. So, I asked him how this season seemed to be going, which partly requires reference to the Pacific weather pattern ENSO (El Nino etc). He told me
We are ENSO neutral but have very warm conditions in the main development region of the Tropics (which is a major reason that Irma is currently intensifying so rapidly). Based on those attributes, we predicted before the start of the season (in May) that there would be between 11 and 20 storms with a best estimate of 15 named storms. We are currently near the half-way point of the Atlantic hurricane season, and with Irma have reached 9 named storms, with another potentially to form in the Gulf over the next several days. So I suspect when all is said and done, the total will be toward the upper end of our predicted range.
I should point out that Bayesian statistics are not new, just not as standard as one might expect, partly because, historically, this method has been hard to compute. So, frequency based methods have decades of a head start, and statistical methodology tends to evolve slowly.
People experience climate change; it is earth change, known through careful observation, experience, record-keeping, and comparison with historical data. What do statistics tell us about that?
Hn, take 2…
This is a post worth thoroughly mulling over before responding to the content, but before I go to my (Australian) bed I can’t resist noting how the XKCD cartoon reminds me of the old Twilight Zone episode where the night sky lights up and people are amazed, and initially oblivious to the fact of a supernovaed sun on the other side of the planet. The progress of the episode was chilling to my young mind, and I suspect that it has made me conscious of how humans are refractory to really understanding the trains that hurtle toward them.
That episode also helps to point out that one needn’t construct a lying neutrino detector for those on the night side of the planet to figure out if the sun’s exploded – a window would do the job better…
Of course the physics pedants here would point out that our sun’s too small to go supernova, and if it somehow magically did we’d probably know that it had by the fact of our deaths within a few seconds after the radiation front hit the planet – night-side or no…
Bah, it was The Outer Limits…
Speaking of, “The Hundred Days of the Dragon” is suddenly somewhat apposite…
You can find a p-value, but one of the reasons we have so many difficulties with studies that can’t be replicated is the misunderstanding of p-values. They do not represent any amount of evidence against a null hypothesis (nor do they provide information in favor of an alternative hypothesis). They are conditional probabilities: the probability of obtaining a measure (usually a test statistic of some kind) as extreme as or more extreme than the one from your sample, assuming the null hypothesis is exactly correct.
The intended use (from the days of Fisher) was that they could serve as a tool to indicate when more investigation was warranted. Neyman and Pearson began cementing the use of p-values in making binary “do not reject”/”reject” decisions during their development of hypothesis testing.
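A quick way to see what that conditional probability does and does not buy you (my own toy simulation, not from the paper or the comment above): when the null hypothesis is exactly true, p-values are roughly uniform, so about 5% of null experiments come out "significant" at the 0.05 level.

```python
import random
from math import comb

random.seed(7)

def one_sided_p(k, n):
    # P(K >= k) under H0: p = 0.5 -- the conditional probability
    # described in the comment above.
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Run many experiments in which the null is TRUE (a fair coin, n = 100)
# and count how often p < 0.05. By construction this should be close to
# 5% (slightly below, since the binomial distribution is discrete).
n, trials = 100, 5000
hits = 0
for _ in range(trials):
    k = sum(random.random() < 0.5 for _ in range(n))
    hits += one_sided_p(k, n) < 0.05
print(f"fraction of 'significant' results under a true null: {hits / trials:.3f}")
```

None of those "significant" results is evidence of anything; they are exactly the false alarms the 0.05 convention budgets for.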
Very true. Add to that the fact that statistics is so often taught in departments by people who have had only passing exposure to statistical methods (we used to have a faculty member in another department who advised students that it was completely acceptable to remove data values they viewed as outliers in order to obtain “significant results”, for two reasons: “Nobody likes negative results” and “It’s standard practice.” Some faculty make things better through their work: he made things better by retiring.)
I’ll leave by pointing out that “Bayesian statistics” could be replaced by “robust, resistant, and non-parametric statistics” in your comment and it would remain true.
Please go into detail.
Gilbert, it has to do with the more frequent formation of quasi-resonant waves in the jet streams caused by accelerated warming in the Arctic. The jet streams and associated trade wind systems get curvy and slow down, so it is easier for a storm leaving the tropics to run into a stalling feature rather than to get swept away. Sandy did the same thing, roughly, as Harvey.
THX, Greg. So it is down to the Arctic warming faster than the rest of us, and this reduces the *clash* of air masses, thus wind. So we should see less from blizzards, tornadoes, and maybe even landfalling hurricanes.
I’d call that a win — A milder, gentler global climate where food can be grown at higher latitudes and heating bills are reduced. It becomes cooler in the low latitudes due to cloud cover and warmer up north and down south. There is the caveat that sea level may rise. Venice adapted.
https://arstechnica.com/science/2016/01/we-narrowly-missed-a-new-ice-age-and-now-we-wont-see-one-for-a-long-time/
Carbon may have been good. I’d call that a win for most; Ice ages aren’t all Ray Romano with a spritz of nut-chasing paleo squirrel on top.
Gilbert, no, there is no reduction of a clash. There is no clash.
This does not affect tropical storm generation.
This does increase flooding rain events, so far by about 300% in the upper Midwest, and to similar levels elsewhere.
This caused the California drought.
It is part of the reason the East Coast is experiencing multiple major blizzards per year instead of a major one every few years.
No, sorry, no good news here at all.
Regarding the ice age, we crossed out of the possible ice age zone before we hit 350 ppm. As we near an inevitable doubling of CO2 we are approaching catastrophic climate change.
Hot/cold visualized:
https://xkcd.com/1379/
Gilbert
If this makes it I will produce an enlargement of Greg’s assessment which scotches your idea of nothing but good coming from a GHG energised temperature climb and hydrological cycle. Cast your eyes further afield and the events in Asia right now make the effects of Harvey look almost like a sideshow.
But then I sense somebody dropping by to argue from ideology.
Really interesting and well-written post, Greg. 🙂
Bayes is very (perhaps universally?) useful, very powerful, extremely logical, poorly understood and generally reviled by people who do not like its conclusions. Seen this categorical rejection in Historical Jesus discussions and now we will likely see it with climate deniers as their statistical sophistry becomes even less persuasive.
It is my layman’s understanding that a Bayes equation will distill down to a number – the likelihood of the experimental question being true. Any idea what that number works out to be with Dr Mann’s exercise?
You can click through to the paper and see. It is not a number, because the authors applied both statistical approaches to a range of data, so they end up with a gazillion numbers and a nice graph.
Lionel A, I was attempting to ‘cast your eyes further afield’ where I found Asia’s monsoon to be the strongest in fifteen years. I also came across this:
http://news.nationalgeographic.com/2017/08/south-asia-heat-waves-temperature-rise-global-warming-climate-change/
Of course they have obviously misspoken. Having used a sling psychrometer in the past, I recognize the terminology of wet bulb depression and its relation to absolute humidity and dew point when considered alongside the dry bulb temperature. The statement is gibberish. 167 F?? Welcome to Alabama, where we safely cook steak, pork, and poultry by hanging it upside down in the shade.
But, considering the 94 degrees Fahrenheit (34.4 degrees Celsius) and 80% humidity — I can’t imagine the kind of cap at the tropopause strong enough to prevent convective storms under those conditions. Maybe if the models were right and the heat were greatest in the mid levels of the atmosphere this would occur; but I can’t help but notice, just like the missing heat in the ocean, that those layers aren’t really warming that much, if at all.
According to my analysis of predictions given to Kees de Haar, psychic medium, in the period 1984 – 2005 a 100 percent relation exists between the rising number and intensity of heavy hurricanes and storms in this period of time and climate change. I shall not be amazed if the direct relation between the rise of intensifying hurricanes and storms and climate change is established by natural scientists shortly. More predictions of De Haar have already been confirmed by natural science, including the decomposition processes of glaciers worldwide, the Arctic, and the Antarctic.
“According to my analysis of predictions given to Kees de Haar, psychic medium, in the period 1984 – 2005 a 100 percent relation ”
You are a monumental idiot.
http://thevane.gawker.com/this-is-why-the-heat-index-is-so-important-1609195413
Hmm. It’s treason then.
Just downloaded the paper and skimmed it, so no serious comments yet, but a couple comments (if it’s okay?)
It’s a little disconcerting to see the discussion of p-values in general, and a seeming (again, I’ve only skimmed it) statement of use of values as high as 10%. P-values alone are not good indicators of much of anything of interest.
The choice of a one-sided (greater incidence of serious weather) alternative is always interesting. Here it makes the assumption that changes will lead to an increase — that may be a good assumption based on the physics, but if so that should be made (more) clear.
Finally, a statistical point: the hypothesis that states there is no change at all will never be true — for this and many other reasons, statistical hypotheses should always be interpreted as descriptions rather than strict fact — and this is one of the primary reasons we shouldn’t simply say “reject” or “fail to reject”. There should be some discussion of the estimated size of the effect regardless of the result. This seems to be somewhat addressed in the discussion of “convergence” — I hope there is some more detailed discussion than I’ve seen in my fast read.
And, true for both frequentist and Bayesian testing, the validity of all this work depends on the correctness of assumptions. Since this is a simulation, we need to believe that they’ve taken all of the relevant background into account when the time series were generated.
I didn’t see it: do you know whether they posted their code anywhere?
http://jim-stone.staff.shef.ac.uk/BookBayes2012/BayesRuleCode.html
Will look at that when I get home. Does that have the code for the simulation paper?
Probably not, but it probably isn’t hard to get.
Dean, look in your own mirror.
Gerrit, provide scientific evidence in favor of any psychic work — and not a bit from any of the fake journals that push it.
There is a simple reason you determined there is a perfect relationship presented in the ramblings of your favorite quack: you want one to be there.
Want to be taken seriously? Leave the psychic bullcrap out of posts. There are no gullible people here for you to scam.
In October or November 2017, ‘De apocalyps van de aarde in vijf bedrijven, de in gang gezette grote ommekeer van de aarde, voorspellingen van de geleidegeesten van God aan Kees de Haar, medium’ will be published.
It is a three-volume study about ‘the apocalypse of the earth in five stages, the great change of earth on its way, predictions of guiding spirits from God given to Kees de Haar, medium’.
This study of 85 documented séances of Kees de Haar, medium (Netherlands), in the period 1984 – 2005 consists of three books. Book II, Bronnen (Sources), contains all séances with lemmata. Book I, Hoofdwerk (the main work), carefully documents and analyses the five stages, including a description of the hits of the predictions, the failures of the predictions, and the predictions of which the outcome is still unknown. The five phases are: 1. economic and financial crises; 2. climate change = earth change; 3. new diseases; 4. wars and terror and attacks of terrorists; 5. refugees; plus specific parts about God, his predictions, and the causes of God’s warnings. Book I also contains the letters sent to official organizations worldwide to warn them about the decomposition processes of glaciers, the Arctic, and the Antarctic, and a detailed description of these processes, which have later been confirmed by NASA and natural scientists. Book III, Bewijs en Tegenbewijs (Evidence and Rebuttal), positions visible reality in relation to the paranormal and underlines the value of psychic research. All sources have been mentioned. Lists of literature and of persons mentioned are part of the books, as are numerous notes. The study fulfils the conditions of a scientific thesis. Scientific recommendations are part of the book. A publisher known for his scientific publications characterizes this study as impressive. The books invite scientists to do further scientific examination. The study is transparent, open, and based upon the need to give exact details for reasons of verification and/or falsification. Knowledge of Dutch is required, because that is the language of the séances, received in the Netherlands, although Book I and Book III also contain parts written in English. Fortunately many people who master Dutch as well as English live all over the globe. The time of silence is over; it is time to speak.
The study is part of a long tradition of apocalyptic literature and foreshadows the not unlikely outlooks of a new era.
So you have nothing that is valid about your psychic. No surprise there.
# 25
Dean,
See Garrit at #1:
It is unlikely that Garrit understands exactly what it is that you are asking for.
I once met someone who thought that the late 60’s TV series Dark Shadows was real and that Barnabas was communicating with him…
“It is unlikely that Garrit understands exactly what it is that you are asking for.”
True. Given how preoccupied he is with his favorite scam artist psychic it’s likely he doesn’t have a grasp of any significant portion of reality.
Had to share this …
http://www.sacbee.com/opinion/editorial-cartoons/jack-ohman/article171837502.html
@ dhogaza, excellent.
The political cartoons are humorous mirrors. Time for a snapshot of all participants on this blog in one cartoon, including several cowards without a name, without the least grain of decency, but with backpacks filled with prejudice and enough swearing to participate in the world’s final swearing contest. They are likely to get a painful up-yours end.
I wait and see from a safe distance.
Direct connections exist between climate change, the heating up of the earth, oceans and atmosphere, the intensifying of hurricanes in strength, the melting of the poles, changes of the crust of the earth, and earthquakes. It is no coincidence that all these phenomena coincide. See the poles, see the heavy hurricanes Harvey and Irma (category 5 plus), and see today’s earthquake in the Pacific Ocean (Mexico), 8.1 on the Richter scale. In my study this trend has been predicted to Kees de Haar, medium, inter alia on 27 April 1986 (séance IV); 13 April 1996 (séance 6); and 2 August 2003 (séance 34). In those days my educated friends, peers and scholars found this trend interesting though questionable, and some thought the predictions unacceptable, but concerning these and other predictions they more and more found out, as some admitted to me, that the trend of these predictions is reaching the level of presumably right and becoming reality. My relations in the Netherlands and elsewhere wait and see how this develops. More about this has been published in my study and comments ‘De apocalyps van de aarde in vijf bedrijven’ (meaning The apocalypse of the earth in five stages), 2017.
For people in the Occident the spiritual source of these predictions is provocative, because it is contrary to their belief system that God and spirits are not real and are only man-made constructs. However there are other realities which cannot be denied.