Guest Essay by Kip Hansen
Welcome to Model-Land, ladies and gentlemen, boys and girls! “Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind in which optimizing a simulation invariably reflects desirable pathways in the real world.” Here in Model-Land you’ll see fabulous “computational simulations and associated graphical visualizations [that] have become much more sophisticated in recent decades due to the availability of ever-greater computational resources!” Where “…[t]he qualitative visual appeal of these simulations has led to an explosion of simulation-based, often probabilistic forecasting in support of decision-making in everything from weather forecasting and American Football, to nuclear stewardship and climate adaptation.”
If you come and play, you’ll want to stay! ™
[the foregoing is a Paid Advertisement from the fictional makers of Model-Land ®]
* * * * *
Model-land (in the leading image) looks to have all the requirements for an ecological study: hills, valleys, grass, trees, bushes, a little river, sky and clouds. Yet any attempt to transfer the implications of changes in model-land to the real world is doomed to fail. Why?
Because “Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind,” say Erica L. Thompson and Leonard A. Smith in a new paper titled “Escape from model-land”, which appears in the e-journal Economics.
“Both mathematical modelling and simulation methods in general have contributed greatly to understanding, insight and forecasting in many fields including macroeconomics. Nevertheless, we must remain careful to distinguish model-land and model-land quantities from the real world. Decisions taken in the real world are more robust when informed by our best estimate of real-world quantities, than when “optimal” model-land quantities obtained from imperfect simulations are employed.”
“Computational simulations and associated graphical visualisations have become much more sophisticated in recent decades due to the availability of ever-greater computational resources. The qualitative visual appeal of these simulations has led to an explosion of simulation-based, often probabilistic forecasting in support of decision-making in everything from weather forecasting and American Football, to nuclear stewardship and climate adaptation. We argue that the utility and decision-relevance of these model simulations must be judged based on consistency with the past, and out-of-sample predictive performance and expert judgement, never based solely on the plausibility of their underlying principles or on the visual “realism” of outputs.”
“Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind in which optimising a simulation invariably reflects desirable pathways in the real world. Decision-support in model-land implies taking the output of model simulations at face value (perhaps using some form of statistical post-processing to account for blatant inconsistencies), and then interpreting frequencies in model-land to represent probabilities in the real-world. Elegant though these systems may be, something is lost in the move back to reality; very low probability events and model-inconceivable “Big Surprises” are much too frequent in applied meteorology, geology, and economics. We have found remarkably similar challenges to good model-based decision support in energy demand, fluid dynamics, hurricane formation, life boat operations, nuclear stewardship, weather forecasting, climate calculators, and sustainable governance of reindeer hunting.”
This paper is a Must Read for anyone whose interests intersect with the output of computational models or computer simulations of any type and for any purpose.
WARNING: Model-haters should not get their hopes up — this essay is not a justification for the “all models are wrong” viewpoint. What the highlighted paper attempts to do (and succeeds in doing) is illuminating the dangers of misunderstanding what models are capable of doing under what circumstances and for what purposes, and suggesting approaches to an escape from model-land into the real world.
Right out of the box it warns that tuned models are intrinsically bad at projecting effects in the long tails of the probability distribution: events which have a very low probability, or “Big Surprises” which are, inside the model world, inconceivable.
There are many types of models covering different classes of physical and social processes, but the authors, interestingly, classify them into two general types:
“In “weather-like” tasks, where there are many opportunities to test the outcome of our model against a real observed outcome, we can see when/how our models become silly.”
“In “climate-like” tasks, where the forecasts are made truly out-of-sample, there is no such opportunity and we rely on judgements about the quality of the model given the degree to which it performs well under different conditions and expert judgement on the shortcomings of the model.”
If I have a model running that makes projections of movements of the Dow Jones Industrial Average, which changes by the minute, I can easily validate the accuracy of my model output by comparing it to the real-world market index. This DJIA model would be a “weather-like” model — easily checked for reliability — I could test-run it for a few days or weeks before actually putting my money at risk following its projections. Even then, I must be aware that exceptional circumstances could, in the real world, cause changes in the DJIA that my model could not even conceive of; thus I would be wise not to bet the bank on any one trade.
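To make that “weather-like” verification loop concrete, here is a minimal sketch of what such a test run might look like, written in Python. Everything in it is hypothetical: the “observations” are random numbers standing in for index values, and toy_forecast is a placeholder model, not anything from the paper. The only point is that the forecast is scored against data the model has never seen.

```python
# A minimal sketch of "weather-like" verification: score a model's
# forecasts against observations it has never seen before trusting it.
# The data and the forecast function are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed index values (a random walk standing in for
# minute-by-minute DJIA closes).
observed = 26000 + np.cumsum(rng.normal(0.0, 20.0, size=500))

def toy_forecast(history, horizon):
    """Placeholder model: persistence plus the recent average drift."""
    drift = np.mean(np.diff(history[-50:]))
    return history[-1] + drift * np.arange(1, horizon + 1)

# Out-of-sample test: forecast the last 100 points from the first 400.
train, test = observed[:400], observed[400:]
forecast = toy_forecast(train, horizon=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"Out-of-sample RMSE over {len(test)} steps: {rmse:.1f} points")
# A small RMSE here is necessary, not sufficient: this hold-out window
# contains no "Big Surprise", so it says nothing about the tails.
```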
On the other hand, if I have a model of the real estate market for multi-bedroom apartments at the high-income end of the market, in which returns are to be measured on a multi-decadal scale, this might be considered a “climate-like” model — in which the past might not be a good predictor of the future, with no ready opportunities to test the model against the real world. Thus, the authors posit, I would need to take into consideration expert judgement rather than depend on the quantitative output of the model alone.
An example: A model of the real estate market in a nearby town, initiated 30 years ago, might have shown that there would be a strong and continuing market for up-scale high-end apartments for young professionals and their families — this market niche had been growing steadily over the previous twenty years. However, real estate markets can be complex. Had a company relied on the model output and built a series of expensive multi-story multi-bedroom apartments 30 years ago, the project would have been a financial disaster. Why? The model would not have been able to foresee or even conceive of the sudden departure (25 years ago) of the primary employer of professionals — which abruptly closed its offices, research center, and manufacturing plant, leading to a mass emigration of highly paid professionals and their families out of the area.
It is comfortable for researchers to remain in model-land as far as possible, since within model-land everything is well-defined, our statistical methods are all valid, and we can prove and utilise theorems. Exploring the furthest reaches of model-land in fact is a very productive career strategy, since it is limited only by the available computational resource. While pure mathematicians can, of course, thrive in model-land, applied mathematicians have a harder row to hoe, inasmuch as, for large classes of problems, the pure mathematicians have proven that no solution to the problem will hold in the real world.

Thompson and Smith go on to explore the implications of imperfect models (every model is imperfect outside of pure mathematics). Of course, in non-linear numerical models, any change in initial conditions, even a minute one, can lead to vastly different projections, which has been called The Butterfly Effect. The specific effect is shown clearly by UCAR’s Large Ensemble Community Project, which I have previously covered in my essay Lorenz Validated at Judith Curry’s excellent blog, Climate Etc. If you are not fully aware of what the Butterfly Effect means for Climate Models, you should read the Lorenz Validated essay now, then continue with this piece. [An interesting video example – opens in a new tab or window.]
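For readers who would like to see the Butterfly Effect in miniature, the sketch below is my own toy illustration using the classic Lorenz-1963 equations (not a climate model, and not taken from the paper). Two runs are started from initial conditions that differ by one part in a billion, and by the end of the integration they bear no resemblance to one another.

```python
# A toy illustration of the Butterfly Effect using the Lorenz-1963
# system (not a climate model): two runs whose initial conditions differ
# by one part in a billion end up on completely different trajectories.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4000)
run_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval)
run_b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-9, 1.0, 1.0], t_eval=t_eval)

separation = np.abs(run_a.y[0] - run_b.y[0])
print("separation at t = 5 :", separation[t_eval.searchsorted(5)])
print("separation at t = 40:", separation[-1])
# The billionth-part difference grows exponentially until the two runs
# are as far apart as two randomly chosen states of the system.
```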
They describe another problem, developed over a period of years by a group at the London School of Economics (LSE) that includes the present authors, as The Hawkmoth Effect, a poster of which has been shown around various conferences, including at the AGU (Dec 2013) and LSE (2014). In essence, the Hawkmoth Effect says that “in a chaotic system if our model is only slightly mathematically mis-specified then a very large difference in outcome will evolve over time even with a “perfect” initial condition[s].” Paraphrased: nonlinear numerical models of complex systems are at high risk of exhibiting Structural Instability, in which small changes to the structure of the model can produce large changes in the outcomes of the models:

[Figure: projections from multiple, differently initialized climate models diverging over time, with an ensemble mean overlaid.]
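For contrast with the Butterfly sketch above, here is a toy illustration in the spirit of the Hawkmoth Effect. This is my own sketch, not the LSE group’s demonstration: the two runs now start from exactly the same initial condition, but the second model carries a tiny structural error in its equations, and the trajectories diverge all the same.

```python
# A toy illustration of structural instability in the spirit of the
# Hawkmoth Effect (my sketch, not the LSE group's demonstration): the
# two runs start from exactly the same state, but the second model's
# equations carry a tiny extra term, and the trajectories still diverge.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, eps=0.0):
    x, y, z = state
    # eps is a structural mis-specification, not an initial-condition error.
    return [sigma * (y - x),
            x * (rho - z) - y + eps * x * z,
            x * y - beta * z]

t_eval = np.linspace(0, 40, 4000)
start = [1.0, 1.0, 1.0]  # identical "perfect" initial condition for both
true_model = solve_ivp(lorenz, (0, 40), start, t_eval=t_eval)
misspecified = solve_ivp(lorenz, (0, 40), start, t_eval=t_eval,
                         args=(10.0, 28.0, 8.0 / 3.0, 1e-6))

gap = np.abs(true_model.y[0] - misspecified.y[0])
print("gap at t = 5 :", gap[t_eval.searchsorted(5)])
print("gap at t = 40:", gap[-1])
# A one-in-a-million error in the model structure behaves much like a
# perturbed initial condition: a perfect start state cannot cure it.
```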
The Hawkmoth hypothesis is the subject of a scientific controversy (both mathematical and philosophical), with a series of papers supporting the idea and another series attempting to refute it. Various efforts have been made to denigrate the Hawkmoth Effect as it applies to climate models (and here), and in the deep maths world there is pushback on whether the effect is truly ubiquitous.
In the Climate Model field, the approach to handling the Butterfly Effect has been to use “ensemble means”:
“If we have (somehow) perfectly specified our initial condition uncertainty, but have a structurally imperfect model, then the probability distribution that we arrive at by using multiple initial conditions will grow more and more misleading – misleadingly precise, misleadingly diverse, or just plain wrong in general. The natural response to this is then, by analogy with the solution to the Butterfly problem, to make an ensemble of multiple model structures, perhaps derived by systematic perturbations of the original model. Unfortunately, the strategy is no longer adequate. In initial condition space (a vector space), there are a finite number of variables and a finite space of perturbations in which there are ensemble members consistent with both the observations and the model’s dynamics. Models lie in a function space where, by contrast, there are uncountably many possible structures. It is not clear why multi-model ensembles are taken to represent a probability distribution at all; the distributions from each imperfect model in the ensemble will differ from the desired perfect model probability distribution (if such a thing exists); it is not clear how combining them might lead to a relevant, much less precise, distribution for the real-world target of interest.”
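The point about ensemble means can also be seen in miniature. The sketch below is again my own toy example, using the chaotic logistic map rather than any climate model: it averages an initial-condition ensemble of one hundred runs, and once the members decorrelate the ensemble mean flattens toward the middle of the attractor, a smooth curve that no individual run, and no real-world trajectory, ever follows.

```python
# A toy sketch of how an ensemble mean can become a model-land quantity:
# average many diverging runs of a chaotic system and the mean settles
# toward the middle of the attractor, a smooth curve that no single run
# (and no real-world trajectory) actually follows.
import numpy as np

rng = np.random.default_rng(1)

def logistic_run(x0, r=3.9, steps=200):
    """One run of the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# An initial-condition ensemble: 100 runs starting within +/-1e-6 of 0.2.
ensemble = np.array([logistic_run(0.2 + rng.uniform(-1e-6, 1e-6))
                     for _ in range(100)])
mean = ensemble.mean(axis=0)

# Compare variability after the members have decorrelated (step 50 on).
print("range of members at final step:",
      ensemble[:, -1].min(), "to", ensemble[:, -1].max())
print("std of one member, steps 50-200   :", ensemble[0, 50:].std())
print("std of ensemble mean, steps 50-200:", mean[50:].std())
# The mean is far smoother than any member: it describes the ensemble,
# not any trajectory the real system could actually follow.
```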

It is understandable why there is concern in the Climate Modelling world regarding this continuing Hawkmoth Effect effort at LSE — an effort which started as a PhD thesis (Erica L. Thompson) in 2013 and is still going strong in this latest paper in 2019. There is no question that the Butterfly Effect is real and operates in climate models (repeating the link to Lorenz Validated). If the Hawkmoth Effect is real and is shown to be operationally effective in the current collection of climate models, which the image of multiple model outputs above seems to imply, then confidence in long-term climate projections will be seriously shaken. Different models, initialized differently, produce projections that grow in divergence with time — and none of the models, or scenarios, mirrors real-world observations. [The graphic includes a Meaningless Mean added by minds happily living in Model-Land.]
Thompson and Smith offer suggestions on how to execute an escape from Model-land and thereby avoid some of its pitfalls. I plan a follow-up essay which will cover Thompson and Smith’s exit-from-model-land strategy and some recent real world examples of what happens when an attempt is made to apply the output of climate models in real world planning.
Escape from model-land [pdf] is an easy read at 8 pages, open access, and well worth your time if you are interested in models, modelling and the results of models.
# # # # #
Author’s Comment policy:
Judith Curry listed the original link to the Model-land paper in one of her Week in review – science editions sometime recently. Judith’s efforts help to expand the breadth of my exposure to interesting ideas and the latest science – and this is not restricted to the climate field. Thank you, Judith.
There is a rising movement in other scientific fields to rein in the seeming over-confidence in models. Reasonable minds are beginning to shake their heads in perplexity as to how we got here — where, in some fields, model projections are demanded by organizations giving research or project grants, despite the known problems and the inapplicability of model outputs to conditions on the ground: more on that in the next part of the Model-land series.
I do read every comment left here by readers. I try to answer your questions, supply further links and discuss points of view reasonably close to being “on topic”.
# # # # #