December 25, 2019

Perfect Model: The Past, Present, and Future of Prediction

Welcome everybody. It’s nice to see you
all here this evening. My name is Rowan Flad. I’m a professor of
anthropological archaeology here at Harvard and I’m standing
in for Jeff Quilter, who is unable to join us tonight. The director of the Peabody
Museum is on a flight back from England, and
he sends his regrets, but he’s happy to see
you all here with us. I’m happy to see you all
here with us for this latest installment in the
divination lecture series that the Peabody
Museum has been running over the course of
this academic year. Tonight, we’re going to have
a scientist and author David Orrell talk to us
about the history and challenges of prediction
from the Oracle at Delphi up to the latest methods being
developed in areas like systems biology and economics. I want to invite you all
to join us after the talk up on the third
floor of the Peabody Museum for a reception, where
we’ll be able to converse with our speaker a
little bit more casually, but we also have
immediately to my right a table where several books
that David has authored are available for
purchase and for signing. We also have on
that table a list of events that are going
on at the Peabody Museum for the rest of
this academic year. In case you don’t want
to pick up a paper copy, we also will be able to
provide information online if you provide an
email address to us. So please do stop by
if you want to get more announcements about our series. David Orrell is an acclaimed
writer and mathematician visiting us from his
current hometown of Toronto. He has gained wide
recognition and attention for his writing research
on the topic of prediction and forecasting, making
him a perfect addition to the museum’s yearlong
series on divination that began almost
10 months ago now with a talk on Chinese
divination systems from the Bronze Age. He studied mathematics at
the University of Alberta before moving to
Oxford, where he received his doctorate in
prediction and nonlinear systems. And then he worked,
subsequently, in applied mathematics,
which has taken him to diverse areas including
particle accelerator design, weather forecasting,
economics, and cancer biology. His books have been translated
into over 10 languages, and have appeared on the local
and national best seller lists. His recent book, “Economyths: Ten
Ways Economics Gets It Wrong,” and I’m sure there
are more than ten, was a finalist for the 2011
National Business Book Award. “Apollo’s Arrow: The
Science of Prediction and the Future of Everything,”
was a national best seller and a finalist for the 2007
Canadian Science Writers’ Award. That particular
book is available, along with David’s book “Truth
or Beauty” over on the table as I mentioned. David also writes for numerous
publications including “World Finance,” “Ad
Busters,” “The Literary Review of Canada,” and his work
has been featured in places such as “New Scientist,”
“The Financial Times,” and CBC TV. He continues to conduct research
in the areas of systems biology and economics, and also runs
a mathematics consulting firm “Systems Forecasting.” Today, he’s here to talk
to us about the topic “Perfect Model: The Past,
Present, and Future of Prediction.” Please join me in
welcoming David Orrell. [APPLAUSE] Thanks very much for
the introduction, and thanks to the organizers for
inviting me here to take part in this series on divination. I think our interest
in divination and looking into the future
is tied up, in many ways, with our desire
for a world which is comprehensible
and stable, in a way. And just allow me to say
how shocked and saddened I was by these events at
the marathon recently. I think it’s things
like that which make the world seem
exactly the opposite of stable and
comprehensible. So we have this desire
to look into the future to make sense of
the world, and I think it was well captured
by the hockey player Wayne Gretzky. He says, “A good player
plays where the puck is. A great player plays where
the puck is going to be.” And this sort of
captures the idea that we want to
look into the future and see what’s coming
around the corner so that we can position
ourselves better for it. There is actually a caveat which
is that this technique only works if not everybody
does at the same time because otherwise the
referee would drop the puck and everyone would skate away
from it to where they thought it was going to be at
some point in the future and that turns out be important
so I’ll come back to that. But it’s true. We want to look at the future. So if you’re a student
you might be thinking, OK. What’s the job market going
to be like when you graduate? Or if you’re interested
in stock markets, you might want to
know what’s going to be the next big
business success, or maybe you’re thinking
longer term about what’s going to happen
with climate change, and so on. So we want to look into the
future, but on the other hand, there’s also this sort
of countervailing fact, an empirical observation
attributed to Yogi Berra among other people: prediction
is very difficult, especially about the future. So I’ll give you three
examples of that. The first one has to do with
weather or climate change. So one of the sort of key
questions about climate change is if the carbon dioxide
levels in the atmosphere are due to double, what would
the effect be on the climate. And the first conference to look
into that was held back in ’79, and at that time, there
were only really two models. And so one day, the
first model was presented and it gave a warming
of two degrees. The next day, another
model was presented and it gave a much higher
warming of four degrees. And then, at the end
of the conference, they just sort of
split the difference and added the fudge
factor and came up with this range of
1.5 to 4.5 degrees centigrade, which,
you know, that’s fine. What’s strange though,
is that over the intervening decades
that range, which is really quite large, 3 degrees
plus or minus 50%, has not really got smaller. It sort of changes a little
bit, but basically it hasn’t shrunk down. And actually some
studies suggest that the true uncertainty is
probably even bigger than that, so this is curious. This is not what we’re
used to in science. I mean, normally in science, the
harder you look at a problem, the finer
and finer accuracy you get, but that doesn’t seem
to be happening here. Perhaps the most graphic
example of our difficulty in predicting the future was
the recent financial crisis, which was not the finest moment
for the forecasting community. It wasn’t just individuals
who failed to predict the
crisis, but the same was true of big
institutions like the IMF, and the OECD, and the Federal
Reserve, and so on. Then there’s the
area which I work in now, mostly, of biology. When the Human Genome
Project gave us access to the book of
life, it turned out
although this has completely changed all sorts of areas of
the way that biology is done, the cost of developing
successful drugs has actually soared and
now costs over a billion dollars to develop a new
drug and we still can’t predict the spread
of pandemics like avian flu or swine flu. So what’s going on here? Why is it that,
despite the resources we’re putting into these
things, we’re not really getting much better at
predicting the future, and are these problems related? And I think they are. And I think they’re connected
to our rather mechanistic way of thinking about the world. So I’m just going to start by
giving a very kind of brief run through the history
of prediction, or our Western
tradition of prediction, and then talk in a bit
more detail about how we make predictions
in these three areas. OK, so our Western
tradition of forecasting started with this person,
the Oracle at Delphi, and the Pythia acted
as a channel for Apollo, so divination means
inspired by a god. And for the Pythia
the main requirement was enthusiasmos, which
didn’t mean enthusiasm. It meant actually being
possessed by a god, the god Apollo, the god of prediction. And the Oracle would be
consulted on everything from minor matters of health to
the major questions of state, such as whether
you should go to war. And she was a woman. She’d be sitting here on a
three-legged stool known as a tripod, and it’s said that she inhaled
the vapors from a cleft in the mountain, perhaps
ethylene gas which put her into kind of a trance. And her predictions were,
perhaps as a result, often kind of vague
or even double sided, but actually, perhaps
for this reason, because it was hard to show that
her predictions were actually false, this was the most
successful forecasting operation in history. It lasted for
almost 1,000 years. So one of her more
accurate predictions, or famous predictions,
was the birth of this guy, the philosopher
Pythagoras, who was considered a demigod by the Greeks. And he was named
after the Pythia and he was believed to be
fathered by the god Apollo, and he would go on to found
a school of prediction that was based on number. Actually, it amounted
to what you might call a
pseudo-religious cult, which was built around number. And the Pythagoreans
are best known for their theorem
about right triangles, but perhaps their
most important insight was the discovery that
music can be reduced, in a way, to numbers. So the notes which
harmonize well are related by simple
mathematical relationships, equations. So if you take a string, and
you pluck it, you get a note, and if you fret it exactly
halfway up and you pluck it, you get a note which
is an octave higher. And then it turns out that all
the notes which harmonize well are related by these
neat mathematical ratios. So this was a staggering
discovery at the time because music was considered the
most expressive and mysterious of art forms. So the idea that it could
be reduced to number implied that perhaps
everything, the universe itself, could be reduced to number. So the Pythagorean
philosophy was summarised by this
list of 10 opposites that I think is interesting
in a number of ways. It’s a little bit like yin yang. So the left hand column with
right, male, at rest, straight, and so would correspond to yang,
the right hand column to yin. But there is an
important difference, which is that the left
column was sort of considered good, the right column evil. So basically the
Pythagoreans tried to align themselves with the
qualities on the left hand side. And so instead of
being seen as sort of two opposites
that are connected, it was like polarizing
them in a way. And also some of
these pairs spell out something, which is a bit like
a kind of an aesthetic code, if you like, for science. So for example, one
versus plurality. This is the idea of unity,
and scientists are always looking for unified theories
which explain everything. Stability is at rest
versus in motion, we’ll see that a
lot of our models are based on ideas of stability
as well as linearity, straight versus crooked. Light versus darkness. Light is associated with
the light of reason, so rationality. And square versus oblong. This idea of symmetry you
know that something trying to make everything square
and simple as we’ll see also plays a very strong role
in our predictive models. OK, so, as I said, because of
their success at reducing music to number, they believed
that the movements of the entire cosmos could
be described similarly using numbers and
the idea of harmony. And the Greeks developed a
predictive model of the cosmos, and this was used
primarily for astrology. And this was based on two
assumptions, neither of which were right, but it
didn’t really matter. So the first one
was that everything moved around the Earth, and
the second was that everything moved in perfect circles. And the reason for
choosing circles was really primarily
aesthetic as Ptolemy said, “these circles are
strangers to disparities or disorders.” So this model worked
very well in an intuitive way for the stars, and
the sun, and so on. But for some of the planets,
it posed a problem because they
would go around, and then stop, and then apparently backtrack,
and then continue on their way. And so to get around this,
rather than ditch the circle, they just added more circles. So this is the
idea of epicycles, a smaller circle which moves
around a larger circle. So this model became the
official model of the church, and it remained
unquestioned for centuries right up until the Renaissance. So why was it so successful? Well, the main reason
was that it worked. It could predict
things like eclipses to a very high
degree of accuracy. And when you think
about this, this was a time when what
happened here on Earth was believed to be affected
and influenced by what was going on with the cosmos. This was a staggering
demonstration of the power of mathematics. So the geocentric
model, as I said, it became adopted
by the church and it was believed that
the individual planets, the celestial
bodies, played different notes. So the stars would play D,
Saturn would play the note C, Jupiter G, and so
on, and the result was a kind of harmony
which the Pythagoreans called the harmony of
the spheres, which was sort of a beautiful,
celestial music. And this idea that
the cosmos was based on this sort of beautiful
numerical order and harmony also implied that
human beings should try to align themselves with that. And so the Roman
architect Vitruvius said, “Without
symmetry and proportion, there can be no principles
in the design of any temple.” And so for this
reason, buildings were built around the
ideas of symmetry, and circles, and squares. And here, Vitruvius
actually tries to force the human body to fit
into the circle and the square. You notice that the limbs
are rather distended there. And his influence can
be seen in buildings such as the Pantheon
in Rome, with its geometrical proportions,
this motif of circles and squares, and later, this got
picked up again in the Renaissance by architects such as
Palladio in Italy and Inigo Jones in England. The similar interest in this
kind of classical geometry also infused other
areas such as art. So in the early
Renaissance, in “The Baptism
of Christ” by Piero della
realm and the circle represents the heavenly realm. And then with Leonardo da
Vinci something interesting happened here. Which is that, in his
version of Vitruvian Man you notice that the
circle and the square are no longer in perfect
alignment, so it’s like he shifted them. He’s broken symmetry,
and the reason is that he was basing his
paintings on close observation of human beings of
the natural world, and he just realized that
actually if you just shifted it slightly it gave a
much better result. And of course, this has gone
on to become an iconic image. And I think this
sort of pre-figured what happened in
science in a way, because those two
key assumptions of the Greek model both came
under question in the same way. So Copernicus proposed
that Earth went around the sun rather than
vice versa, and then Kepler showed that the
orbits were not circles but actually ellipses,
and he didn’t actually like that very much
because in his earlier work he’d believed that they were
all circles that were stacked together based on the
perfect solids of Pythagoras, but he realized that
ellipses actually gave a much better fit. And then Isaac Newton
came along and he derived his three laws of
motion and the law of gravity, and so this is
interesting because Kepler had shown that the orbits aren’t
circular, they’re elliptical, but Newton showed that
underlying that apparent asymmetry is a law which is very
elegant and symmetric according to all these principles. So this is the equation
here, F = Gm₁m₂/r², and basically it says it’s symmetric.
Earth attracts the moon, but the moon also attracts
the Earth in return, which is why we have tides. And immutability,
it doesn’t change. The law of gravity is
supposed to hold everywhere in the universe, subject
to Einstein’s adjustments, of course. Unification, this
law can be used to describe the
motion of the moon around the Earth
or a falling apple. And elegance, it does
all this in an amazingly compact and elegant form. There’s only one
unknown parameter here, the
gravitational constant. And it’s a very
simple equation, which can describe all of this
huge range of phenomena. So Newton believed
that matter was made up of these solid, massey,
hard particles, i.e. atoms. And so this laid out a template
which science has really continued to follow until
the present day, which is that to understand
or predict a system, you just need to break it down
into its constituent parts, figure out the equations
which govern them, and write them out as
equations, and solve, and so this is what we do. In something like weather,
we look at parcels of air and water vapor. In the economy, we look
at individuals or firms. In health, we look
at genes or proteins. And so this has been very
successful, obviously, in areas such as physics,
and chemistry, and so on. But how’s it doing with these
sorts of things, these things that we actually want to predict
on kind of a daily basis? So to answer that, I’m going to
look at each one of these three quickly in turn,
starting with weather. So the term weather forecast
was actually introduced by Robert Fitzroy, so he was the
captain of Darwin’s “Beagle.” He set up the first UK Met
office in the 1850s, and he defined it as follows: “Prophecies
or predictions, they are not. The term forecast is strictly
applicable to such an opinion as is the result of a scientific
combination and calculations.” So you can see he’s trying
to be very sort of detached, and objective, and
scientific here. And the reason was that
he was kind of caught between two different camps. On the one hand, he was in
competition with astrologers and people like
Zadkiels Almanac who were coming in making all
these flaky predictions. And then, on the
other hand, there was this scientific
establishment that was looking
down on his attempts to make these prophecies. And these pressures probably
played a role in Fitzroy taking his own life. But then during the first
World War, the mathematician Lewis Fry Richardson, he was
serving as an ambulanceman, and he managed to make the first
numerical weather forecast. He did this by taking some
observations from weather balloons, and
dividing the atmosphere up into a grid, basically, and then
using differential equations to solve the equations of
fluid flow for each region.
this while he was working as an ambulanceman,
and so he was only trying to predict what would
happen six hours ahead. So this is actually a
hindcast, not a forecast. You know, looking at past data. All of his calculations took
him six weeks to perform, and the results were
actually way out. But it was sort of a
proof of principle, if you like, that this kind
of thing could be done. He imagined this forecast
factory full of computers who would be people
with slide rules, and each person would be
responsible for a separate part of the globe, and there’d be
this central figure directing, like a conductor
of an orchestra. So Richardson’s
dream was eventually realized in the 1950s, but
of course the computers were punch card machines like
the US Army’s ENIAC computer. Soon, researchers were producing
complicated 3-D general circulation models,
which are loosely based on equations
of fluid flows or quasi-Newtonian equations,
and much of this research was actually funded by the
military because, of course, weather has always played
a very important part in military conflict. And there was a hope that
they would not just be able to predict but even
control the weather and turn it into a kind of weapon. But since the 1950s, despite
massive advances in computing (computers today,
the fastest ones, are about a million times
faster than ENIAC), in observational powers such
as weather satellites, and huge demands from industry
such as agriculture, weather forecasting
has certainly improved a great deal,
but it hasn’t really improved as much as people
thought it would back then. Predictions are good for
a few days, certainly, but things like for
precipitation or especially extreme events are
difficult to predict, and controlling the
weather is right out, although certain leaders (I
think Hugo Chavez in Venezuela) have claimed to
control the weather, and I think they do it in North
Korea or somewhere as well. I’m not sure. But anyway, in general,
it remains elusive. So why is it that the forecasts
haven’t improved more? Well in the 1960s, this idea of
the butterfly effect came out of MIT
here down the road. And I guess everyone’s probably
familiar with this theory, so the idea is that
the weather it’s an incredibly sensitive system
so perturbing it slightly, such as something like a
butterfly flapping its wings, can create a disturbance which
sort of ripples out and can create a storm on the
other side of the world. And so this became
popular in the media, but also in the forecasting
community, as an explanation
for the inaccuracy of weather forecasts. So actually, when I was
working on this area back in the late
’90s, the butterfly effect was sort of the default explanation
for why forecasts went wrong,
and there was even this thing called a perfect
model assumption, which sort of stuck in my mind. We will assume that
our numerical model is essentially perfect,
because all the error must be due to chaos. So on the one hand,
it was bad because it meant we couldn’t predict
exactly what was going to happen, but if the
model was correct, then you could still make
these probabilistic forecasts. So instead of just
running one forecast, you run lots of them
(it’s called ensemble forecasting) and then you can
do sort of statistical stuff and make a probabilistic
prediction. So it’s a little bit
like the Oracle at Delphi being a little bit vague
and making predictions which are a bit harder to validate. So when we looked into this,
we came to a conclusion. I’ve always thought that
the butterfly effect was a little bit of a
strange theory when you think about it, because
if you sort of flap your hand in front of your
face, you don’t really get the feeling that there’s
a disturbance rippling out from you, which is going
to go and complicate the job of some Brazilian
weather forecaster in two weeks’ time on the
other side of the world. What’s more likely is
that it’s just
rather quickly, even if you’re actually outside
and not in a room like this. And we came to the conclusion
that actually there’s a much simpler explanation,
if less attractive, which is just model error, which
is that the models cannot capture the full complexity of
the atmospheric system. So I think there are really
two main problems when you try to build a model
of one of these systems. So one is the idea of
emergent properties. So emergent properties. Local effects and complex
systems lead to emergent properties that by definition
cannot be reduced to simple physical laws. So this tends to happen when you
have small local effects which grow and then create an effect
at the larger scale, which you could not have predicted
without actually recreating the entire system in full. So an example of
this is clouds, which are formed when water
droplets locally coalesce around small particles such
as dust, or salt, or whatever in the atmosphere. So this is all sort
of local things, and there is no kind of
shortcut way of doing this. There’s no kind
of simple equation that you can come up with. All we can do is
kind of build models which capture the approximate
amount of cloudiness which we will expect, given certain
atmospheric conditions and so on. But the formation and
dissipation of clouds is one of the most challenging
aspects of weather or climate forecasting, and the reason is
that these sort of Newtonian equations, they
just don’t work when you get into that situation. The other problem
is complex systems such as the atmosphere
are dominated by opposing positive and
negative feedback loops. So, feedback. An example of this
is clouds again. So heat increases water
vapor due to evaporation, and then this
increases cloud cover, which cools the
atmosphere, so that’s negative feedback on heat. Except at night, when
it does the opposite, because the heat is
not allowed to escape, so it actually heats
the atmosphere. So that’s a positive
feedback effect. And these forces are
in a delicate balance which makes models
sensitive to small changes. So there are similar problems
when modeling and predicting other kinds of complex systems. So in biological
systems, for example, if you look at what’s
actually going on in a typical biological system,
in your own body or whatever, it’s just kind of a
morass of feedback loops. So for example,
positive feedback loops allow a rapid response. Something like blood clotting
for example, something that has to happen quickly and
have a lot of positive feedback driving it. And yet, at the same
time, these things are coupled with negative
feedback that provides control. In the stock market,
as we’ll see, you get positive
feedback from people like momentum buyers versus
value investors, who tend to act as negative feedback. So I think that the Greek
philosopher Heraclitus, who was a contemporary
of Pythagoras, put it best when he talked about this
harmony of opposite tensions. So it means that beneath
apparent stability, where it looks like
nothing is happening, there are actually strong
forces going on underneath. And that means
that the situation can change suddenly and
wildly, as in earthquakes, extreme weather or climate
events, or something like financial crashes,
which we’ll come to next when we talk about economics. OK. So in economics, the tools
used for predicting the economy actually followed a
rather similar path to what happened in meteorology. Our orthodox or
neoclassical theory was developed in
the 19th century. It was explicitly inspired by
Newton’s rational mechanics. But was the economy
really like a machine? Did people behave like atoms? Well, Newton didn’t
think so, as he said after he lost
most of his fortune in the collapse of the
South Sea Bubble, which was the economic
crisis of his time, “I can calculate the
motions of heavenly bodies, but not the madness of people.” But economists, being
eternal optimists, decided to press ahead
anyway, and they came up with this theory,
neoclassical economics. So here, the atoms
of the economy are individuals or
firms, and they’re assumed to act
independently and rationally to maximize their own utility. This led to the idea of
the rational economic man, and everybody knows that
this is a caricature, and no one really believes
it’s completely true, but it has been a very
influential caricature. It’s kind of the epitome of
Nietzsche’s Apollonian principle
where it’s basically just kind of shrinking
everything down to the needs of the self. And then the idea is
that the actions
of all of these self-interested
individuals, through the invisible
hand, drive the economy to a stable equilibrium
societal happiness. And these assumptions were used
to build general equilibrium models, just as
with weather models, but if you look at the equations
for general equilibrium, obviously they talk about
these flows and equilibrium conditions. It’s sort of the same idea that
you have with a weather model, but instead of flows of air
or water, it’s flows of money. But it turned out to be even
harder to predict the economy than it is to
predict the weather. The graph here
shows this: the solid line is changes in gross domestic
product in the United States year to year, and
predictions are the lighter lines from
the Energy Information Administration. And really these,
kinds of predictions are not much better than
just making a kind of a naive forecast, which would
be to sort of take the average over
the last few years and just do something
very simple. And this isn’t just
the case for the EIA. It’s the same for
other forecasters as well, such as IMF and so on. So very difficult to do. And then, in the 1960s,
again, as with chaos theory, as with the butterfly effect,
the efficient market hypothesis became a popular explanation
for the inaccuracy of economic forecasts. So this is a
physics-inspired theory, and basically it assumes that
the price system is perfect. So changes are due
to random news. So, for that reason,
because they’re random, they can’t be
predicted in any way. So it’s just sort of a
physics-inspired theory where you think of something
that’s completely stable and it’s just getting randomly
perturbed every now and then. So prices are perfect and
they instantaneously adjust. And there are different
versions of the efficient market hypothesis, but the
basic idea is that the system is magically self-correcting. Again, rather like
the butterfly effect, which is putting these
rather magical properties onto the atmosphere, here, the
efficient market hypothesis is putting rather magical
properties on the economy. Because anyone who
works in engineering
difficult to build any kind of a system that
self-corrects all the time, that’s always perfectly
stable and can handle any perturbation. So this is a rather
remarkable property that we’re ascribing
to the economy here. So the market
cannot be predicted, but as with the
weather forecasting, it is possible to make
probabilistic predictions. OK? So the idea is that
you can calculate risk based on the normal
distribution, or variants thereof, the bell curve,
and the normal distribution has similar mathematical
properties to a square, “normal” being from the Latin
for square, or the circle, and that the risk
of something can be reduced to a single number. So basically what you
do is, you look at, let’s say, an asset like a stock
fluctuation price up and down, and then by looking at
how much it has fluctuated over a certain time
period, you can ascribe a standard
deviation to that, and that is one number
which tells you how risky that asset is in theory. And this technique formed
what Alan Greenspan called an intellectual
edifice, which included all sorts of things
like Black-Scholes, the Gaussian copula, and Value at Risk. All of these sort
of risk formulas which are kind of based
on versions of this. And this all collapsed during
the recent financial crisis when it turned out that
these risk models just simply did not work. They grossly underestimated
the chance of extreme events. So why is this, then? I mean, one theory which
Greenspan mentioned after this quote
is that they just weren’t trained on enough data. That they were
trained, when you were looking at these fluctuations,
that you were just looking over a period
when everything had been kind of calm, and
nice, and positive, and that was the problem. But could it be that
actually the problem is much deeper
than that and goes right to the core of the
way that when we think these sort of Apollonian
assumptions, if you like, that we’re putting into
our forecasting models? So I’ll just go through
some of these sort of gold-plated ideas, such
as stability, symmetry, and rationality, and so on
that we find all the time. So stability, first of all. Our theory assumes
that the invisible hand drives the economy to
a stable equilibrium, but is the market ever stable? And if you just
look at something like the price of gold,
which has been in the news recently because it
was falling so quickly, I mean, it was certainly
stable, obviously, up until the late
’60s, because it was being controlled by government. But then when it was
released to market forces, this invisible hand
here seems to have a bad case of the
shakes, because it’s wobbling around all
over the place, right? And this is something
that makes sense if you sort of forget about
the idea of stability, because stability
is actually a very special property of a system. As I said, it’s not
easy to get stability, and it tends to be
something that’s produced by negative feedback. But in something like
an asset such as gold, positive feedback is
incredibly important. Because if you think about
something like gold, the reason you’re really
buying it, often, is because it’s a store
of value, and when it’s going up in value, people
get excited, and they buy more, and that drives the
price up further. So that’s a positive
feedback, and then you hit a kind of a tipping
point, and the same thing happens on the way down, and you get this characteristic boom-bust behavior. And it’s often
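That feedback loop can be caricatured with a toy model (all the numbers are invented): price growth feeds on positive sentiment until a tipping point flips it, producing repeated booms and busts instead of a settled equilibrium.

```python
# Toy positive-feedback price model: the price grows while sentiment is
# positive, flips at a high tipping point, and flips back when over-sold.
price, momentum = 100.0, 1.0
history = []
for step in range(120):
    history.append(price)
    price *= 1 + 0.02 * momentum   # positive feedback on recent direction
    if price > 180:                # tipping point: sentiment turns negative
        momentum = -1.0
    elif price < 80:               # over-sold: sentiment turns positive again
        momentum = 1.0
# The series booms, busts, and booms again rather than settling down.
```

The model is deliberately silly, but it makes the structural point: a system dominated by positive feedback cycles instead of converging.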
sort of a question whether the price of gold
is in a bubble or not, but it’s not really
the right question because it’s always
in a bubble, you know? With gold, there is no number which tells you how much gold is worth. There’s no number written on it anywhere. It’s worth what people think it is. It’s just how much humans value it, partly for its beauty, but fundamentally as a store of wealth, and that’s a psychological construct, a sociological construct. So it’s a bubble. And I don’t mean to imply that it’s overvalued all the time, but it is more like a bubble
which is constantly deflating, and inflating,
and deflating, and so on. So it’s going up and down. So this kind of boom-bust behavior is characteristic not just of gold, which you could argue doesn’t have a strong sense of intrinsic value, but it’s also true of
the housing market, as is well known now, and
even something like oil. I mean, oil is often called
the lifeblood of the economy, but back in 2008,
it looked like we were having a cardiac event
with this massive spike. So here, again, the solid
line is the oil price, and then the dash lines here
are the forecasts, again, from the Energy
Information Administration. I’m not trying to pick on them. It’s that, for forecasters, they
do a good job of supplying data from their past forecasts,
which isn’t always the case. But it’s interesting to note
that back in the early ’80s, they were predicting
a price spike, and I think this is probably
a memory of the 1970s price shocks. And then when there wasn’t,
the forecast sort of learned that wasn’t going
to happen, it got flatter, and flatter, and
flatter, until then there was this massive spike and
it plunged back down again. All right. So this kind of
unpredictability of something like gold or
something like oil is superficially consistent
with the efficient market hypothesis. And it is frequently kind of trotted out as a sort of proof, you know, that it must be true. But really, this isn’t the case. I mean, the fact that
something is unpredictable does not mean that
it’s efficient. I mean, snowstorms are
unpredictable, but no one says they’re efficient. The risk calculations which the efficient market hypothesis is used to make also
assume the price variations are small, random,
and independent, and therefore
normally distributed, or at least nearly
normally distributed. But it turns out that, if you
actually just look at the data, the price fluctuations follow
a completely different sort of distribution, known as a scale-free power-law distribution. One implication of this is that large events can occur, and risks can be much bigger, as in September 2008, when it was nearly lights
out for the entire economy. There’s this quote from Goldman
Sachs around that period: “We were seeing things that were 25 standard deviation moves several days in a row.” Well, a 25 standard
deviation move is something that
should not happen once in the lifetime of the universe. So if that’s happening
several days in a row, then you know there’s something
wrong with the modeling approach. OK. So what is the scale free
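To put that quote in numbers: under a normal distribution, the two-sided tail probability of a move beyond z standard deviations is erfc(z/√2), which collapses absurdly fast as z grows.

```python
import math

def two_sided_tail(z: float) -> float:
    # P(|X| > z standard deviations) for a normally distributed quantity.
    return math.erfc(z / math.sqrt(2))

three_sigma = two_sided_tail(3)        # about 0.0027: rare but unremarkable
twenty_five_sigma = two_sided_tail(25) # around 1e-137: "never", in effect
```

If a model assigns probability on the order of 1e-137 to events it is observing several days in a row, the sensible conclusion is that the distributional assumption, not the world, is wrong.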
power law distribution? Well, a good way to see that
is on the surface of the moon. They don’t have
financial crashes there, but they do have
craters, and it turns out that they follow exactly the
same kind of distribution. So if you just imagine that
each one of these craters is some sort of price shock
in the stock market, what you notice is that
it’s very asymmetrical. So most price changes
are small, but there’s a number of bigger
ones, and then there’s the possibility of
this absolutely huge one in the middle. Scale free refers to the fact
that there is no natural scale. It doesn’t really make sense
to talk about an average crater size; it’s not a meaningful concept. And the normal distribution reflects whatever we think is normally expected, but what is the normal crater? I don’t know. And for that reason,
when you look at it, you can’t get a handle
on how big this is. I mean, how big are these craters? How big is that one in the middle? It could be any size. What’s the unit? And the only way you can do
that is by overlaying something. In this case,
Europe, and then you can see that the big
crater in the middle is actually the size of England. So that would be the Great Depression of craters. And the thing is
that extreme events are part of the landscape. These things like financial crashes, which seem to come out of nowhere, are not aberrations. They’re actually
part of the system. Another system which shows this sort of power-law statistics is earthquakes. There are always sort of small tremors going on, which are undetected, and then a smaller number of earthquakes which you can actually feel. And then, again, the very
small number of extreme events. The similarity with financial
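The point that power-law systems have no meaningful average can be made concrete by sampling from a Pareto distribution (the exponent here is chosen for illustration): with a heavy tail, the running average never settles down the way it would for a normal distribution, because rare huge draws keep dominating the sum.

```python
import random

random.seed(1)

def pareto_sample(alpha: float) -> float:
    # Inverse-CDF sample from a Pareto distribution with minimum value 1.
    return (1.0 - random.random()) ** (-1.0 / alpha)

# With alpha close to 1 the tail is very heavy: the sample mean keeps
# jumping around as the sample grows, instead of converging.
averages = []
for n in (100, 10_000, 1_000_000):
    xs = [pareto_sample(1.1) for _ in range(n)]
    averages.append(sum(xs) / n)
```

Crater sizes, earthquake magnitudes, and market crashes all behave more like these samples than like draws from a bell curve.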
crashes goes even deeper. The left plot there shows
fluctuations in the S&P 500 around the financial crash. One of Lehman’s employees said it was like a massive earthquake. And this is the acceleration in the ground shocks during the Kobe earthquake. So you can see that there’s
quite a degree of similarity. So back to these
assumptions, then. The theory assumes that people act independently and make rational decisions
to optimize their own utility. The proof of market
equilibrium actually assumes infinite
computational capacity. Then there’s rational
expectations theory, which has been hugely influential. So it’s a bit like
we’re modeling ourselves as if we’re the all-seeing eye, which adorns the back
of every US dollar bill with this sort of amazing
power to understand the future. But, of course, the truth is
a bit more like this, right? As we know, as behavioral economists and other people have been pointing out, we’re influenced in our decisions by trust, by confidence, by fear, by emotions of all kinds. And we’re trying to
fit this kind of thing with our circles
and our squares, but it’s not really working. Neoclassical theory was
really based on the idea, and it was a very noble cause, actually, because the idea was to optimize happiness, and we assumed that the
economy would realize “the maximum energy of
pleasure, the Divine love of the universe.” But what we’ve
seen from the data is that the gross domestic
product in the United States, for example, and rich
countries in general, has increased greatly
in recent decades, but reported happiness levels
and happiness levels measured in various ways show
that happiness actually is being fairly static or
even slightly declining over that period. And there are some reasons
which are put forward for that. And one reason, which
certainly plays a role, is there’s kind of a
saturation effect, which is that once you’ve made above
a certain amount of money, then you get a limited extra
pleasure from getting more than that. But maybe there’s
another thing as well, which is that we have perhaps internalized some of these aspects
of this economic model. All the stuff about optimizing
or maximizing our utility, and being rational and
selfish in our decisions, when in fact, we know that
what makes you happy is often the stuff
to do with community, and doing things
for other people. It’s often the case
that it can make you happier to buy
something for somebody else than to buy it for
yourself, and so on. And so it could be that
actually this theory of ours is making us unhappy. We’re not very
good at predicting how to make ourselves happy. Just as individuals are treated
as being completely rational, the economy is believed to be
governed by rational objective laws. But this doesn’t
take into account what George Soros
calls reflexivity of the economy, which is that– I’ll give you some examples. There’s this quote from Douglas
Adams, the British humorist. “There is a theory which states
that if ever anybody discovers exactly what the universe
is for and why it is here, it will instantly disappear and
be replaced by something even more bizarre and inexplicable. There is another theory which
states that this has already happened.” So the laws of physics are
obviously not reflexive. I mean, when Newton
discovered the law of gravity, it didn’t change
just to annoy him. But the economy is. Economic models
changed the economy. So as an example, risk
models drastically underestimate the chances
of extreme events. This makes banks
take on more risk, and this makes extreme
events more likely. So the models are
affecting the reality. Banks use the same
models, so when volatility exceeds a threshold, they
all give a sell signal at the same time. So this goes back to
the Gretzky quote. Methods only work if
not everyone uses them at the same time. Someone needs to play
the puck where it is. Because we think the economy
is stable and self-regulating, regulators relax the
rules and this makes the economy even more unstable. So the main problem with
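The reflexive feedback just described, where shared risk models synchronize selling, can be sketched with a toy simulation (the threshold and trade sizes are invented): feeding identical random shocks through a market with and without a shared volatility trigger, the shared trigger always produces an equal or deeper worst drawdown.

```python
import random

def simulate(shared_trigger: bool, seed: int = 7) -> float:
    # Random-walk price; when recent volatility exceeds a threshold, every
    # bank using the same model sells at once, adding to the downward move.
    random.seed(seed)
    price, worst = 100.0, 100.0
    recent = [0.0] * 5
    for _ in range(200):
        shock = random.gauss(0.0, 1.0)
        vol = (sum(r * r for r in recent) / len(recent)) ** 0.5
        if shared_trigger and vol > 1.5:
            shock -= 2.0          # synchronized selling amplifies the drop
        price += shock
        recent = recent[1:] + [shock]
        worst = min(worst, price)
    return worst

calm = simulate(False)   # no shared trigger
herd = simulate(True)    # everyone reacts to the same signal
# For identical underlying shocks, herd <= calm: the shared model
# deepens the worst loss, which is the reflexivity point in miniature.
```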
our mathematical models of the economy is,
it’s not that they failed to predict the crisis. It’s that they actually
helped make it happen. OK. So just to summarize a bit here. Orthodox theory and
forecasting tools are based on these gold-plated
ideas of stability, symmetry, order, and logic, which go
back to the ancient Greeks. We model people as
if they are rational and can see the future. We model the economy
as if it obeyed the harmony of the spheres. You can almost hear the
celestial music playing in the background. But it’s far more
wild than that. I think you could say
the same about something like the climate system as well. It’s much more wild than
our models allow for. And I feel that we’re
doing the same thing that the Greeks were doing. We’re imposing our
ideas of order, and stability, and rationality
on to the universe. But there is one
difference, which is that the Greeks could
predict when the lights were going to go out, right? We can’t do that. Our models don’t have that
degree of empirical validity. So what do we do? Normally in science, when
science is working well, it proceeds by changing a theory
only if another theory comes along that makes
better predictions. But there’s a problem. What if the system is
inherently unpredictable? What happens then? And I think this is one
reason why we end up with things such
as the butterfly effect, or the efficient market hypothesis, or theorems like this,
because they explain away the unpredictability
while retaining the power of the model. They’re not really questioning
the model’s ability to make probabilistic
forecasts, for example. But to move beyond that, do
we need another Newton, or just a new aesthetic? Paul Krugman said, “The
economics profession went astray because economists
as a group mistook beauty, clad in impressive-looking
mathematics, for truth.” And I think the
same could be said for other branches
of science as well. There’s something to
be learned from that. So I’m going to talk now
a bit about predictions, how ideas are coming from
areas in the life sciences, and how these
techniques, I think, can be useful for
other areas as well. And there’s a shift
in perspective from seeing the world
as a machine to seeing the world as a living organism. So what are the differences? Complex organic systems, from
a living cell, to society, to the Earth’s atmosphere, are
characterized by these emergent properties which emerge from
local effects and cannot be reduced to simple equations. So examples are things such
as clouds, or social behavior. Systems operate not at stability or equilibrium, but at a state that is better described as far from equilibrium
in the sense that the contents are
constantly being turned around. Power-law statistics
are the signature of systems that are operating
far from equilibrium, which is the sort of statistics
which are exhibited here. Network dynamics as opposed
to individualistic atomic dynamics. And then there
are these opposing positive and negative
feedback loops, which create an internal dynamic tension. And all of this results
in inherent uncertainty so that mechanistic
models which attempt to capture these
effects tend to be highly unstable and
sensitive to small changes. And this is what we see with
things like climate models, or biological models, or
financial models as well. So a number of new mathematical
techniques– or new-ish, they’ve been around for a while
now– have been developed to visualize, understand,
and in some cases predict these sorts of systems. So examples are network
theory, nonlinear dynamics, agent-based models–
and I’ll give a couple of examples of
these– and then also insights from other areas. So this is an example of what we
can learn by studying networks. So this is a protein-protein
interaction network for genes related to different diseases. And this is by a team
of systems biologists. And the nodes here
represent proteins, and lines represent
interactions, which means that these
proteins interact somehow to form a complex, or one
regulates the other, whatever. And the color coding here
indicates the disease which these proteins play a role in. And then you can use this kind
of visualization technique to zero in on how
diseases might be related as well as what
proteins might be a promising target for a drug. Here’s the same
technique applied in a rather different context,
the world economic system, financial systems. So here this is a network of
43,000 transnational companies, and here, the nodes represent
companies and the lines represent direct or
indirect ownership links. So this is sort of a
snapshot of the power structure of the economy. So this is quite revealing
in a lot of ways. So, for example, it shows
that the power structure is very asymmetric. The top 1% controls over 40% of the power in the system. This is related to the fact that the connectivity of these nodes follows power-law statistics: a small number are very highly connected, while most have very few connections. A network effect
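That kind of asymmetry arises naturally in growing networks. A toy preferential-attachment model, a sketch rather than the cited study's actual data, shows a small elite of nodes capturing a disproportionate share of the connections simply because well-connected nodes attract new links.

```python
import random

random.seed(0)

# Grow a network where each new node links to an existing node chosen
# in proportion to its degree (preferential attachment).
degree = {0: 1, 1: 1}       # start from a single edge between nodes 0 and 1
targets = [0, 1]            # node i appears degree[i] times in this list
for new_node in range(2, 2000):
    partner = random.choice(targets)   # degree-proportional choice
    degree[new_node] = 1
    degree[partner] += 1
    targets.extend([new_node, partner])

# Share of all connection-ends held by the top 1% most-connected nodes.
ranked = sorted(degree.values(), reverse=True)
top_share = sum(ranked[:20]) / sum(ranked)   # 20 nodes = top 1% of 2000
# top_share comes out far larger than the 1% an even spread would give.
```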
means the network is very highly connected. And normally, you
might think this is a good thing, because
it means that communication is made easy. But it can actually lead to
problems, such as contagion. So that means that a
financial disturbance in one part of the world
can quickly ripple around and affect the rest. In fact, this data was compiled shortly before Lehman’s went bust, and Lehman’s was ranked maybe 20-something, I can’t remember, and it just shows how fragile or susceptible the system was to one of these nodes being knocked out. And in nature, ecosystems, for example, tend to be built up of smaller sub-networks which are less closely connected. It’s a dynamic system. It’s changing all the time. So if you look over
the last 20, 30 years, the degree of connectivity
has increased greatly. And there’s a high degree of instability here, as shown by the Lehman’s incident. But the other thing that’s kind of built into this is that, if you look at the top companies, which are down here, most of them, I think Barclays was one, and most of the others, are banks or something in the financial sector. So these are all
operating in what you would really call
the virtual economy. They’re not normal
companies that are making or selling stuff. They’re companies which are
doing things with money. They’re creating
credit, they’re dealing in fancy financial derivatives,
credit default swaps, or things like this. And all of this is
inherently unstable. Because it’s fine, basically,
when the economy’s going up and everything’s
going wonderfully. Everyone’s making lots of money. But then when things
start to go badly, the problem is everyone wants
to cash in their virtual chips at the same time, and
that leads to a crash. Back to the idea of prediction,
how can we make predictions about these things? And I think perhaps one
of the main insights is that prediction and modeling
are not the same thing. So normally, the way we go about
predicting a complex system is just to be
very logical about it, and reduce it to its parts, and
include absolutely everything in the model. And if something isn’t
included, then someone will point that out, and
you have to put it back in, and you end up with a very
large, cumbersome model of a system, be it of the
climate, or biological system, or whatever. But actually, what happens in
practice is that, as these models become more complicated, the number of unknown parameters– the
number of things that you’re basically
just guessing– increases. And then there’s also
the fact that, as you become more realistic, you start to include all of these nonlinear feedback loops, which are very, very hard in practice to get right, and they make the model very, very unstable
and sensitive to small changes. And empirically– there have been studies such as the M3-Competition– what seems to hold quite generally is that simple models tend to
be better at prediction, perhaps just because, if you have
a very complicated model, you can always fit whatever
historical data you want, but they’re not very
good at prediction. A simple model will
do a less good job of predicting what
happened in the past, but tends to be robust
and reliable for making predictions. And so the idea is to make
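The simple-beats-complicated point can be illustrated with a classic overfitting toy (the data here are invented): a degree-five polynomial through six noisy points has exactly zero in-sample error but extrapolates wildly, while the plain line y = x predicts far better out of sample.

```python
# Noisy observations of the simple underlying process y = x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 1.3, 1.8, 3.2, 3.9, 5.1]

def complex_model(x: float) -> float:
    # Lagrange interpolation: the degree-5 polynomial through all 6 points,
    # so its error on the historical data is exactly zero.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def simple_model(x: float) -> float:
    return x   # the "sophisticatedly simple" guess: y = x

# Out of sample, the true process gives y = 8 at x = 8.
err_complex = abs(complex_model(8.0) - 8.0)   # blows up past 100
err_simple = abs(simple_model(8.0) - 8.0)     # essentially zero
```

The complicated model "explains" the past perfectly and the future terribly, which is roughly what the forecasting competitions found.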
what Arnold Zellner called sophisticatedly simple models. This is an example of
an agent-based model, and it’s from a company
that I work with. And it’s a representation
of a tumor. And so each of these circles
represents a cluster of cells. And then what you do is,
you follow these cells as they grow, and
divide, and die, and then you get kind of
an emergent behavior which captures reasonably well
what happens in a tumor. But the model at the level
of the individual cells is kept deliberately simple. Really, as much is left out as possible in order to keep the number of parameters small, so that you can actually fit this thing to the data. But while prediction is possible
for some systems, in a way, it’s almost a question
of having– There are pockets of predictability. There are things
which are predictable, but we also need to acknowledge
the uncertainty of living systems, and I think the medical
analogy is appropriate here. So doctors routinely
use their experience to make statements about
health, but they generally avoid making precise
predictions like, you’re going to have a heart
attack in exactly 37 days, or something like that. And I think in the same way,
we can use models and data to improve system health. I mean, the main
problem with the world is not that it’s unpredictable,
but that it’s very unstable, as shown by these things
like the oil price shocks, and credit crunches, and so on. Yeah. I guess the main question is,
if we can’t predict something like an economic crisis, how are we going to predict an environmental crisis? So can we predict the exact
timing of the next crisis or opportunity? No. Can we use available tools
to prepare ourselves better for it? Yes. I think so. And one way to do
that is to compare these different systems, see
what you can learn that way. So if you were to ask which
of these is better regulated, well, the financial
system is obviously not terribly well regulated. Human cells are normally amongst
the most regulated of systems. They’re regulated for
absolutely everything. This is actually a
prostate cancer cell which has escaped
regulation, and therefore has become a threat. And then there’s
the biosphere, which has been around for a
couple billion years, although we are now starting
to care about it ourselves, and so we can learn
a lot from that. OK. Just to summarize quickly. So, our mental models,
in art or science, shape the way we experience
reality, and therefore shape the world we live in. The Greeks believed
that the cosmos was ruled by
mathematical harmonies, and for centuries,
this was reflected in their art and architecture. And so there are these ideas
of unity, stability, linearity, rationality, symmetry. In the Renaissance,
there was a shift towards realism, perspective,
and objectivity, and this helped pave the way for
today’s scientific models. However, many of these sort of
classical ideals were retained. And today, I think we’re
finding that this reductionist approach does have
its limitations, and a new approach to
modeling is emerging, which views the world more as a living
organism than as a machine, and then this is coupled
with a new aesthetic, which finds beauty in the
complexity of life rather than the elegance
of symmetry and geometry. So I think that’s going to be
more Frank Gehry than Mies van der Rohe. OK. Thanks very much. [APPLAUSE]
