Tuesday, April 11, 2017

Market complexity also makes for instability

Since the financial crisis of 2008, an explosion of research has aimed to understand what makes financial markets prone to sporadic crises. The potential sources of trouble are many, including debt and leverage, financial concentration and the problem of “too big to fail,” as well as perverse incentives for bankers to take on large risks. Markets go wrong in any of a thousand ways, and, unfortunately, it seems that understanding each one requires intimate familiarity with the fine details of the financial architecture, contracts, legal regulations, individual incentives and so on.

Yet a narrow focus on details can distract attention from profound similarities. Network scientists know that the topology of a network – the pattern of links or relationships that hold it together – can have a decisive influence on its properties. In the context of financial networks, new research suggests that subtle changes in network topology may be the key to understanding a common pathway by which financial markets become unstable. For all the forbidding complexity of the modern financial system, the researchers suggest, instability tends to follow from the emergence of particular cycles or closed circuits of dependence within the network topology, as these tend to amplify disturbances or distress.

I'll explain the basic logic of the work briefly below. It's a theoretical paper, and not meant as a recipe for detailed practical policy. But it does help clarify a basic mechanism that drives instability, and offers broad insights into the kinds of policies that could avoid it.

First, a little background. Some of the motivation for this work comes from the history of thinking in ecology. Back in the 1970s, ecologists widely believed that the stability of an ecosystem would generally be enhanced by increasing complexity, as reflected in the presence of a large number of interactions between a diverse set of species. But the theoretical ecologist Robert May overturned this intuition, at least partly, by showing in simplified network models of food webs that complexity can in some cases undermine stability. His analysis indicated that networks with a larger number of interactions could be less stable, and inspired ecologists to begin searching for possible new factors that might account for ecosystem stability – for example, the presence of specific topological motifs within food webs.

Just after the financial crisis, May – formerly the Chief Scientific Adviser to the UK government – joined with the Bank of England's Andrew Haldane in arguing that this insight is relevant to the stability of financial systems as well. Financial networks have grown enormously more complex in the past 30 years, and, as May and Haldane noted, the pre-crisis literature in economics and finance mostly viewed this as a good thing. Traditional thinking held that more complexity, achieved through a wider spectrum of financial instruments, greater diversification and wider spreading of risks, would improve stability. Yet May and Haldane pointed to a handful of studies, mostly from the preceding decade, linking rising complexity with increasing instability.

Six years later, this idea that too much complexity can cause trouble is becoming less “radical,” although the story remains unsatisfyingly complicated. The models offered as examples tend to include fairly intricate details of how financial institutions interact – particulars of contracts, for example, or mechanisms for resolving debt defaults. Do such details always play a decisive role? Or is there a simple and general story about how changes in network topology create instability that stands above the details?

This is the question asked – and answered, in the affirmative – by this paper. Marco Bardoscia and colleagues study a class of models of the interbank network, and probe the stability of the network as they vary two parameters characterizing its complexity. These are 1) market integration, reflecting the number of banks participating in the financial system, and 2) diversification, referring to the proliferation of financial contracts among them. Importantly, the study doesn't test stability in the usual way, by running stress tests and estimating the total losses likely to result from some assumed shocks to the system. That approach requires specific assumptions about the nature of the financial contracts and the mechanisms of distress propagation, making it difficult to draw general conclusions.

Instead, Bardoscia and colleagues study how gradual changes in the interconnection pathways of the network can create mechanisms that amplify small disturbances rather than damping them out. For example, the figure below illustrates how the network goes from stable to unstable purely through gradual diversification, normally thought of as beneficial for risk management. It shows a network eigenvalue λmax reflecting whether the propagation dynamics of the network dampen (λmax < 1) or amplify (λmax > 1) small disturbances. The researchers used the balance sheets of the top 50 listed banks in the European Union as a starting point, and then simulated a process in which banks gradually increase their degree of diversification by creating further exposures to additional counterparties. They carefully rebalanced the network at each stage to keep the assets and liabilities consistent with the original balance sheets, and the interbank leverages of all banks fixed. As the degree of diversification increases, a bank's exposures spread out across ever more counterparties. Even though the total interbank exposure of each bank remains constant, the banking system eventually goes unstable, and it doesn't even take a lot of diversification to make it happen. As the figure shows, instability first arises when contracts link together just 3% of all the pairs that could in principle be linked.

This example illustrates the transition from stability to instability as complexity increases through diversification. The paper also establishes that a similar transition takes place if complexity rises merely through an increase in the number of banks.
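The flavor of this result is easy to reproduce in a toy model. The sketch below is my own illustration, not the authors' actual model (which starts from real EU balance sheets): the 50-bank size, the interbank leverage of 1.1, and the random-link rule are all illustrative assumptions. It builds a random matrix of interbank exposures, holds every bank's total interbank leverage fixed while spreading it over more counterparties, and computes λmax as the density of links rises.

```python
import numpy as np

rng = np.random.default_rng(42)

def lambda_max(n_banks, density, interbank_leverage=1.1):
    """Spectral radius of a toy matrix of interbank leverages.

    Each bank's total interbank exposure, relative to its equity, is held
    fixed at `interbank_leverage` and spread evenly over a random set of
    counterparties; `density` is the fraction of possible directed links.
    """
    links = rng.random((n_banks, n_banks)) < density
    np.fill_diagonal(links, False)               # no exposure to oneself
    matrix = links.astype(float)
    row_sums = matrix.sum(axis=1, keepdims=True)
    lending = row_sums[:, 0] > 0
    # rebalance so every lending bank keeps the same total interbank leverage
    matrix[lending] *= interbank_leverage / row_sums[lending]
    return np.abs(np.linalg.eigvals(matrix)).max()

# sweep diversification at fixed system size
for density in (0.005, 0.01, 0.02, 0.03, 0.05, 0.10):
    lam = lambda_max(50, density)
    print(f"link density {density:.3f}  lambda_max = {lam:.2f}  "
          f"({'unstable' if lam > 1 else 'stable'})")
```

With these made-up numbers, the mechanism is visible directly: a very sparse directed network contains almost no closed cycles, so λmax sits near zero; as links proliferate, cycles form and λmax climbs toward the typical interbank leverage, crossing the critical value of 1 at a link density of just a few percent.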

The conclusion is that more complex and highly networked markets should generically tend toward instability. A financial system can go unstable as the number of banks increases, or as the number of contracts among banks increases, even if the leverage of individual banks does not increase. In either case, instability appears as a holistic, network effect, even though each bank individually has an unchanging risk profile. The implication: financial policies that seem wise from the point of view of the risks to individual banks can actually – and counter-intuitively – increase the financial instability of the system as a whole.

The paper also goes into some detail on the origin of such instability, which lies in the fact that, in both processes, banks become increasingly involved in multiple cycles (i.e. closed chains) of contracts. This is an interesting technical detail that I won't get into, although such factors might well prove useful as targets for monitoring by authorities. In any event, it's clear that systemic risk cannot be reduced through the measures that standard economics has long believed to reduce risk. Bank proliferation and diversification, if excessive, can create worse problems than they solve.

Sunday, July 10, 2016

The (un)Wisdom of Crowds

Does the Wisdom of Crowds work for elections? Should we think that the British Brexit vote, because it was a free vote put to the people, was not only democracy in action, but also a wise way for a nation to make such a decision?

I've touched on this matter in my latest Bloomberg View column, drawing on a new study of group decision making by researchers from the Santa Fe Institute and the Max Planck Institute for Human Development in Berlin. This study asks under what conditions we should expect larger crowds to make better decisions, and finds that, in general, this is only the case when the problems being faced are relatively easy -- so that any individual has a greater than 50% chance of getting the right answer. When problems are difficult, the wisdom of crowds tends to fail, and small groups make better decisions.

Most importantly, they find that if the problems faced by a group come in an unpredictable mixture of easy and hard -- the more realistic case -- then the best decisions are made by groups of fairly small size, ranging from 10 to 40 or so. This insight doesn't apply directly to the UK referendum, which didn't necessarily have a right or wrong answer, but I think it does back up the view that a referendum is an extremely crude means for a nation to decide such a complex matter as whether or not to stay in the EU. As the researchers point out, many democratic decision-making bodies around the world -- from juries to town councils to parliamentary committees -- make decisions with a fairly small number of people, usually from 5 to 40. There may be good wisdom in this.
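The arithmetic behind this is just majority voting over binomial trials. Here's a minimal sketch of the logic -- the accuracy figures (right 65% of the time on easy questions, 45% on hard ones) and the 50/50 mixture are my own illustrative choices, not the paper's calibration, which is more sophisticated:

```python
from math import comb

def majority_correct(n, p):
    """P(majority of n independent voters is right | each is right with prob p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def mixed_accuracy(n, p_easy=0.65, p_hard=0.45, share_easy=0.5):
    """Expected accuracy over a mix of easy and hard questions (odd n)."""
    return (share_easy * majority_correct(n, p_easy)
            + (1 - share_easy) * majority_correct(n, p_hard))

for n in (1, 5, 11, 21, 41, 101, 501):
    print(f"group size {n:3d}: expected accuracy {mixed_accuracy(n):.3f}")
```

Large crowds nail the easy questions but get the hard ones reliably wrong, since majority voting amplifies any individual tendency below 50%; with these numbers, expected accuracy peaks at a modest group size and then declines as the crowd grows.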

And maybe it suggests that the best path forward for the UK is for parliament to weigh the decision to leave the EU using all its resources, without being constrained in advance by the referendum result. Whether it ultimately decides to leave or to stay, that too would be democracy in action.

The Bloomberg thing is here.

Wednesday, February 17, 2016

Improve technology -- and still use more stuff overall??


A while back I had a brief argument with Paul Krugman and some other economists over economic growth and the future of the planet. It's common knowledge, of course, that human use of materials has grown over time -- globally, we now use more steel, plastic, glass, oil, water, etc. than ever before. We use more energy than ever before, our agriculture puts more phosphorous and nitrogen into the oceans than ever before, and these trends toward greater usage of physical stuff of every kind continue. All of this puts pressure on a variety of planetary processes on which we rely, and which we threaten at our own peril.

As economies grow, I argued, they inevitably use more physical stuff, even if advanced economies do shift increasingly to providing services. Hence, economic growth (at least GDP growth as we presently know it) will need to be limited if we're to avoid over-burdening the planet, and to preserve its ability to support us in an acceptable environment.

The economists countered that my argument reflected a physical scientist's typical misunderstanding of economic growth. Growth needn't involve more "physical stuff," they argued, but could use less physical stuff over time as technology makes us more efficient, while still generating ever more economic value. This is the idea of "de-coupling" between economic growth and physical materials use. I agreed with that point in principle, but noted that -- for all economists' faith -- this hasn't happened so far, and we have no evidence that it will happen or even can happen. Economic growth is still closely associated with increasing consumption of physical materials and energy. Why should we think it won't be in the future?

That was then, and we dropped the argument (although I just noticed that Tim Worstall at Forbes took one more swipe at me, and deeply misunderstood my point). But some new research offers an update on the story. It comes from engineers who have looked at the best data they could find on technologies and technological development over the past half century, and asked whether these technologies -- which have generally improved at an exponential rate on many measures, including efficiency -- have led to a decreased use of materials and energy. Their paper is here, and the short answer they give is "no." I've written a Bloomberg View piece on the paper, and I'll just quote a short section:

Two engineers, Christopher Magee of the Massachusetts Institute of Technology and Tessaleno Devezas of the University of Beira Interior in Portugal, looked at two sets of data covering 116 different technologies existing between 1940 and 2010, ranging from the chemical industry and electronics to metals, wood and energy. Almost every technology over this period shows exponential improvement (though at different rates) in prices, performance and efficiency of energy and material use. Over 20 years up to 2009, for example, the price of photovoltaics consistently dropped about 10 percent per year.

The improvements weren’t enough, though, to outpace the combination of population growth, economic expansion and the rebound effect. As a result, overall material use tended to increase: Those photovoltaics, for example, consumed about 13 percent more materials each year.

To be sure, the data are far from perfect. Information on many of the 116 technologies exists over intervals of only one or two decades. Still, the fact that none of the data fit the usual story of decoupling suggests that the concept is at the very least highly questionable. The six exceptions were all technologies for producing substances such as asbestos, mercury and thallium -- toxic materials that were ultimately controlled by policy intervention and legal restriction.

The results don’t imply that humans won’t ever achieve decoupling. They simply suggest that the historical record so far isn’t encouraging, and that there’s no reason to expect it to happen on its own.
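The raw arithmetic of those two compounding trends is worth seeing side by side; the rates below come straight from the quoted figures:

```python
years = 20
price_factor = 0.90 ** years       # price falling ~10 percent per year
materials_factor = 1.13 ** years   # material use growing ~13 percent per year

print(f"after {years} years, price is {price_factor:.2f}x the original")
print(f"after {years} years, material use is {materials_factor:.1f}x the original")
```

A 13 percent annual rise swamps a 10 percent annual fall: over two decades, the price drops to about an eighth of its starting value, while material use grows more than elevenfold.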

I don't think this is the end of any argument; just more information to consider.

One final comment on the Forbes Worstall post, which I've just now seen. He suggests that I was arguing, here, that economic growth will eventually have to end because we will face "hard limits" to available energy. I'm not sure where he got that; it's not anywhere in my article. I don't think we're ever going to run out of energy, at least not for a very long time. I've even gone to some lengths to examine how much energy is available from solar sources (it's immense).

My argument was never that we will run out of energy, but that we will be forced, by deteriorating environmental conditions, to reduce and restrict how much energy we use. The energy we use always ends up in the environment, altering its chemical makeup, the nature of its flows, and even its temperature. There's no getting around it; this is thermodynamics. And the effects, over time, are not small. Of course, somewhere along the way, even if we do restrict our energy use to some safe level, we might be able to eke out a bit more growth and extra GDP by improving energy efficiency, but that will come to an end too -- there are limits to efficiency. Once we've reached maximum efficiency in our technologies, we'll be limited in how much we can do.

Worstall suggests we might have another "13 millennia of exponential growth" before running into any problems, but this is a vast misunderstanding. See physicist Tom Murphy's famous post, in which he estimates how long continued exponential growth in energy use -- along the trajectory we've seen over the past few centuries -- would take to make the oceans boil due to waste heat. It's not 13 millennia, and not even close. It's about 400 years.
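Murphy's estimate is easy to reproduce in rough outline. Treat the Earth as a simple radiator obeying the Stefan-Boltzmann law, ignore greenhouse effects, and ask when human power output, growing at Murphy's assumed 2.3 percent per year, adds enough waste heat on top of absorbed sunlight to push the surface to 100°C. The round numbers below are my own choices, close to his, and land within about 15 percent of his 400-year figure:

```python
import math

SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W/m^2/K^4
AREA = 4 * math.pi * (6.371e6)**2    # Earth's surface area, m^2

solar_flux = 239.0                   # absorbed sunlight, W/m^2 (albedo ~0.3)
boiling_flux = SIGMA * 373.15**4     # outgoing flux from a 100 C surface

extra_power = (boiling_flux - solar_flux) * AREA   # waste heat required, W

p0 = 13e12      # current human power use, roughly 13 TW
growth = 0.023  # 2.3% per year, i.e. 10x per century (Murphy's assumption)

years = math.log(extra_power / p0) / math.log(1 + growth)
print(f"waste heat needed to boil the surface: {extra_power:.1e} W")
print(f"years of {growth:.1%} growth to get there: {years:.0f}")
```

The precise year isn't the point; the point is that the timescale for exponential energy growth to hit hard thermodynamic trouble is centuries, not millennia.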

My point in all of this, of course, is not to predict with precision the situation we will face at that future point. I don't know any more about it than the economists do. The point is that, according to the current data, the rosy picture economists often paint about near-term decoupling looks mostly like wishful thinking.

Tuesday, February 16, 2016

Economic growth -- vastly slower than we thought (maybe)



You'd think that by now we'd have a pretty good empirical understanding of how economies grow, i.e. what the normal pattern of growth through time is. We've been studying economies for a couple of centuries, and have had reasonably good numbers for the (crude) measure of GDP for half a century. In economics -- and among the financial media generally -- it's almost beyond question that the normal pattern of economic growth is exponential. Indeed, almost no one ever supposes it might be different; we only debate how fast or slow recent exponential growth has been.

But we might be wrong, especially for mature economies. That's the conclusion of some recent research by a team of European economists and statisticians who looked at the data on 18 mature economies from 1960 onwards. They find that the best fit to the data isn't exponential at all, but linear, suggesting that if growth was ever exponential (in young economies), it isn't like that any more.
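The statistical question at the heart of this is simple to pose: given a GDP series, does a straight line or an exponential fit better? Here's a minimal version of the comparison, run on synthetic data since I'm not reproducing the researchers' dataset, and their actual tests are more careful than ordinary least squares on logs:

```python
import numpy as np

# synthetic GDP-like series: linear trend plus noise, a stand-in for real data
rng = np.random.default_rng(0)
t = np.arange(1960, 2016)
gdp = 1000.0 + 40.0 * (t - 1960) + rng.normal(0, 60, t.size)

# linear model: gdp = a + b*t, fit by ordinary least squares
lin_coef = np.polyfit(t, gdp, 1)
lin_resid = gdp - np.polyval(lin_coef, t)

# exponential model: log(gdp) = a + b*t, i.e. constant-rate growth
exp_coef = np.polyfit(t, np.log(gdp), 1)
exp_resid = gdp - np.exp(np.polyval(exp_coef, t))

print(f"linear fit,      RMS residual: {np.sqrt(np.mean(lin_resid**2)):.1f}")
print(f"exponential fit, RMS residual: {np.sqrt(np.mean(exp_resid**2)):.1f}")
```

On data generated with linear growth, the straight line wins by a wide margin, and that -- according to this line of research -- is also the verdict for real GDP series from mature economies.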

I wrote a piece in Bloomberg about this a couple weeks ago. I wasn't aware of this line of work, but apparently a handful of (mostly) German economists have been pointing to this evidence for nearly twenty years.

It would hardly be surprising, of course, if human economies -- like individual people, cells, bacterial colonies, trees and everything else alive -- turn out to have natural stages of growth, with fast growth eventually slowing toward something more gradual and, eventually, stopping altogether (which wouldn't imply the end of change, just some kind of balance). Current ideas in economics might then need considerable rethinking.

But that wouldn't surprise me either.

Monday, February 1, 2016

Shifting view

Hey, I've changed the title of this blog!

Why? Because I'm going to shift its perspective a little. As all of you will know, I've (mostly) stopped blogging here in the past 4-6 months. Reason? Because much of what interests me now has no immediate link to "finance," and so "physics of finance" doesn't seem quite right. I'd like to eliminate this psychological barrier (for myself). 

So, expect more posts, but maybe on different topics.

Cheers

Mark

Monday, May 18, 2015

Warfare isn't getting less likely -- conclusions from a new analysis



Is warfare getting less likely? Are we entering a new and more peaceful era of history? Quite a few people -- Steven Pinker and Niall Ferguson among them -- have suggested as much. But the mathematics of war statistics over the past 2,000 years doesn't back up the idea. At Bloomberg, I have a short piece describing some excellent new work by Pasquale Cirillo and Nassim Taleb. Also, more detail over at Medium.

Wednesday, May 6, 2015

To save the world -- give up on nature?


Here's a radical idea -- we can best safeguard the future of humanity not by learning to live with nature, but by turning our backs on her and learning to do without her. We should use our technology and science to isolate ourselves from nature, so that we no longer depend on her. In so doing, we can also lift the burden we place on nature, and preserve her as well.

Does that sound crazy? A little, I think, but it is a creative idea and maybe some tempered and humble version of it isn't so crazy. Perhaps with better science and technology we can learn to help humans while also, to some large degree, eliminating our impacts on nature and carving out some safe space for her. That, at least, is the provocative idea suggested in The Ecomodernist Manifesto, a document published recently by folks from The Breakthrough Institute.

Many things in this manifesto seem just a little too optimistic to me -- the authors appear to suggest that we're already reducing our impact on nature, for example, even as we cause the sixth great mass extinction in the planet's history -- but it is worth reading. We do need more creative thinking. I've written more at Bloomberg.